Medical Physics. 2016 Jun 15;43(7):4209–4262. doi: 10.1118/1.4947547

The report of Task Group 100 of the AAPM: Application of risk analysis methods to radiation therapy quality management

M Saiful Huq 1,a), Benedick A Fraass 2, Peter B Dunscombe 3, John P Gibbons Jr 4, Geoffrey S Ibbott 5, Arno J Mundt 6, Sasa Mutic 7, Jatinder R Palta 8, Frank Rath 9, Bruce R Thomadsen 10, Jeffrey F Williamson 11, Ellen D Yorke 12
PMCID: PMC4985013  PMID: 27370140

Abstract

The increasing complexity of modern radiation therapy planning and delivery challenges traditional prescriptive quality management (QM) methods, including many of those in guidelines published by organizations such as the AAPM, ASTRO, ACR, ESTRO, and IAEA. These prescriptive guidelines have traditionally focused on monitoring all aspects of the functional performance of radiotherapy (RT) equipment by comparing parameters against tolerances set at strict but achievable values. Many errors that occur in radiation oncology are not due to failures in devices and software; rather they are failures in workflow and process. A systematic understanding of the likelihood and clinical impact of possible failures throughout a course of radiotherapy is needed to direct limited QM resources efficiently to produce maximum safety and quality of patient care. Task Group 100 of the AAPM has taken a broad view of these issues and has developed a framework for designing QM activities, based on estimates of the probability of identified failures and their clinical outcome through the RT planning and delivery process. The Task Group has chosen a specific radiotherapy process required for “intensity modulated radiation therapy (IMRT)” as a case study. The goal of this work is to apply modern risk-based analysis techniques to this complex RT process in order to demonstrate to the RT community that such techniques may help identify more effective and efficient ways to enhance the safety and quality of our treatment processes. The task group generated by consensus an example quality management program strategy for the IMRT process performed at the institution of one of the authors. This report describes the methodology and nomenclature developed, presents the process maps, FMEAs, fault trees, and QM programs developed, and makes suggestions on how this information could be used in the clinic. The development and implementation of risk-assessment techniques will make radiation therapy safer and more efficient.

Keywords: process mapping, FMEA, FTA, risk-based-QM program

1. PREFACE

1.A. Guide to readers and regulators on use of the Task Group-100 report

This report of task group (TG) 100 on application of risk analysis methods to radiation therapy quality management (QM) is very different from most AAPM task group reports and therefore should be read and used in a different way than most task group reports. This preface addresses those differences by describing the general goals of the report, suggesting ways to read and use the report, and making comments on the use of the TG-100 report by regulators and regulations. The importance of reading and understanding the preface to this report cannot be overemphasized because the concepts and application of these concepts differ in important ways from previous task group reports and use of the methodology contrary to the principles discussed in this preface could lead to greater hazard rather than increased quality and safety.

1.A.1. Developing prospective approaches to radiotherapy quality management

Prescriptive approaches to technical quality management have served cancer patients well over the hundred-year history of radiotherapy. With a cancer incidence rate in excess of 1.6 × 10⁶ per year in North America and estimated radiotherapy utilization rates of 50% for new cases and 20% for retreatment, approximately a million courses of radiotherapy are delivered per year in the USA. The vast majority of these are delivered safely and with considerable benefit to the patient. However, as a community, we must continue to search for ways to improve the quality and safety of the treatments we offer. Traditionally, quality improvement in our specialty has been driven largely by new technological advances, and safety improvement has been driven by reactive responses to past system failures. Clearly there is a synchronization problem here. The strategies presented in this TG-100 report provide a mechanism to enhance quality and safety, both for new as well as for established technologies and processes. It is imperative that we explore such a paradigm shift at this time, when expectations from patients as well as providers are rising while available resources are falling.

Prescriptive approaches to technical quality management, such as those promulgated by the AAPM and other professional organizations, will continue to play a role in the future. The development of these cornerstone documents has been based on the consensus opinion of experts in the field. However, with the adoption of prospective quality management techniques (techniques for designing safe clinical workflows in advance of their use) proposed by TG 100, such as failure modes and effects analysis (FMEA), we can envisage a future in which such technical quality management documents are informed by a more rigorous, although still subjective, analysis of the technology involved and of the causes and consequences of suboptimal performance. Familiarity with the prospective error management techniques discussed in this report will facilitate the transition to quality control protocols that are weighted towards those tests or workflows that may be more effective in preserving the safety of the patient and will potentially enhance clinical outcomes.

This report presents a change in approach. Until recently, the emphasis in radiotherapy quality management, particularly by the Medical Physics community, has been on the technical performance of radiotherapy equipment. In recent years, however, there has been increasing recognition that a major source of quality and safety impairment arises from weakness or variability in radiotherapy processes. Whereas, for example, there are a limited number of linear accelerator designs, there is very little standardization of processes between radiotherapy clinics. The high degree of commonality between Linac designs lends itself to the development of more or less generic machine quality control protocols, which, therefore, can be prescriptive. The wide variability in processes requires a much higher degree of customization that has to be carried out by those with intimate knowledge of the processes themselves. The techniques described in this document constitute a structured methodology for analyzing clinical processes and for developing clinic- and site-specific quality management programs that more effectively and efficiently address work practices in individual clinics. Process mapping, failure modes and effects analysis, and fault tree analysis will assume more central roles in workflow design as we strive for greater safety and enhanced quality through the optimization of clinical processes. In other highly technical and highly regulated industries, such as nuclear power, prospective analyses, including these three techniques, have been an important component of facility design and operation.

Technology in radiotherapy is advancing at a pace that shows no sign of abating. Profession-wide, consensus-driven approaches to the maintenance of quality and safety in a rapidly changing landscape inevitably entail a time lag between the implementation of new technologies and the approved quality control protocols that should accompany them. Prescriptive approaches alone to quality management often do not address the huge variety of process and technique improvements and developments that help Radiation Oncology continually improve patient care. The prospective tools discussed in this document accommodate not only clinic-to-clinic variability in risk profile, but also provide a methodology for adapting a clinic’s quality and safety program to changes in technology and patient care. Use of these tools may produce a QM program that will save time, but, more likely, it will provide guidance enabling each program to direct resources toward achieving quality and safety in radiotherapy more effectively.

1.A.2. Reading and using the TG-100 report

As already described, the TG-100 report is quite different from most AAPM task group reports on quality assurance (QA). How a medical physicist should read and use the report is therefore also different. The major change is that the report attempts to teach a whole new way of thinking about the quality and safety needs of the radiotherapy planning and delivery process, and to propose a prospective and process-based analysis of the quality management needs of the radiotherapy process. The report describes (1) the rationale for prospective risk analysis; (2) how to perform process- and clinic-specific risk analysis and quality management program formulation; and (3) a detailed sample application of the method applied to a generic IMRT process.

While a typical AAPM Task Group report can often be used as a reference1–7 [for example, to look up the frequency for checks of the leaf position accuracy (IMRT) in Table V of Task Group 142 (Ref. 1)], the TG-100 report should not be used in that way. The detailed example analysis and the QA program developed from that analysis are both based on a default process modeled after that from the institution of one of the authors, and are examples to help the readers understand how to develop their own analysis. While the report attempts to provide a detailed and realistic example program, it is not appropriate to adopt that program into one’s own clinic. The failures, ranking, analysis, and QM program may form the basis for each institution’s analysis and QM program, but individualization of each of the steps in the process is the key to creating an effective and efficient quality management program for each clinic. TG 100 will help guide readers through that process to an appropriate quality management program for their own clinic.

This Task group recommends that AAPM and other organizations assist clinics with the implementation process by:

  • Forming task groups that develop guidance for implementing prospective analysis methods for specific clinical processes.

  • Providing local workshops to train AAPM members on efficiently applying the TG-100 methodology.

  • Providing more in-depth training (for example, establishing a website with model FMEAs for various procedures as the analyses are developed, providing web-based training, and offering focused workshops such as the 2013 Summer School on quality and safety in radiation therapy8).

  • Providing competitive funding for clinics to develop showcase prospective risk assessment implementations. Receipt of funds could include a requirement that the clinic educate others about its FMEA/FTA and other prospective risk assessment implementations.

Many investigators have published their experience on the application of failure mode and effects analysis in a radiation oncology setting.9–30 Individual groups that successfully apply the TG-100 methodology should publish their work (e.g., see the paper by Ford et al.9).

Successful extension of the current prescriptive QA methods to include the more prospective and risk-based methods proposed by TG 100 will take considerable time and effort from all involved. However, the complexity and pace of technological improvements that bombard the field of radiation oncology require that we implement the proposed methodology if we are to maintain or improve the safety of the patients and the quality of their treatments as we work to cure their cancer.

1.A.3. Suggestions for regulators and regulations related to prospective radiotherapy quality management programs based on TG-100 recommendations

TG 100 presents a methodology for establishing a facility’s quality management program in which each facility determines the hazards and risks at its own facility based on its own processes and procedures. Other regulated industries utilize risk-based quality programs, and regulators have developed techniques to review these types of programs. Examples exist in the nuclear power and aviation industries. Risk-based quality programs do not exclusively employ prescriptive lists of checks. An important advantage of a risk-based approach is that each facility can direct resources most efficaciously towards patient safety and treatment quality as needed. This results in varying quality management procedures, which may pose a challenge for regulators. Regulators are invited to familiarize themselves with TG-100 principles, learn how to evaluate radiation therapy quality management programs developed using risk-based approaches, and determine whether the programs provide the expected measure of safety (see Sec. 1.A.4 for important guidance for following the methodology in this report).

Risk-based QM procedure design has been mandated in the United Kingdom for some time. In the U.S., beginning in 2001 the Joint Commission (then the Joint Commission on Accreditation of Healthcare Organizations) mandated that healthcare organizations perform one proactive risk assessment on a high-risk procedure each year, and while not mandating FMEA as the only approach, based on the accompanying discussion of intent and the booklet on compliance techniques, it was clear that the expectation was that facilities would use FMEA.31,32 To facilitate performance of risk-based analysis, the Commission published an instruction manual on FMEA (now in the third edition).33 The Joint Commission clearly intended quality management for high-risk procedures to be determined through risk assessment, and this has become common in healthcare.

Most radiological regulators are familiar with the prescriptive task group reports from the AAPM making recommendations for radiotherapy quality assurance. These reports, for example the reports of Task Group 40,2 and Task Group 142,1 present lists of items to check. Some of these reports have been incorporated into regulations in some states. They have provided useful frameworks against which regulators could evaluate clinical quality assurance programs, whether or not the AAPM reports are cited in regulations. The licensing branches will have to work with licensees in developing amendments that are consistent with the proposed risk-based quality management methods and the transition to these new methods.

Members of TG 100, as well as many other investigators, have found that effective process QM requires active collaboration among all members of the radiation oncology team, including physicians, therapists, nurses, dosimetrists, and administrators as well as physicists. This report will contribute to a broad discussion among stakeholders on the design and implementation of radiation oncology QM programs. The goal of this report is to provide information and guidance to facilitate application of these methods in clinical practice. It is emphatically not intended for prescriptive or regulatory purposes.

1.A.4. Important guidance in following the methodology in this report

In establishing risk-based quality management, these guidelines should be followed:

  • 1.

    Do not make sudden, major changes in your quality program. Any differences between the QA program derived from the TG-100 methodology and the conventional QA recommended by task group reports or other guidance documents that would lead to deletion of QA steps need to be considered and supported very carefully, and discussed with experts familiar with both the conventional QA and the TG-100 methodology. Compliance with regulation must be maintained regardless of any analysis.

  • 2.
    Start with a small project. Doing so serves several purposes.
    • First, it gives an opportunity to become accustomed to the techniques on a manageable scale.
    • Second, a small project has a higher chance of being completed while all involved are enthusiastic, and a successful completion of the first project will engender greater support for future projects.
    • Third, a small beginning project can provide experience that can help select subsequent projects. For many facilities, there never has to be a large project, just a series of small projects.
    • Fourth, processes are dynamic, changing over time. Over the duration of a large project the process under review may change.
  • 3.

    Critical facets of treatment should have redundancy. Redundancy gives protection against errors creeping into one of the systems.

  • 4.

    Risk-based QM is likely used in other parts of a hospital or clinic. The quality department may be able to provide assistance with early projects.

1.A.5. Highlighted recommendations to the AAPM to facilitate the use of the TG-100 methodology

  • 1.

    The AAPM should provide guidance to regulators for evaluating quality management programs in radiotherapy facilities. This guidance should be developed by a panel of experts including some members of TG 100 and the Conference of Radiation Control Program Directors (CRCPD). This guidance and the original TG-100 document should be disseminated to the rule-making, enforcement, and licensing units of all state and Federal radiation control agencies.

  • 2.

    The AAPM should give in-depth educational presentations on the new methodology for regulators at meetings of the CRCPD and of the Organization of Agreement States.

  • 3.

    The AAPM should establish a repository on its website for sample quality management programs that regulators could use to become familiar with what such programs would look like.

More recommendations to the AAPM are in the body of this report.

2. CHARGE AND SCOPE OF THE REPORT

Assuring the accuracy, efficacy, and safety of the physical aspects of radiation treatment is the major responsibility of the clinical medical physicist and one for which publications from the American Association of Physicists in Medicine [AAPM–Task Group (TG)-40,2 TG-43,3 TG-45,7 TG-53,4 TG-56,5 TG-51,6 TG-142 (Ref. 1)] and other professional societies34–39 provide continuing updated guidance. In general, these documents focus on device-specific evaluations—assessing the functional performance of radiotherapy equipment by measuring specific parameters at specified frequencies with tolerances set at strict but achievable values. However, since the 1994 publication of the AAPM Task Group Report No. 40,2 technological advances have greatly expanded the complexity of radiation therapy; there is a shortage of resources to deal with this ever-increasing complexity. Furthermore, recent public disclosures of radiation therapy incidents with catastrophic outcomes40 have prompted growing appreciation of the need to improve safety measures in the clinic. A number of analyses of events in radiation therapy41–44 find that they are far more often caused by flaws in the overall process that takes a patient from initial consult through final treatment than by isolated hardware or treatment planning system calculation errors detectable by traditional physics quality assurance (QA).

TG 100 was initially formed to address the problems posed by ever increasing implementation of new advanced technologies and the need for more effective ways to design physics QA. The Task Group’s initial charge was:

  • 1.

    Review and critique the existing guidance from the AAPM in documents such as TG-40, 56, 59, 43, 60, 64, and guidance from ACR and ACMP reports on QA in Radiation Oncology, ESTRO report on QA in radiotherapy, IEC publications on functional performance of radiotherapy equipment, and finally ISO guidelines on quality management and quality assurance. The objective will be to determine the specific areas that have been omitted and need better coverage and also develop a suitable general quality assurance program.

  • 2.

    Identify a structured systematic QA program approach that balances patient safety and quality versus resources commonly available and strike a good balance between prescriptiveness and flexibility.

  • 3.

    After the identification of the hazard analysis for broad classes of radiotherapy procedures, develop the framework of the QA program.

Given the rapid development of new technologies and treatment methods, after discussion with the AAPM Therapy Physics Committee, it was decided that TG 100 would address only the second and third items of the charge. While many tools exist for such analyses, the task group selected three industrial engineering risk assessment and mitigation tools—process mapping, failure modes (FMs) and effects analysis (FMEA), and fault tree analysis (FTA)—because of their widespread acceptance in high reliability industries. Intensity modulated radiation therapy (IMRT) is used as an example application of these tools.

The TG’s report begins with a review of some issues associated with traditional approaches to quality management in radiation therapy (Sec. 3), followed by a brief description of terminology and some of the major quality improvement tools used in industry, including process mapping, FMEA, and FTA (Secs. 4 and 5). Description of a methodology for designing a QM program in radiation therapy is then given (Sec. 6). Comparison of these methods with previous work and suggestions for future research and development and summary recommendations are given in Secs. 7 and 8. Section 9 is an example application of the general methodology. Members of TG 100, as well as many other investigators, have found that effective process QM requires active collaboration among all members of the radiation oncology team, including physicians, therapists, nurses, dosimetrists, and administrators, as well as physicists. We hope that this report can contribute to a broad discussion among all these stakeholders related to how we can design and implement more effective process QM in radiation oncology. The goal of this report is to provide information and guidance to facilitate application of these methods to IMRT and other treatment modalities in individual clinical practices. It is emphatically not intended for prescriptive or regulatory purposes.

3. PROBLEMS WITH TRADITIONAL APPROACHES TO QUALITY MANAGEMENT IN RADIATION THERAPY

3.A. Need to address the treatment processes comprehensively

Conventional approaches to radiation therapy QM mandate checks, with associated tolerance levels and frequencies, for each device used on patients throughout their course of treatment. A major deficiency of this approach is its emphasis on device-specific QA at the expense of attention to errors related to inadequate process design, poor information flow, poor training and documentation, and poor matching of patient-specific checks against device vulnerabilities. Although many reported serious radiation therapy errors involve incorrect or inappropriate use of devices due to miscommunication or misunderstanding,44 traditional physics QA is generally focused elsewhere. While it is important that each device used in planning and delivering RT treatment should perform according to specifications and expectations, a better understanding of the interaction between the clinical processes, human users, individual devices, and impact of various “failures” on treatment outcome will help distribute resources more efficiently and effectively.

3.B. Excessive demand on physics resources

As treatment methods become more numerous, complex, and technologically intensive, the QM demands on medical physics resources continually grow. The recent update of the TG-40 QA requirements for Linacs by AAPM’s Task Group 142 (Ref. 1) increases the number of daily, monthly, and annual checks by over 60% each, mostly to account for technologies such as IMRT and on-board imaging that were not in clinical use in 1994.2 Medical physicists are required to maintain quality for existing technologies and develop procedures for safe and effective clinical implementation of new ones. They must perform acceptance testing and commissioning of the software and hardware used for treatment planning and delivery, establish QM programs for ongoing safe use of the devices, develop procedures that satisfy regulatory requirements, design and perform patient-specific tests to verify correct treatment delivery, and act as an educational resource to the general public and the radiation therapy community. These labor-intensive activities place a heavy demand on medical physicists. Yet the number and intensity of QM activities that an individual physicist can safely perform is limited by human performance abilities and the number of working hours in the day.10 Indeed, mental and physical overload have been linked to serious errors in many radiation therapy related incidents and accidents.45 It is thus desirable to consider new approaches to QM that are based on formal risk assessments and that may achieve improved quality and error reduction while providing guidelines for a better distribution of physics resources. Such approaches will identify “standard” QA activities whose frequency can be safely reduced and also identify areas where standard QA is inadequate. While the latter findings do not necessarily decrease the physicist’s workload, they provide a rationale for procurement of appropriate human and equipment resources.

3.C. Difficulty in developing a QM protocol that covers all permutations in clinical practice

Complexities in radiation therapy arise from the wide range of conditions treated, technologies used, and professional expertise needed. For example, there are currently more than seven IMRT delivery methods: “step-and-shoot,” “sliding window,” physical compensators, helical and serial TomoTherapy, and a variety of arc-based deliveries on conventional linear accelerators including constant and variable dose-rate “volumetric modulated arc therapy” methods. Details of IMRT treatments depend on disease site, department experience, technology available, and individual physician preference. This complexity is compounded by the multiple steps involved in the IMRT process, by intra- and interdepartmental dynamics, by the variety of physical tests and measurements that have been published, by the continual changes brought about by clinical outcomes research, and by the introduction of new technologies. It is a daunting (and likely impossible) task to develop a single QM protocol for the ever-widening range of possible treatment techniques and delivery equipment.

3.D. Delays in establishing accepted QM protocols for emerging technologies and associated processes

Professional organizations such as the AAPM work diligently to develop thoughtful and consensus-based QM protocols to deal with new clinical technologies. Unfortunately, the time scale required to develop consensus recommendations can be too long for a clinic that is under pressure to implement new therapeutic strategies when they become available for use by the broad clinical community. For such situations, the methods described by TG 100 will be helpful in developing safe and efficient processes and QM programs.

4. QUALITY AND SAFETY: AN OVERVIEW

4.A. Quality

The effects of failure to maintain quality range from clinically insignificant incidents [e.g., <5% (Refs. 46 and 47) variance in delivered dose compared with prescription] to catastrophic events [e.g., cases reported in The NY Times, NY (Ref. 40) leading to patient death]. The goal of a quality management program is to protect the patient from all such problems, though a feasible program is forced to concentrate on failures with detectable impact.

The term quality enters this discussion frequently. While often used in a general sense of “goodness,” a more precise definition is useful in risk assessment. Modifying the definition given by Juran only slightly,48 quality in radiotherapy consists of:

  • Those features which meet the needs of the patient, including rational medical, psychological, and economic goals while also taking into account the professional and economic needs of the caregivers and the institution.

  • A clinical process that is designed to realize cancer treatments that conform with nationally accepted standards of practice and specifications; and

  • Freedom from errors and mistakes.

Not meeting the desired level of quality is a failure. A specific process step can fail in different ways, each of which constitutes a failure mode. In discussing failure modes, as with quality, terms require more precise definition and use than in casual conversation. While even in the quality literature various definitions may be found, the following definitions have become widely accepted:

  • Errors—failures consisting of acts, either of commission (doing something that should not have been done) or omission (not doing something that should have been done), that incorrectly execute the intended action required by the process.

  • Mistakes—failures due to incorrect intentions or plans, such that even if executed as intended would not achieve the goal.

  • Violations—failures due to intentionally not following proper procedures, either as shortcuts taken with the intention of achieving the correct goal, or as sabotage.

  • Event—the entire scenario, including the failure itself and its propagation through the clinical process, resulting in a patient treatment of diminished quality.

  • Near event—a situation resulting from a failure that would have compromised quality of the patient’s treatment had it not been detected and corrected. Also known as close call, near miss, and good catch.

Failures may result from errors, mistakes, or violations. Many failures result in no detectable effects. Only when the effects rise to a detectable and significant level, which may happen months or even years after the failure, does the failure produce an event.

The causes of failures are usually complex and difficult to classify, but often contain components of human failure (mistakes or errors) and/or equipment failure. While not as easy to identify, organizational or design failures (called latent errors) refer to environmental, managerial, or organizational factors that cause human or equipment performance to deteriorate or increase the likelihood that such failures propagate into treatment. Examples of organizational failure include excessive workload; a noisy or distracting environment; or suboptimal access to information.

Sometimes failures occur despite the fact that medical electrical equipment design must address the concept of essential performance, which is defined by the International Electrotechnical Commission (IEC) as the performance necessary to achieve freedom from unacceptable risk (IEC 60601-1).49 While the design of systems can help minimize the frequency of failures, it cannot entirely prevent them. Risk management is the systematic application of management policies, procedures, and practices to the tasks of analyzing, evaluating, and controlling risk (IEC 60601-1).49 Risk assessment considers the way in which the quality of treatments can fail to achieve the desired goals. Quality management stands as the sentinel to protect the patient from the effects of failures.

4.B. Quality management: Components, functions, and tools

Quality management consists of all the activities designed to achieve the desired quality goals. According to Ford et al. quality management includes quality planning, quality control, quality assurance, and quality improvement.50 Two components of QM are the focus of this report: quality control (QC) and QA. Though many definitions for these concepts can be found in the literature, in this report we use the following:

  • QC encompasses procedures that force the desirable level of quality by48
    • evaluating the current status of a treatment parameter,
    • comparing the parameter with the desired value, and
    • acting on the difference to achieve the goal.
  • QA confirms the desired level of quality by demonstrating that the quality goals for a task or parameter are met.

Generally, QC works on the input to a process to make sure that everything that goes together in the process is correct, while QA assesses the correctness of the process output, as schematically shown in Fig. 1. A process, according to the IEC, is a set of inter-related resources and activities that transform inputs into outputs.49 Both QC and QA work to prevent bad outputs from passing out of a process. While an error in an input might result in a poor quality product, with QC in parallel with the input, both the input and its corresponding QC would have to fail for the failure to pass into the process. Similarly, were the process to result in a bad product, there would have to be a QA failure to allow it to propagate out of the process. Classifying a given activity as QC or QA can be complicated in most situations since the output from one process often becomes the input for the next.

FIG. 1.

Example of a fault tree. The figure shows a process with four inputs, each with QC to maintain the integrity of the process, and QA to provide confidence that the output of the process is correct. The red and green symbols represent “or” and “and” gates, respectively. Because an error in any of the four inputs can propagate into an error in the calculation, they all enter into the process through an or gate (red symbol). Parallel to each of the boxes indicating errors in the inputs are boxes indicating failure of QC associated with the process. Each of the “failure of QC” boxes enter an and gate (green symbol) with their respective error in input box. This indicates that for the error in the input to pass into the calculation process, there must be a concomitant failure of the QC that works on that input.

Both QC and QA interrupt the propagation of failures. In general, QC requires more resources than QA. In Fig. 1, preventing a process failure requires four QC activities, while a single QA activity can provide similar protection. However, identifying a failure during QC results in less wasted effort. A failure detected by QA requires investigation to determine its cause, followed by correction and repetition of the process with the corrected input. Thus, an efficient and robust QM program employs a mix of QC and QA, depending on variables like the time taken within the process, the number of inputs, and the probabilities of failures in the inputs. If QA frequently finds failures, resources ideally would be shifted to QC. If QA never (or very rarely) finds problems, the value of the QA step ought to be reconsidered.
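
To make the gate logic of Fig. 1 concrete, the short sketch below is a minimal illustration in Python; the error and miss probabilities are hypothetical assumptions chosen only to show how the “and” and “or” gates combine, not values taken from this report.

```python
# Minimal sketch of the AND/OR logic in Fig. 1 (illustrative numbers only).
# An input error reaches the process only if the error occurs AND its QC fails;
# the process output is wrong if ANY input path fails; the event escapes the
# process only if QA also fails to catch it.

def and_gate(*probabilities: float) -> float:
    """Probability that all independent parent events occur."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

def or_gate(*probabilities: float) -> float:
    """Probability that at least one independent parent event occurs."""
    result = 1.0
    for p in probabilities:
        result *= (1.0 - p)
    return 1.0 - result

# Hypothetical per-input error rates and QC miss rates (not TG-100 values).
input_error = [0.02, 0.01, 0.05, 0.01]
qc_miss     = [0.10, 0.10, 0.10, 0.10]
qa_miss     = 0.05

# Each input branch is an AND gate; the branches feed an OR gate.
bad_process_input = or_gate(*(and_gate(e, q) for e, q in zip(input_error, qc_miss)))

# A bad output escapes only if QA also fails (another AND gate).
undetected_event = and_gate(bad_process_input, qa_miss)
print(f"P(bad input reaches process) = {bad_process_input:.4f}")
print(f"P(undetected event)          = {undetected_event:.5f}")
```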

Quality audits comprise another important part of the QM program. A quality audit is an assessment of the clinical process by means of a manual or automated review of representative samples of treatment records that is independent of the usual process. While there are several types of quality audits, those of most interest in radiotherapy include process and product audits. A process audit reviews the processes used, while a product audit may review patient charts (for example) to see if all of the required physics procedures were completed and performed correctly. Quality audits, which are considered by AAPM Task Group 103 (Ref. 51), will not be discussed further in this report.

4.C. Reactive approaches to safety

Reactive approaches to safety are employed once a failure is identified, whether or not the effect of the failure penetrated through to a clinical treatment. The objective of a reactive approach is system improvement aimed at minimizing the risk of patient harm in the future due to a repetition of the particular observed failure mode. To reach this objective, it is necessary to identify the causes of the particular failure mode and, on the basis of these identified causes, to initiate appropriate changes in the procedures or the quality management program of the organization. This process of identifying the causes of the event constitutes a root cause analysis (RCA), which is a well-established approach to error management.52 As the term implies, the objective of an RCA is to trace the sequence of steps from the actual or potential clinical incident back to what started the chain of actions and conditions leading to the event. During the analysis contributing factors may also be identified.

An RCA takes the form of asking what and why at each decision point until the root cause(s) is identified. The RCA process should involve the entire treatment team to cover most effectively all perspectives and should include individuals close to the process or system in which the error occurred. Organizations that study approaches to quality and implement effective programs widely accept that a punitive or blaming culture is counterproductive as an error management strategy. An RCA therefore focuses on systems and processes rather than individual performance.

As well as charting the series of actions and observing the conditions that lead to the adverse event or near event, an RCA also involves an evaluation of the effectiveness of barriers and controls. Safety barriers, also known as critical control points, are any process steps whose primary function is to prevent errors or mistakes from occurring or propagating through the radiotherapy workflow. Conventionally, one would include barriers and controls as components of the QM program. Comprehensive incident learning systems can be built on the basis of an RCA (Refs. 44 and 53) and can formally include corrective actions and learning through feedback to the radiation therapy team.54 National incident reporting and learning systems have just become available for radiation oncology in the United States (https://www.astro.org/Clinical-Practice/Patient-Safety/ROILS/Intro.aspx; http://www.cars-pso.org).55,56 The systems assist clients to work through RCA, or can perform the analyses for the clients.

4.D. Prospective approaches to safety

As described in Sec. 4.C, RCA is a reactive QM tool that addresses failures that actually happened in an existing clinical process. By identifying root causes, process improvements can be proposed through the quality management program to minimize the probability of recurrence of the failure modes.9,57,58 In contrast, the goal of prospective risk analysis is to identify risky process steps before a failure happens. This is then followed by a design of a new process or modification of an existing process to reduce the likelihood that potential failures will occur or to increase the likelihood that they are detected before the desired outcome is compromised. The fundamental starting point of any prospective risk or hazard analysis is understanding the clinical process through the development of a process map, followed by comprehensive enumeration of potential failures that could occur at each step of the process. Typically, knowledge of such potential failures is derived from the expert team’s direct or shared experience of the process, including experience with RCA and other reactive QM tools. For failures outside of the treatment team’s knowledge base, tabulations of reported radiation therapy errors can be very useful.59

Prospective risk assessment is the process of analyzing the hazards involved in a process. Risk assessment tools are widely used to maintain quality in industry. Though there are differences between an industrial product development process and radiation therapy planning and delivery processes, there are also important similarities. Particularly in recent years, many studies have shown the benefits of risk-based industrial techniques to safety and quality in medical settings.60–63 The field of clinical pharmacy science, for example, has entirely revised its approach to quality management, in an attempt to decrease the number of prescription drug mistakes and errors, with impressive results.63 More recently, the process-oriented and risk-based analysis of emergency room procedures has been a major effort in the field of emergency medicine. The goal of these efforts is to establish an efficient program to maintain or improve quality in a reasoned and systematic manner without requiring ever-increasing resources for QM.

Although many risk assessment and process analysis techniques have been described in the literature (e.g., Kaizen, state analysis), this report employs widely used approaches and tools including: (1) Process mapping, (2) FMEA, (3) FTA, and (4) the creation of a QM program to mitigate the most important risks that were identified in the previous analyses. This approach and others appear to be directly adaptable to typical radiotherapy practices. Although clinics are not discouraged from using these other approaches, FMEA and FTA are highlighted in this report because the task group felt that this approach would be the most effective.

The first task in this risk-assessment approach is to describe and understand each step in the process. Any method that clarifies the processes, including a simple list of the steps, can be used. Process map trees or charts can be very helpful, as they graphically depict the relationship among steps in the process. Section 5.A describes the development of process mapping and shows the TG-100 process tree for the IMRT process considered in this report. Process charts are logical flow or organizational charts, and process maps are any other diagrammatic illustration of how a process works.

After delineating the process, the next step is to assess the potential risks involved in that process. TG 100 used FMEA, since it is a relatively straightforward technique that requires a short learning period. FMEA is discussed in detail in Sec. 5.B; it moves through the process and considers, for each step, what could fail, how it could fail, how likely the failure is to occur, how likely it is to go undetected, and what the effects of failure would be. The overall risk of each identified failure mode is then scored, so that these failure modes can be prioritized.

The third step in the overall analysis is to evaluate the propagation of failures using a fault tree analysis. We have chosen to use a fault tree that gives a visual representation of the propagation of failure in the procedure as it helps identify intervention strategies to mitigate the risks which have been identified (as described in Sec. 5.C).

To generalize, FMEA guides the development of failure-mode-specific quality management activities, while examination of the frequencies of progenitor causes, identified through the FTA, provides guidance on the relative importance of certain structural characteristics of a radiation treatment program.

Once the FTA has been completed, the final step is to determine how best to avoid the faults and risks that have been identified. This analysis is then used to craft a quality management program. A method for designing a QM program (Sec. 6) and an example application of this to IMRT will be discussed in Sec. 9.

5. TG-100 RISK ANALYSIS METHODOLOGY

TG 100 recommends a team-based approach that requires active participation of representatives from all treatment team member categories (physicians, physicists, dosimetrists, therapists, nurses, IT support, machine maintenance, administration, etc.). The team members contribute to the analysis of process steps and failure modes, especially those that involve their work. Because of variations in offered treatment techniques, available technology, physician training and preferences, staffing resources, regulatory environment, and other factors, each clinic is expected to have a unique process map, FMEA analysis, fault tree, and QM program. As shown later in this report, the FMEA is a risk-assessment tool that makes use of data, when available, as well as the experiences of people involved, so additional thought and analysis are required to address new techniques for which there are limited data and experience.

5.A. Process mapping

A process map (or chart or tree) is a convenient, visual illustration of the physical and temporal relationships between the different steps of a process that demonstrates the flow and inter-relationship of these steps from process start to end.

Figure 2 shows the TG-100 process tree that encompasses the major steps of the IMRT treatment process as agreed upon by the task group members, based on the process at the facility of one of the Task Group members. The trunk, which takes the patient from entry into the radiation oncology system through end of treatment, runs across the center of the tree. The main boughs, representing the major subprocesses, emerge in approximately chronological order. Further “branches” emerging from each bough detail the steps required in the subprocess represented by the branch; each branch may be further broken down into twigs and leaves, which describe finer details of the subprocess. The colored arrows show the flow of information or actual physical material from one major subprocess to another. For example, the purple arrows show how immobilization and positioning affect steps further downstream, the cyan arrows show the downstream flow of anatomic information, and the dark green arrows show the transfer of initial images. Each step in the process tree must be performed correctly for treatments to be successfully conducted. Developing and understanding the process tree are essential to performing FMEA and providing the physicist and other team members with an overview of the entire process that may otherwise be obscured by daily clinical tasks.

FIG. 2.

(a) An IMRT process tree; (b) magnified view of the initial treatment planning directive branch. The red numbers indicate the hazard ranking of the most hazardous 20%–25% of the steps, as identified by high risk priority number values. Steps with high severity hazards are shown in green. [See text and Sec. VIII (Ref. 64) for details.] A hazard is something that can cause harm. A risk is the chance, high or low, that any hazard will actually cause somebody harm.

When making a process tree, it is important to focus on the appropriate level of detail. Extreme detail obscures the flow and relationships. Too crude a map hides relationships and important steps. The decision about the scale of the tree is not irrevocable. The tree is only intended to be useful, and as it is used, steps can be added or the detail reduced until it becomes manageable and useful in understanding the process. Clearly, the whole radiotherapy team needs to be involved in deciding the key steps to be included in the process tree.
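
As one lightweight way to record a process map outside of a drawing tool, the sketch below (in Python, with hypothetical subprocess and step names; an actual tree must be built by the whole radiotherapy team for its own clinic) stores the boughs and branches as a nested structure that can be reviewed, extended, or pruned as the map matures.

```python
# Minimal sketch of a process tree as nested dictionaries (hypothetical
# subprocess and step names chosen only for illustration).
imrt_process = {
    "Immobilization and simulation": [
        "Select immobilization device",
        "Acquire planning CT",
    ],
    "Treatment planning": [
        "Contour targets and organs at risk",
        "Optimize IMRT plan",
        "Approve plan",
    ],
    "Treatment delivery": [
        "Set up patient",
        "Image guidance",
        "Deliver beams",
    ],
}

def print_tree(tree: dict) -> None:
    """Print subprocesses (boughs) and their steps (branches) in order."""
    for bough, branches in tree.items():
        print(bough)
        for step in branches:
            print(f"  - {step}")

print_tree(imrt_process)
```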

5.B. Failure modes and effects analysis

FMEA assesses the likelihood of failures in each step of a process and considers their impact on the final process outcome. FMEA has the goal of assessing everything that could possibly go wrong at each step—how likely a specified cause and resulting failure is to occur, how likely it is to be detected, and how severe its consequences might be. Failure at an individual step may have many potential causes and each failure may have a variety of consequences. For example, on the first treatment day, the patient may be positioned incorrectly relative to isocenter. This may be caused by potentially avoidable errors such as treatment machine or simulator laser misalignment, therapist error due to distraction, poor instructions or setup documentation in the chart, inadequate immobilization, or by events that are more difficult for caregivers to control such as organ motion or anatomical changes. The consequences of this positioning failure can range from negligible to severe, depending on the magnitude of the displacement from the planned isocenter, the treatment technique (stereotactic, conformal, or large fields), the proximity of critical structures, and when in the course of treatment the error is detected.

As mentioned earlier, an FMEA is prospective in that it includes the predictions of the institution’s experts of events that have not occurred. In many cases, the frequency of occurrence and the probability of their detection must be estimated from local “near events” or anecdotal reports of events or near events at other institutions. The TG-100 FMEA was also performed with the assumption that there were no specific QA/QC measures in place. The rationale for this concept may be difficult to grasp at first as there are established QM measures associated with most of the analyzed steps and it is tempting to estimate likelihood of failure based on an existing QM program. However, assuming the absence of these QA/QC measures when performing the FMEA allows for a systematic, ground-up redesign of a QM program without possible confusion arising from the presence of existing measures, which may be misplaced or ineffective. Therefore, all risk probability estimates in this report were performed assuming that there were no specific QA/QC measures in place.

There are various steps to complete when performing a quantitative FMEA. These include the following:

  • 1.

    Identification of as many potential failure modes as possible (ways in which a process step could fail) for each process step. Each process step can, and usually does, have several failure modes.

  • 2.

    Identification of as many potential causes as possible for each failure mode. Each failure mode can, and usually does, have several causes.

  • 3.

    Determination of the impact of each failure mode on the outcome of the process assuming that the situation in (2) is not detected and corrected during subsequent steps.

The TG-100 list of all failure modes, potential causes for each failure mode, and the impact of each failure mode on the outcome of different steps for the IMRT process is given in Appendixes C1–C3.141

For each failure mode, the multidisciplinary team performing the FMEA assigns numerical values to three parameters O, S, and D where

  • O (occurrence) describes the likelihood that a particular cause for the specified failure mode exists.

  • S (severity) describes the severity of the effect on the final process outcome resulting from the failure mode if it is not detected or corrected.

  • D (lack of detectability) describes the likelihood that the failure will not be detected in time to prevent an event. While past experience with QC or patient outcome studies might be available to guide the choice of the value for D, its selection will rely largely on expert opinion.

These three parameters are multiplied together to obtain a single quantitative metric called the risk priority number (RPN): RPN = O × S × D. RPN is a relative surrogate metric for the risk posed to the patient by undetected failures of the identified type. It increases monotonically with the probability of undetected occurrence (O × D) and the severity of its effects on the patient (S). The RPN values direct attention to failures that are most in need of QM, and their component factors (O, S, D) help us see what features of the failure mode contribute most to the overall risk associated with it. Appendix A gives an example of how to perform an FMEA.
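
As an illustration of how the RPN can be computed and used to rank failure modes, the sketch below uses a few hypothetical failure modes and O, S, D scores; they are not taken from the TG-100 FMEA and are shown only to demonstrate the arithmetic and the prioritization step.

```python
# Minimal sketch of RPN scoring for an FMEA worksheet (hypothetical failure
# modes and O, S, D scores chosen for illustration only).
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str         # process step in which the failure occurs
    description: str  # how the step fails
    cause: str        # one identified cause of the failure mode
    O: int            # occurrence (1-10)
    S: int            # severity (1-10)
    D: int            # lack of detectability (1-10)

    @property
    def rpn(self) -> int:
        # RPN = O x S x D, as defined in the text.
        return self.O * self.S * self.D

failure_modes = [
    FailureMode("Patient setup", "Wrong isocenter location", "Laser misalignment", 4, 7, 5),
    FailureMode("Planning", "Wrong density override", "Manual entry error", 3, 6, 4),
    FailureMode("Prescription", "Wrong total dose entered", "Transcription error", 2, 9, 3),
]

# Rank by RPN to direct QM attention to the highest-risk failure modes first.
for fm in sorted(failure_modes, key=lambda f: f.rpn, reverse=True):
    print(f"RPN={fm.rpn:4d}  O={fm.O} S={fm.S} D={fm.D}  {fm.step}: {fm.description}")
```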

TG 100 has developed scales for the O, S, and D indices that are specifically tied to radiotherapy outcomes and observations (though other scales can be and have been used):65

  • O ranges from 1 (failure unlikely, <0.01%) to 10 (failure likelihood is substantial, more than 5% of the time).

  • S ranges from 1 (no danger, minimal disturbance of clinical routine) to 10 (catastrophic, whether from a single event or accumulated events).

  • D ranges from 1 (very detectable: 0.01% or fewer of the events go undetected throughout treatment) to 10 (very hard to detect, >20% of the failures persist through the treatment course).

A given process step may fail in several different ways, each with a different O and D. For example, the patient may be treated in the wrong location relative to isocenter because of hardware failure (e.g., incorrectly aligned lasers) or human factors (e.g., distracted or inadequately trained personnel). But these different failure modes have the same S, since the consequence for the patient is the same.

Table I describes the TG-100 terminology relating to severity. This severity scale is very specifically tuned to the needs of the radiotherapy environment. Note also that failures are relative to a practice standard or expectation that specifies the desired or expected outcome. For example, a prescription mistake or error is a significant deviation from a practice guideline, physician consensus, or equivalent. For failures downstream of prescription, the reference is the physician’s directive.

TABLE I.

Terminology relating to severity as used in the TG-100 FMEA.

Severity term | S values | Description
Wrong dose distribution | 5–8 | A failure in the delivery accuracy of the dose distribution that would be expected to increase adverse clinical outcomes (e.g., reduced tumor control or increased likelihood of moderate grade late toxicities) to a level that is statistically detectable in a large patient population. For definitive radiotherapy a variation in the dose to the target or organs at risk of 5%–10% from the practice standard is suggested
Very wrong dose distribution | 9–10 | A failure in the delivery accuracy of the dose distribution to an individual patient that is highly likely to cause a serious adverse clinical outcome (e.g., tumor recurrence or grade III/V late toxicity) in that individual patient. For definitive radiotherapy, a threshold of about 10%–20% from the practice standard is suggested, depending on biological sensitivities for the tissues under consideration
Wrong absolute dose | 5–8 | A specific type of wrong dose delivery error in which the relative dose distribution is correctly delivered but the entire dose distribution is incorrectly scaled, due to variation in dose to the prescription point or isodose line, e.g., caused by faulty machine calibration or MU calculation error. For definitive radiotherapy, a variation in the dose of 5%–10% is suggested
Very wrong absolute dose | 9–10 | A specific type of very wrong dose delivery error in which the relative dose distribution is correctly delivered but the entire dose distribution is incorrectly scaled, due to variation in dose to the prescription point or isodose line, e.g., caused by faulty machine calibration or MU calculation error. For definitive high dose therapy, a threshold of about 10%–20% is suggested
Wrong location for dose | 5–8 | A failure in delivering the dose to the correct location that would be expected to increase adverse clinical outcomes (e.g., reduced tumor control or increased likelihood of moderate grade late toxicities) to a level that is statistically detectable in a large patient population. The size of the difference in positioning that constitutes such a failure depends on the anatomy of the target and organs at risk and the defined margins, but, generally, differences of 3–5 mm between the locations of the reference and treated volumes is realistic for standard fractionation treatment
Very wrong location for dose | 9–10 | A failure in delivering the dose to the correct location in an individual patient that is highly likely to cause a serious adverse clinical outcome (e.g., tumor recurrence or grade III/V late toxicity) in that individual patient. The size of the difference in positioning that constitutes such a failure depends on the anatomy of the target and organs at risk, but, generally, differences of more than 5 mm between the locations of the reference and treated volumes or inclusion of excessive normal tissue in the treated volume would be classified as “very wrong location”
Wrong volume | 5–8 | A failure in delivering the dose to the correct target volume that would be expected to increase adverse clinical outcomes (e.g., reduced tumor control or increased likelihood of moderate grade late toxicities) to a level that is statistically detectable in a large patient population. Volume differences that constitute such a failure depend on the anatomy of the target and organs at risk, and correspond to a marginal miss of the target volume or partial irradiation of an OAR to a sufficiently high dose that statistically detectable increases in complications are likely
Very wrong volume | 9–10 | A failure in delivering the dose to the correct target volume in an individual patient that is highly likely to cause a serious adverse clinical outcome (e.g., tumor recurrence or grade III/V late toxicity) in that individual patient. Volume differences that constitute such a failure depend on the anatomy of the target and organs at risk, and correspond to a geographical miss of the target volume or irradiation of an OAR to a dose sufficient to cause a complication or treatment failure in the patients
Suboptimal plan | 4 | A treatment plan with characteristics unlikely to achieve the stated goals
Non-radiation-related physical injury | 5–10 | Injury resulting from causes other than radiation, for example, from physical trauma
Inconvenience-patient | 2–3 | Failures that inconvenience the patient, for example, requiring an otherwise unexpected trip to the radiotherapy facility
Inconvenience-staff or increased cost | 1–2 | Failures that inconvenience the staff, creating extra work, and cost of treatment or increasing stress

Setting boundaries between the different levels of severity adopted by TG 100 was necessary but necessarily imprecise. Table I suggests reasonable locations in dose and space for where these boundaries might be set. While specific volume and dose tolerances were considered, the wide variety of clinical situations encountered made rigid specifications very difficult to use.

The terms wrong volume, wrong dose distribution, etc. defined in Table I have considerable overlap in many cases. For example, a treatment with the isocenter at an incorrect location, in addition to delivering the dose to the wrong volume, could be considered to deliver the wrong dose distribution or the wrong absolute dose. However, the sense of this failure would be best captured as wrong volume. For most of the failures, the actual terminology for the effect is not critical for its quantification.

Table II describes the numerical categorization of O, S, and D agreed upon by the TG 100 members and used in the subsequent FMEA analysis examples.66 Using these scales, the RPN associated with a particular failure mode can range from 1 to 1000. It is worth noting that the individual O, S, and D scales are not linear but tend to be more logarithmic. These scales are able to deal with the wide range of severity, occurrence frequencies, and undetectability that must be accounted for in radiotherapy.

TABLE II.

Descriptions of the O, S, and D values used in the TG-100 FMEA.

Rank | Occurrence (O): qualitative | Occurrence (O): frequency (%) | Severity (S): qualitative | Severity (S): categorization | Detectability (D): estimated probability of failure going undetected (%)
1 | Failure unlikely | 0.01 | No effect | | 0.01
2 | | 0.02 | Inconvenience | Inconvenience | 0.2
3 | Relatively few failures | 0.05 | | | 0.5
4 | | 0.1 | Minor dosimetric error | Suboptimal plan or treatment | 1.0
5 | | <0.2 | Limited toxicity or tumor underdose | Wrong dose, dose distribution, location, or volume | 2.0
6 | Occasional failures | <0.5 | | | 5.0
7 | | <1 | Potentially serious toxicity or tumor underdose | | 10
8 | Repeated failures | <2 | | | 15
9 | | <5 | Possible very serious toxicity or tumor underdose | Very wrong dose, dose distribution, location, or volume | 20
10 | Failures inevitable | >5 | Catastrophic | | >20
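
To make the arithmetic behind Table II concrete, the short Python sketch below computes an RPN from O, S, and D ranks; the function name and the example failure mode are ours and purely illustrative, not part of the TG-100 analysis.

```python
# Minimal sketch of the RPN arithmetic used in FMEA. O, S, and D are the
# 1-10 ranks of Table II; the example failure mode is hypothetical.

def rpn(occurrence: int, severity: int, detectability: int) -> int:
    """Risk priority number = O x S x D; each rank must lie in 1..10."""
    for rank in (occurrence, severity, detectability):
        if not 1 <= rank <= 10:
            raise ValueError("O, S, and D ranks must be integers from 1 to 10")
    return occurrence * severity * detectability

# Hypothetical failure mode: wrong couch shift entered at simulation.
print(rpn(4, 7, 6))                    # 168
print(rpn(1, 1, 1), rpn(10, 10, 10))   # 1 1000, the full range of the scale
```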

One of the challenging aspects of the RPN scoring system is the determination of an effective and usable severity score (S). After a great deal of work, the TG arrived at the current severity scale. Although the scale is perhaps somewhat subjective, the TG found that making the S descriptions too specific made them harder to use. There are many failures that usually have medium severity (wrong) but can, in extreme cases, have S = 10 (very wrong); often the medium-S situations have higher O and D values than the very high-S situations. By distinguishing between degrees of severity, evaluators are less likely to fixate on the very high severities, which helps them focus on the more clinically relevant failure modes. This approach proved to be very useful to the task group, and is the one that is recommended.

In performing FMEA for IMRT, the TG members tried to identify all possible failure modes and the potential causes for each failure mode (given in Appendixes C1–C3).141 Among the most frequent causes identified by the TG are human failures, lack of standardized procedures, inadequate training, inadequate communication, hardware and software failures, inadequate resources, inadequate design specifications, and inadequate commissioning. All of these are also systems failures, some very directly, such as the lack of standardized procedures and inadequate training, while others, such as the human failures, propagate to events because of lack of safety barriers in the process. Even equipment failures often result from lack of commissioning, QA, or preventive maintenance.

5.C. Fault tree analysis

Fault trees complement process trees. A fault tree, as shown in Figs. 1 and 3, begins on the left with something that could go wrong (one of the failure modes). Figure 1 shows a schematic of a very simplified version of a fault tree. The entire fault tree that complements the IMRT process tree presented in Fig. 2 is given in Appendix E.141 Figure 3 shows a segment of the entire fault tree and describes what could go wrong in pretreatment imaging for target localization. The analyst asks what actions or events during the imaging process could directly cause incorrect localization. Possible failures that contribute to this include incorrect interpretation of images (e.g., incorrect windowing for FDG-PET), scans not made accessible for radiation therapy planning in a timely fashion, incorrect patient positioning for imaging, and errors in advising patients about special requirements such as fasting before FDG-PET. A logical OR gate joins the boxes representing these possibilities (called nodes), since any of these situations results in an erroneous or suboptimal treatment. From each of these boxes, the tree proceeds to the right, asking what could cause a failure at the node. The questions keep probing the actions further upstream until, at some point, the causes for a node fall outside the control of the department or facility. Some boxes could be joined by a logical AND gate, indicating that the actions in all the input boxes to the gate must fail to produce the failure at the gate’s output. Such AND-gate connections are often the result of a QM program: if an action is checked for correctness, then for an error in the action to propagate to the left, there must also be a concomitant failure in the associated check. Thus, AND gates provide protection, while OR gates open opportunities for error propagation. Studying the fault tree for a process or subprocess illustrates the paths that could lead to failures. To ensure error prevention, between each failure mode on the left side in Fig. 3 and the progenitor causes on the right side, there should be a quality management measure that would prevent that failure mode from propagating through the process. Typical QM measures include improving training, establishing policies and procedures, developing protocols, improving communication, and securing continued managerial support for these procedural matters. Well-designed commissioning procedures and more robust software and hardware would also be necessary. Additionally, the entire step would be followed by QA (typically peer review of the target and OAR volumes for the step illustrated in Fig. 3).
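
The gate logic described above can also be expressed numerically. The sketch below is our own simplified illustration (not one of the TG-100 fault trees): assuming independent basic events with hypothetical probabilities, an AND gate models an error that propagates only when the associated check also fails, and an OR gate combines several independent causes of the same failure.

```python
from math import prod

# Simplified fault-tree gate arithmetic, assuming independent basic events.
def or_gate(probs):
    """Output fails if ANY input fails: 1 - product of the survival probabilities."""
    return 1.0 - prod(1.0 - p for p in probs)

def and_gate(probs):
    """Output fails only if ALL inputs fail (e.g., an error AND a missed check)."""
    return prod(probs)

# Hypothetical numbers: an upstream error occurs with probability 0.02 and the
# associated QM check misses it with probability 0.1; the AND gate captures the
# protection that the check provides.
unchecked_error = and_gate([0.02, 0.1])                    # 0.002
# Several independent causes feeding an OR gate open paths for propagation.
localization_failure = or_gate([unchecked_error, 0.001, 0.003])
print(round(localization_failure, 5))                      # ~0.00599
```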

FIG. 3. Example of a fault tree for determining what could go wrong in pretreatment imaging for CTV localization. (See text for details.)

The FMEA helps lay out the fault tree, and the fault tree provides a visual overview for an individual or department to see which steps in their practice are not covered by QM. The RPN and S values direct attention to the failures most in need of remedy. Looking at the trees, it becomes clear that, although not every step needs QM in parallel, every step needs some QM to block the effects of failures from affecting the patient. In general, it is not a good idea to rely on a single QM step to interrupt the flow of failures. Although it is tempting to insert a single QA step as an efficiency measure to block the propagation of errors from many steps combined, failure of that one QA step would leave the procedure completely unprotected. In addition, detection of the problem at that one QA step (1) may happen only after many incorrect steps and much wasted effort, and (2) may make it hard to identify which of the upstream steps actually led to the problem, even though this must be known in order to correct it. Thus, both QM program efficacy and overall process efficiency are enhanced by incorporating multiple QM measures along the way between a possible fault mode and the final process outcome. These redundant measures reduce the possibility of an error going undetected due to a failure in a single QM measure and, as described earlier, also provide an opportunity for detection of errors early in the process, thus avoiding wasted time and effort. Incorporating the added QA/QC processes into the FMEA may help ensure that the checks will actually function appropriately.
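
The value of layering several imperfect QM measures can be seen with a simple calculation; the miss probabilities below are hypothetical and the checks are idealized as independent, which real checks rarely are, so the true gain is smaller, but the qualitative point stands.

```python
from math import prod

def escape_probability(miss_probs):
    """Probability that an error slips past every check in a series,
    assuming the checks fail independently of one another (an idealization)."""
    return prod(miss_probs)

single_check = escape_probability([0.05])                 # one check that misses 5% of errors
layered_checks = escape_probability([0.05, 0.10, 0.20])   # three imperfect checks in series
print(single_check, layered_checks)                       # 0.05 versus 0.001
```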

6. TG-100 METHODOLOGY FOR DESIGNING QUALITY MANAGEMENT PROGRAMS IN RADIATION THERAPY

This section provides guidance for designing a quality management program for radiation oncology. No set of suggested QM programs fits all practices equally well and each QM program should be assessed in light of the risk assessment of the individual practice.

6.A. Establishing the goals of the QM program

A simplified goal for the QM program would be to deliver the correct dose to the correct location safely. To be useful, this goal requires more specific statements. Reasonable aims could be: (1) that the dose to all of the CTV be within 5% of that required for treatment of the given disease; (2) that the doses to critical organs remain below the specified limits for the treatment (realistically, the doses cannot always be held below tolerances for all toxicity); and (3) that the patient suffers no avoidable injury or toxicity. A secondary aim may be that no treatment leads to administrative problems, such as violating regulations.

The process of ranking the possible treatment failure modes during the FMEA often focuses attention on catastrophic failures, particularly on those with very high severity values. This need not be the case in general. Consider an example of errors along a continuous distribution (the error magnitude can take on values along a continuum, such as a dosimetric miscalibration). Figure 4 plots the values of O, S, and D as a function of the percent error for this parameter in a hypothetical situation in which the frequency of occurrence and nondetection decrease systematically with increasing error magnitude and event severity. The figure also shows the RPN values, divided by 100 to fit on the same ordinate scale. The greatest risk occurs not for the greatest percentage error but in the midrange of the percentage error. If “error” is assumed to mean any deviation from the desired value, small errors may occur routinely but have such low severities that the associated “events” may not even be noticed. The treatment may produce toxicities within the range to be expected from the treatment; therefore the scale of S values was extended to include zero in the plot. As the error (i.e., deviation from the desired value) increases, its probability of occurrence decreases while its severity increases, as does its likelihood of detection, which causes D to decrease. For example, if a plan’s dose distribution gives poor target coverage, then the worse the coverage, the more likely it is that someone will notice. The O value for the delivery of such a plan is taken as low since these occurrences are rare in good practice. Similarly, the D value is taken as low because poor coverage is likely to be detected before the plan continues to treatment.

FIG. 4. A plot of the hypothetical situation of various percentage errors in the FMEA parameters as described in the text. The RPN value has been divided by 100 to match the scale.

In principle, for any outcome that can be modeled as a continuous random variable, such a function relating the RPN to the percent error should be developed, at least approximately, as input into the FMEA. Unfortunately, there are few hard data for most treatment parameters, and no information of this nature was available to the Task Group. TG 100 recommends research in developing such relationships. It is hoped that the national event-learning systems55,56 will provide enough data to support the needed research.
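
As a purely illustrative example of such a relationship (the functional forms below are invented for this sketch, not derived from data), the following Python fragment reproduces the qualitative behavior of Fig. 4: occurrence and undetectability fall as the error grows, severity rises, and the RPN peaks at an intermediate error magnitude.

```python
# Hypothetical rank functions versus percent error, illustrative only.
def occurrence(err):       # frequent small errors, rare large ones
    return max(1.0, 10.0 - 0.9 * err)

def severity(err):         # severity grows with error magnitude
    return min(10.0, 1.0 + 0.9 * err)

def undetectability(err):  # large errors are easier to catch, so D falls
    return max(1.0, 10.0 - 0.9 * err)

rpns = {e: occurrence(e) * severity(e) * undetectability(e) for e in range(0, 11)}
peak = max(rpns, key=rpns.get)
print(peak, round(rpns[peak], 1))  # RPN is largest at a mid-range error (~3% in this model)
```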

6.B. Prioritizing the potential failure modes based on RPN and severity functions

Sorting the entries in the FMEA facilitates the QM design process because it helps prioritize the riskiest and/or most severe failure modes. Start with two copies of the FMEA: sort one by RPN value and the other by S ranking, which focuses attention on the most hazardous and the most severe steps. Both sorted lists are used in an identical manner, so it makes no difference which one is used first. Prioritization of the most hazardous steps informs an efficient allocation of resources for analysis. Working down the list, at some point it may be judged that the resource implications of addressing potential failures outweigh the benefit, but at what level that occurs is often difficult to determine without first addressing the higher ranked concerns. The interventions designed for the high-ranking steps often also address or mitigate many of the lower-ranking steps.
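
A minimal sketch of the two-copy sort, using placeholder failure modes of our own invention, is shown below.

```python
# Two copies of a (hypothetical) FMEA table: one sorted by RPN, one by severity.
failure_modes = [
    {"step": "contour CTV",            "O": 4, "S": 9, "D": 6},
    {"step": "transfer DICOM data",    "O": 6, "S": 7, "D": 4},
    {"step": "enter isocenter shifts", "O": 3, "S": 8, "D": 5},
]
for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["S"] * fm["D"]

by_rpn = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
by_severity = sorted(failure_modes, key=lambda fm: fm["S"], reverse=True)
print([fm["step"] for fm in by_rpn])        # riskiest steps first
print([fm["step"] for fm in by_severity])   # most severe steps first
```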

6.C. Marking the riskiest and most severe steps in the process

This approach employs both the RPN and severity ranking in identifying the most hazardous steps in the clinical process. An effective method is to mark on the process tree the most hazardous steps, for example, the steps with the 20%–25% highest ranked RPN values (Fig. 2). Irrespective of the RPN values, the steps with high severity rankings should also be marked. For example, TG 100 chose a severity value of 8 for this cut-off. For process-tree steps with many highly ranked potential failures, the quality management design team should consider redesigning the process to eliminate or reduce the risk, with subsequent risk reanalysis. If redesign is impractical or would not reduce the risk, then further controls should be put in place.
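
Continuing the sketch above, marking the steps to highlight amounts to taking the top 20%–25% of failure modes by RPN together with any failure mode whose severity meets the cut-off (S ≥ 8 in the TG-100 example); the entries and numbers below are again hypothetical.

```python
# Select the failure modes to mark: top ~25% by RPN plus any with S >= 8.
failure_modes = [
    {"step": "contour CTV",            "RPN": 216, "S": 9},
    {"step": "transfer DICOM data",    "RPN": 168, "S": 7},
    {"step": "enter isocenter shifts", "RPN": 120, "S": 8},
    {"step": "label image set",        "RPN": 60,  "S": 4},
]
n_top = max(1, round(0.25 * len(failure_modes)))
top_by_rpn = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)[:n_top]
marked = {fm["step"] for fm in top_by_rpn} | {fm["step"] for fm in failure_modes if fm["S"] >= 8}
print(sorted(marked))   # steps to flag on the process tree and the fault tree
```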

6.D. Marking the same highest ranked steps on the fault tree

The same highest ranked steps can be indicated on the fault tree. As with the process map representation, this marking will assist in focusing attention on clusters of highest hazard.

6.E. Selecting QM intervention placement

Starting with the most highly ranked hazards, either by RPN or severity ranking, consideration is then given as to where to place QM interventions to address each failure mode. During this process, it is not necessary to correct all the upstream causes of the failure; however, whenever possible it is ideal to take corrective measures to reduce the probability of these causes. Further corrective actions might be taken to interrupt the propagation of that failure to prevent effects on the patient’s treatment. While the goal is to address the most hazardous steps first so resources are used most efficiently and effectively, when addressing a high-ranking step, it most often saves time to consider the other steps along the branch together since the actions used to address the highly ranked step may cover some of the lesser ranked steps with little additional cost.

6.F. Selection of appropriate quality management tools

Several quality management options exist to address an identified weakness, but not all tools for QM are equally effective in preventing failures. For quality assurance and quality control activities, the Institute for Safe Medication Practices (ISMP) has ranked possible QM activities by effectiveness class. While the original list addresses mostly medication errors, Table III gives their listings in a more general form with examples that might apply to radiation therapy treatment processes. In the table, the lowest numbers indicate the strongest tools. The greatest efficacy lies in forcing functions, such as interlocks and physical barriers that prevent actions inconsistent with the goals of a process. Automation, for example, can eliminate errors due to transcription or entry of out-of-range values in preparing a plan for treatment. These methods illustrate that the most effective mechanism for preventing a failure is to redesign machine-operator interfaces to eliminate situations where the failure could occur. However, implementation of forcing functions is often not possible or practical in individual clinics because equipment used to treat patients is not designed to incorporate every possible forcing function or automation to prevent incorrect treatment planning and delivery. A more feasible alternative is to redesign and simplify procedures, eliminate unnecessary steps, and clarify communication, thereby eliminating the potential for entire classes of errors. Part of a redesign includes correcting deficiencies in the environment, such as improvements in lighting or reduction in background noise levels. Independent review of proposed standardized procedures by an experienced colleague from another institution also constitutes a valuable part of the design process. Following a redesign, updating the FMEA becomes necessary since the new process could create new, unexpected hazards.

TABLE III.

Ranking of QM tools based on their effectiveness, with examples, in part following the suggestions of the ISMP (Ref. 67). The lower numbers are the most effective.

1. Forcing functions and constraints
• Interlock
• Barriers
• Computerized order entry with feedback
2. Automation and computerization
• Bar codes
• Automated monitoring
• Computerized verification
• Computerized order entry
3. Protocols, standards, and information
• Check-off forms
• Establishing protocol/clarify protocol
• Alarms
• Labels
• Signs
• Reduce similarity
4. Independent double check systems and other redundancies
• Redundant measurement
• Independent review
• Operational checks
• Comparison with standards
• Increase monitoring
• Add status check
• Acceptance test
5. Rules and policies
• Priority
• Establishing/clarify communication line
• Staffing
• Better scheduling
• Mandatory pauses
• Repair
• PMI (preventive maintenance inspection)
• Establish and perform QC and QA (hardware and software)
6. Education and information
• Training
• Experience
• Instruction

The strategies discussed above can be viewed in the context of Table III which is based on the suggestions of the ISMP.67 It should be noted that education, while at the bottom of the list in Table III, is essential for correct planning and execution of procedures. However, even with the best training, humans fail, and relying on education to prevent all failures proves less effective than the more highly ranked tools. Redundancy, independent checks, and operational checks (periodic QA) fall in the middle of the list yet serve important functions in radiotherapy QM. When possible, the most effective tools should be used, but resources and practicality often lead to tools from later in the table. Used judiciously, any of the tools provide value in controlling quality and preventing the propagation of failures.

After performing a process for some time, re-evaluation of the process itself provides a refined picture of the effectiveness of the quality measures instituted and identifies new hazardous steps. Input for the re-evaluation comes principally from three sources:

  • Records of events, failures, and near events. Establishing a reporting system and database for events and capturing information from root-cause analyses can add a posteriori statistical data to the a priori estimates used by FMEA.68 Such data can also uncover problems either not recognized during the FMEA or created by the quality program. Reports of events focus not on potential failures but on failures that actually happened, giving much more power to actions established to prevent future failures. Near-event reports prove extremely valuable. During near events, failures occurred but some action prevented them from compromising the patient’s treatment. The actions that saved the situation from affecting the patient give insights into what actions might be effective in intercepting failures in future cases. Reporting systems also can be open to receiving information on hazardous situations noted by the staff before they develop into failures. All three types of information can assist in refining the quality management program.

  • Quality audits. A quality audit consists of having knowledgeable people review the program, for example, IROC-Houston or accrediting bodies such as the American College of Radiation Oncology (ACRO), the American College of Radiology (ACR), and ASTRO that accredit radiation oncology programs; internal audits use people from the facility, while less frequent external audits use people from outside. The audits include a “product” audit, which in a medical setting would consist of reviewing cases and assessing whether all patients’ care was appropriate and complete, and a process audit that reviews the standardized procedures and evaluates whether they function well in the setting. The audits might also include measurements, either on site or performed remotely, to assess the accuracy of treatment unit calibration or other operating parameters.

  • Quality improvement. The information from the event database, the audits, and the QA and QC procedures serves as input for quality improvement (QI). QI identifies parts of the quality program that need changes or enhancement and parts of the process that would benefit from redesign.

Appendix A provides practical guidelines to assist the community in implementing the techniques discussed in this report in a reasonably consistent manner. Appendix B has been designed and formatted for educational and training purposes. As such, it is, to some extent, self-contained and could be distributed to graduate students, residents, and colleagues who wish to be introduced to the practicalities of the techniques. Section 9 provides an extended example of the risk analysis methodology described above, designing a comprehensive QM program for the IMRT treatment process. Since many institutions offer IMRT, TG 100 hopes that readers will find this guidance helpful in performing their own risk analyses for improving the quality and safety of their own IMRT planning and delivery processes.

7. COMPARISON WITH PREVIOUS WORK

Quality embodies the notion of freedom from harm to the patient. Prior to and during the period of TG 100’s deliberations, several publications relevant to the present work have appeared or are in press. These can be grouped into those that address quality assurance from a perspective more familiar to the medical physicist and those that focus on safety issues in radiation therapy. The former approach has tended to be device-centric, while the latter has been biased towards the failure of processes. The work of TG 100 can be seen as positioned between, and to some extent bridging, these two groups.

The most up-to-date document on traditional quality assurance approaches to radiation therapy is the AAPM’s Task Group 142 report.1 This recently published report has been developed from the well-known TG-40 document and other international publications.38,39 In keeping with the approach of these predecessor documents, TG-142 recommends specific tests, their tolerances, and frequencies. TG-142 has expanded on TG-40 through the inclusion of newer technologies and techniques such as multileaf collimators and IMRT. The menu-driven approach of TG-142, in which different tolerances and frequencies are recommended for different clinical activities, e.g., 3D CRT vs IMRT vs stereotactic radiation therapy, is a welcome innovation. TG-142 acknowledges the resource implications of their proposed quality assurance program but recommends that it be adopted until methods such as those described by TG 100 supersede the TG-142 report.1 Other prescriptive and/or process specific QA documents that have been published are the reports of TG 148,69 TG 135,70 TG 101,71 and ASTRO white papers.72–76

The alternative and complementary viewpoint is to define quality as achievement of the goals of the therapy. This is the perspective adopted by TG 100. The previously published work closest to that described in this document is that by Ford and colleagues,9 who performed a failure modes and effects analysis for the external beam radiation therapy service in a radiation oncology setting at Johns Hopkins University. Their effort started with a process map, with 269 different nodes, that was developed by a multidisciplinary team including medical, scientific, nursing, and technical staff whose activities impact patient care. Their FMEA scoring system for O, S, and D was slightly different from that adopted by TG 100 (Table II), although a range from 1 to 10 was used for each of these quantities. In their work, they found 159 potential failure modes. The highest risk priority number this group estimated for any failure mode was 160, considerably lower than the highest found by TG 100. Ford et al. have provided examples of how selected failure modes can be used to improve processes, reducing O, and to enhance quality control, reducing D. The paper contains a useful discussion section describing, amongst other things, the authors’ experience with conducting a comprehensive FMEA. Other works consistent with the TG-100 philosophy dealing with RT applications have been reported in the literature.10–12,14–30

Quality in radiation therapy from a more qualitative perspective than that of TG 100 has been the subject of several additional recent publications. A consortium of UK professional bodies together with the National Patient Safety Agency has developed a document entitled “Towards Safer Radiotherapy.”42 The consortium’s approach was to develop a set of 37 generic recommendations through consensus of expert opinion. Interestingly, both TG 100, through the examination of postulated failure modes in the IMRT process, and the UK document, through a more qualitative consensus, arrived at similar conclusions regarding high risk failure modes and causes and proposed similar, specific quality management interventions for addressing them. To prevent failures in radiation therapy in general (and IMRT in particular), a QM program should have elements that TG 100 refers to as key core requirements for quality. These core requirements are:

  • Standardized procedures.

  • Adequate staff, physical, and IT resources.

  • Adequate training of staff.

  • Maintenance of hardware and software resources.

  • Clear lines of communication among staff.

As pointed out above, safety is a subset of and a prerequisite for quality. It is therefore not surprising that the recommendations coming out of the UK group, although aimed particularly at safety, would, if adopted, concurrently enhance both the quality and the safety of a clinical radiation therapy operation.

The World Health Organization (WHO) has recently published its “Radiotherapy Risk Profile,”43 based on an evaluation of reported actual and near-event radiation therapy incidents across the globe. From this assessment, the authors of the WHO document have developed a prioritized list of interventions and safety processes. Again there is much overlap between the WHO list and the quality management steps identified in the TG-100 report. A particularly prominent strategy in the WHO list is a planning protocol checklist. Recently, the AAPM published a Medical Physics Practice Guideline to facilitate the development of checklists for clinical processes.77 TG 100 agrees with both the WHO document and the MPPG’s general guidance for constructing checklists. Tables IV–VIII list items that the TG-100 FMEA analysis indicates should be included in checklists for particular activities in the IMRT radiation therapy process.

TABLE IV.

Example checklist for standardized, site-specific protocol for workup of a patient prior to IMRT treatment planning. Example content of a set of standardized, site-specific procedures that can serve as a basis for QM checks of simulation and other imaging used to build the patient anatomy model, anatomy contouring, treatment planning, and the initial planning directive subprocesses. The failure mode steps addressed [see Appendix C1 (Ref. 141), “FMEA by Process”] are listed after each procedure.

Identify site, stage, histology, etc., and other pretreatment characteristics that define the indications for selecting this protocol. FMs of steps 4 and 14–16
Specify overall clinical treatment plan, including other RT (e.g., brachytherapy) and other treatments (chemotherapy, surgery)
Provide site-specific special clinical instructions (e.g., dental consult for head and neck cancer; implantation of fiducial markers). FMs of steps 13 and 17
Specify patient-specific requirements (pacemaker, contrast allergies, bladder/bowel prep, etc.) FMs of steps 13 and 17
Investigate previous radiation treatment history. FMs of steps 3 and 48
Specify additional required imaging procedures (e.g., MR, PET, 4D CT), with sufficient detail that desired image set can be performed or identified unambiguously. FMs steps 25–31
Specify simulation instructions: Position, immobilization used, nominal isocenter position, top and bottom of scanning region, and special instructions (contrast, voiding, fasting, etc.). Note deviations from standard. FMs steps 4–7 and 11–174
Specify procedures for performing multi and unimodality image registration (e.g., primary and secondary image sets, automated or manual registration, and registration landmarks). FMs steps 25–29, 42, 43, and 57
Specify nomenclature and procedures to be used (RTP contour colors and names) and segmentation procedures. FMs steps 62, 66–69, and 80–86
Provide standard nomenclature and procedures for OAR and targets (e.g., CTV1 = 1 cm expansion of GTV1 and CTV2 = electively treated lymph nodes). FMs steps 62, 66–69, and 80–86
Specify who (dosimetrists, attending physician, resident physician) is responsible for contouring each structure
1. Special instructions for segmenting GTVs and CTVs, avoidance structures for optimization, OAR for evaluation
Specify uncertainty management techniques, (e.g., IGRT motion management). FMs steps 63–65
Specify PTV margins for all target structures. FMs steps 63–65
Specify total prescribed dose and time-dose-fractionation schedule intended for each of the CTVs to be treated
Specify IMRT class solution (field arrangements, energies, additional avoidance structures, etc.) FMs steps 89–104
Starting planning/optimization constraints and goals (e.g., DVH metrics for optimization) FMs steps 89–104
Specify plan evaluation metrics (e.g., graphic isodoses, DVHs) for targets, OAR, overall distribution. FMs step 127

TABLE V.

Example checklist for preparation of patient data set for treatment planning suggested by failure modes in steps 18–21, 34, 45, 37, and 49–79 of Appendix C1 (Ref. 141).

Image datasets input into the planning process are checked for correct dataset choice (correct study of the correct patient), documentation, quality, etc.
Documentation of isocenter coordinates, measurements, patient positioning, etc. from the simulator is provided
Images verified for correct orientation
For image registration cases: Primary and secondary datasets (for registration) selected, achieved registration accuracy noted
Deviations or compromises in the registration are noted
Organs at risk are contoured according to departmental guidelines
Correctness of all 3-D representations (voxel descriptions, surfaces, etc.) is verified
Expansion of GTV to CTV follows site-specific protocols; automated margins worked correctly; variations noted
Expansion of CTV to PTV follows site-specific protocol; automated margins worked properly; variations noted
Boolean structure checks: Input to structures checked, regions created visually reviewed
Image artifacts (e.g., contrast, metal) corrected per department protocols
Patient support devices (e.g., immobilization, skin markers, couches) are correctly included or excluded
Treatment planning instructions are clear and unambiguous
Preliminary prescription written
Optimization goals and limits are specified, and applicable departmental or other protocols are used
Special instructions are written as in the department policy
Initial directive includes statement of previous treatment, review of previous treatments requested; the prescription accounts for any previous treatments

TABLE VI.

Example checklist for physics check of treatment plan suggested by FMEA failure modes of steps 81–173, Appendix C1 (Ref. 141).

Dose prescription and planning constraints used for planning/optimization are consistent with site-specific protocol (example checklist 1) or with plan directive
Correct selection of ROIs: Correct use of overlapping and non-overlapping structures in optimization; best choices of beam energies and modalities have been made
Doses from previous treatments were accounted for in the plan
Optimization goals were achieved or failure understood, discussed with the radiation oncologist and acknowledged □ Yes  □ No
Dose calculation algorithm and density correction (algorithm, on/off) are correctly chosen
Dose distribution is reasonable for the plan and anatomy
1. PTV coverage consistent with initial planning directive or site-specific protocol (example checklist 1) or deviations discussed with the physician(s) and formally accepted
□ Yes  □ No
2. The doses to OARs within tolerances (as specified by site-specific protocol) or deviations reviewed with physician(s) and formally accepted
□ Yes  □ No
Plan agrees qualitatively with experience in similar cases
Overall plan includes separate boost or concomitant boost doses per prescription.
Plan accounts for specified immobilization, localization, and positioning methods
Verify documentation of use of bolus, type, and location
Verify that beams are deliverable
1. Deliverable MLC patterns used in the final plan; leaf-sequencing parameters correct
2. Monitor units within deliverable ranges
The plan prescription and treatment plan information have been downloaded to the correct course in the delivery system database
4D plan remains within the reliability limits of the system

TABLE VII.

Example checklist for Day 1 QM measures prior to treatment-as suggested by failure modes in steps 174–193, Appendix C1 (Ref. 141).

Patient and treatment plan to be used are identified correctly using two forms of identification check and time-out procedure
The prescription is complete, signed, and unambiguous, both in the chart and treatment delivery system
The delivery system has the correct version of the plan for the correct patient
All treatment parameters are correct in the delivery system computer or paper chart; transmission factors for accessories accounted for per department policy
An independent physics check of the plan has been performed and acceptance criteria satisfied per department policy
Patient set-up is clearly specified in the electronic and/or paper record
All immobilization, positioning, or motion management devices are used correctly
Planned shifts from simulation marks made correctly
Other set-up instructions, such as bladder filling and bolus, are correctly documented and followed
Other set-up specifications are noted in the computer and paper records
The order of fields in the delivery system computer, and freedom (or not) to allow automated delivery is handled as required by department policy or machine limitations
The localization images or other image guidance parameters obtained during setup match the planned images or values within the tolerances (from site-specific protocol)
Localization images and final localization information checked and approved by the physician
Shifts from this imaging are carried out and clearly recorded

TABLE VIII.

Example checklist for dosimetric and treatment delivery chart checks for IMRT patients suggested by the FMEA for Day N steps, Appendix C1 (Ref. 141). These items, at least, should be checked regularly during the patient treatment course. For standard fractionation (1.8–2 Gy/fraction, for 5–6 weeks of treatment), weekly checks are typically necessary. For compressed treatment schedules, these checks must happen more often; for short courses, checking before each treatment might be necessary. The checks required are as follows.

Confirm that the patient delivery script information or files are unchanged through the course of treatment, unless planned changes are implemented
If changes are requested, confirm that they were correctly implemented, and are reasonable and justified to satisfy the overall prescription for the treatment
Verify that all treatments are correctly documented and recorded
Comparison of dose to date with the prescription and planned end of treatment
Review of treatment delivery system interlocks, overrides and problems, determination of the reason for these problems, and analysis of the need for corrections or other responses
For standard fractionation: Patient’s weight (therapist or nurse)
Review of recorded patient setup position, positioning shifts, image guidance decisions, and review of table position overrides and other indicators of shifted position
Use of all noninterlocked accessories (blocks, compensators, bolus, etc.) correctly documented on a daily basis

Thus, the contrast between these documents and this report is TG 100’s focus on specific process steps and failure modes as compared with the more generic recommendations of the UK and WHO groups. For information on the design and effective use of checklists, see Fong de Los Santos et al.77

Finally, two additional relevant documents are “Preventing accidental exposures from new external beam radiation therapy technologies” published by the International Commission on Radiological Protection (ICRP 112)44 and Safety is No Accident by ASTRO.78 Although ICRP 112 is focused on new technologies, many of its observations and recommendations are applicable to current technologies. ICRP 112 examines, in some detail, 11 radiation therapy incidents, many of which are familiar through the popular media and professional and scientific publications. Through an analysis similar to root cause analysis, the authors identified important generalizable “lessons learned” and made a series of recommendations to enhance the safety of radiotherapy. Similar to the UK and WHO reports, the major structural and environmental contributing causes of system failure included documentation, training, and communication. A notable feature of the ICRP report is a chapter on “Prospective Approaches to Avoiding Accidental Exposures.” The work of TG 100, of course, focuses exactly on such approaches. The UK document also recommends undertaking a risk assessment when a new or changed treatment technique or process is to be introduced. The document “Safety is No Accident”78 was designed to address the specific requirements of a contemporary radiation oncology facility in terms of structure, personnel, and technical process in order to ensure a safe environment for the delivery of radiation therapy.

Complementary to these generic approaches to enhancing quality and safety there is also literature reflecting specifically the physician perspective.79

As noted above, there are common themes running through many if not all of the recent publications42–44 on safety and quality in radiotherapy. These include training, documentation, communication, and both reactive and prospective approaches to error management. If we accept these as prerequisites for a state-of-the-art clinical QM program, then we need to provide staff with the tools to put them in place. We should no longer assume that we can all write clear and unambiguous documentation, that we are effective and committed communicators, or that we can perform risk assessments that benefit the quality and safety of care. These prerequisites, identified in this document and those referenced above, need to be incorporated into training programs for all radiation oncology disciplines in recognition of their significance as components of a culture of quality and safety.

8. RECOMMENDATIONS FOR APPLYING RISK ANALYSES IN RADIATION THERAPY

Implementation of the risk-based quality management methodology recommended by TG 100 will seem daunting to many. The members of TG 100 had to climb a significant learning curve during this project. However, once the basic principles are understood and the process is completed for one clinical area or process, development of risk-based QM programs for other clinical applications becomes significantly more efficient.

8.A. To individual clinics

Dedicating time for a diverse group to learn and put together an initial risk-based QM program is a significant resource commitment. However, development of a QM program without a sound multidisciplinary understanding of the entire clinical process can lead to an ineffective and/or inefficient QM program.

  • It is recommended that each clinic’s radiation therapy delivery team, consisting of radiation oncologists, medical physicists, dosimetrists, therapists, nurses, engineers, and IT personnel as appropriate, develop a comprehensive risk-aware QM program for all clinical processes, especially in the analysis of steps that are related to their clinical duties and to the procedure as a whole. The Task Group recognizes that this would occur over time and would require additional education and culture changes in many clinics.

  • Once a facility commits to implementing the QM program as described in this report, it is recommended that they start with small projects to build experience with the tools, establish communication patterns with the QM team, and gain confidence. Working through a facility’s procedures in a series of small projects avoids feeling overwhelmed and the discouragement of having the project drag on for a long time.

  • Many QM measures indicated by the risk-based analysis will enhance, not deviate from, safe practice. These may change workloads and processes, and thus require convincing and educating personnel. Examples include developing written procedures and educating staff to follow them, and implementing “contouring rounds” for physicians. For a major change, such as drastic changes in machine QA schedules, the TG advises extreme caution. Any difference between the quality assurance program that comes from the TG-100 methodology and the conventional QA recommended by task group reports or other guidance documents that would lead to deletion of QA steps needs to be very carefully considered and supported, and discussed with experts familiar with both conventional QA and the TG-100 methodology. Compliance with regulation must be maintained regardless of any analysis.

  • Start with a small project. Doing so serves several purposes.
    • First, it gives an opportunity to become accustomed to the techniques on a manageable scale.
    • Second, a small project has a higher chance of being completed while all involved are enthusiastic, and a successful completion of the first project will engender greater support for future projects.
    • Third, a small beginning project can provide experience that can help select subsequent projects. For many facilities, there never has to be a large project, just a series of small projects.
    • Fourth, processes are dynamic, changing over time. Over the duration of a large project the process under review may change.
  • Critical facets of treatment should have redundancy. Redundancy gives protection against errors creeping into one of the systems.

  • Risk-based QM is likely used in other parts of a hospital or clinic. The quality department may be able to provide assistance with early projects.

The AAPM recognizes that development and adoption of risk-based, individualized QM programs would be a significant paradigm shift and that large-scale implementation is a long-term process that will require close cooperation among individual physicists and physicians; healthcare managers and executives; societies such as ASTRO, AAPM, ACR, ACRO, and SROA; and regulators. As a first step towards implementation, the Task Group recommends that representative personnel (radiation oncologist, physicist, therapist, etc.) undergo training and orientation either from their own risk-management department or at one of the sponsored workshops, e.g., modeled after the aforementioned 2013 AAPM Summer School. As indicated in our second recommendation, working through FMEA and FTA analyses of a small-scale, limited clinical process is the next logical step. The AAPM also recognizes that large-scale implementation requires the AAPM and other organizations to successfully act on the recommendations presented in Sec. 8.B.

Testing the effectiveness of the QM program can be accomplished in various ways. One way is for a clinic to form an FMEA committee and have the committee review information on failures in their own institution to estimate values for O and D. After putting in place some of the QM initiatives suggested by their FMEA/FTA, the FMEA committee analyzes the data in their incident reporting database over a certain length of time (for example, a year) for observable changes or occurrences of new incidents. Results of such an analysis will yield valuable information about the effectiveness of the implemented QM program.

The above recommendations represent an ultimate application of the approach presented in this report. Practically, there will be two distinct considerations: one for existing, established clinical procedures and a second for new technologies and associated clinical procedures that are introduced into the clinic. We expect this report to provide the foundation with respect to the approach and definitions. Analysis of existing clinical practices may identify safety gaps and inefficiencies in current resource or effort allocations. For new technologies, the initial FMEA and QM program will have to be based on limited experience and will probably undergo more frequent revision and periodic updating. But all processes can benefit from systematic analysis and redesign, and eventually all clinical procedures should go through an FMEA to optimize the design of the associated QM procedures.

IMRT is high-risk, high-severity, and resource-intensive, and the example in Sec. 9 of this report provides a valuable learning tool and an opportunity to decrease the initial in-house effort. An initial step for small clinics could be to adapt the TG-100 FMEA and FTA for IMRT to the local clinical process. The basic premise behind the recommendations of this task group is that FMEA and a subsequently developed QM program will allow better utilization of clinical resources, thus rewarding the initial time investment. This is an essential point to note when approaching individual clinical groups and organizing the analysis process. Other procedures appropriate for early application of the risk-based methods will vary by institution, but eventual analysis of all procedures and clinical areas is a desirable goal.

  • It is recommended that the scheduling priority for risk-based effort should be given to high-risk procedures, high severity procedures, new procedures, and those that are resource intensive.

Modern radiation oncology practices are dynamic environments where upgrading of current technologies or installation of new technologies is a continuous process. The only way to maintain a highly effective and efficient QM program is through continuous process analysis, redesign, and resource allocation. Regardless of the approach taken, maintaining and modifying the QM process and resource allocation is an enormous effort. FMEA is an approach that uses logic rather than brute force. Ultimately, individual institutions will have to determine the frequency of FMEA reanalysis and process readjustments for particular clinical processes, in an ongoing effort.

  • It is recommended that the risk-based methodology be adopted as an ongoing activity aimed toward continuous process improvement.

Both the complexities of the QM program and the available resources for risk-based QM will vary with the individual institution’s clinical activity, methods, expertise, and size. This relationship is not linear since the basic analysis has to be performed regardless of an institution’s size, so there may be hesitation to embark on development of a risk-based approach in smaller clinics with fewer staff. At least one aspect of the problem is easier for a small clinic, since individual staff members know more about the overall process. However, smaller clinics can potentially realize the greatest benefit from the improved allocation of their resources that can result from understanding their clinical process limitations.

Some hospitals have industrial engineers and safety experts on staff in the quality improvement department, and these can also be a great resource when first undertaking an analysis of an individual department or process. Qualified outside consultants can provide valuable guidance and insight into this process, as well as independence from the current situation. These consultants typically do not have an understanding of radiation oncology clinical processes but they have knowledge and expertise in system design and process analysis.

  • It is recommended that qualified outside resources be used whenever available for development of risk-based QM.

8.B. To AAPM and other organizations

  • It is recommended that future AAPM task groups dealing with QM integrate the risk-based techniques as appropriate. This could include risk-based analyses of important clinical processes as the basis of their generic or clinic-specific QM recommendations regarding radiation therapy procedures and technologies.

Such analyses will be valuable tools in guiding clinical practitioners toward efficient and effective adoption of new technology. The graphical and tabular presentation formats used within the FMEA process lend themselves to effective communication of procedures and technological considerations, and will help facilitate understanding and earlier adoption of the relevant QM recommendations. The task group recommends that:

  • The AAPM establish a website with model process maps, FMEAs, FTAs, and the resultant quality management programs for various procedures as those analyses are developed. The AAPM should also develop web-based training tools to train the medical physics community, on an ongoing basis, in the use of these newly developed process maps, FMEAs, FTAs, and quality management programs.

  • The AAPM should establish a task group which will draft guidelines for selecting RPN value thresholds.

To assist the community in adoption of these techniques, the task group recommends that:

  • The AAPM establish a working group to help guide the community during the transition to risk-based QM.

  • The AAPM should reach out to our sister societies to establish joint working groups to coordinate efforts in familiarizing the community with risk-based QM.

  • The AAPM provide speakers knowledgeable and experienced in the risk analysis techniques to chapter meetings, to the annual meeting of the Association, and, where appropriate, to meetings of our sister societies.

  • The AAPM should generate a document for regulators giving guidance for evaluating quality management programs in radiotherapy facilities. This document should be written by a panel including members of TG 100 and the Conference of Radiation Control Program Directors (CRCPD).

  • The AAPM should give in-depth educational presentations on the new methodology for regulators at meetings of the CRCPD and of the Organization of Agreement States.

  • The AAPM should discuss with the American Board of Radiology how patient safety and quality in medicine could be incorporated into the Maintenance of Certification program.

Additionally,

  • The AAPM should establish a repository on its website for sample quality management programs that regulators could use to become familiar with what such programs would look like.

8.C. Future research and development

The experience of TG 100 in applying FMEA and FTA to a generic model of the IMRT process flow has highlighted the need for additional scientific research, engineering innovation, clinical studies, and additional guidance by advisory organizations such as the AAPM and ASTRO. Areas requiring further investigation and development are highlighted below.

8.C.1. Assessment of FMEA/FTA generality and optimal implementation in individual clinics

A major issue for the practical implementation of the TG-100 recommendations is the extent to which an individual clinic can benefit from the specific process tree, FMEA, and fault-tree analyses reported in this document without having to formulate process trees and downstream analyses for their specific clinical program and applications. The TG-100 results can be helpful in shaping the general emphasis of an institution-specific QM program. First, the TG-100 analysis provides a concrete demonstration of how to supplement the TG-40 and TG-142 device-centered QA approach with a more comprehensive and process-centered approach that considers the interactions between the network of devices, staff, and processes that are required to perform radiation therapy.80 The TG-100 analysis also provides specific guidance as to where in the radiation therapy planning and delivery processes the highest risk events (in terms of potential for high severity and undetected scenarios to be propagated through treatment delivery) are located. Many of these high-risk events involve erroneous specification of process inputs that are essential for driving the downstream planning process. These include physician-related failures, such as selection of the wrong imaging study for delineating anatomy, incorrect image interpretation, grossly erroneous CTV delineation, and erroneous treatment directives, and also more directly physics-related failures, such as poor commissioning of the planning system or equipment, incorrect use of Boolean structures, and incorrect interpretation of previous treatment doses. On the other hand, applying an exact duplicate of the TG-100 risk analysis at the institutional level should be avoided or undertaken with much caution because the specific TG-100 prioritization of risk scenarios may not apply to that clinic. For example, the risk of occurrence (O) of failures in “Transfer images and other DICOM Data” depends heavily on the interface between imaging and planning software; software that requires user selection of files and destinations and has few automated consistency and completeness checks can result in different kinds and rates of error than more automated software interfaces. In using the TG-100 risk scenario prioritizations, the reader should also bear in mind the important limitation that the input for populating the process trees and the FMEA analysis was provided primarily by medical physicists (the authors of this report), resulting in a reasonable FMEA. Had equivalent input from radiation oncologists, dosimetrists, therapists, and nurses been included, additional error pathways and different risk evaluations may have been identified. Ford et al.,9 who recently reported on one of the first radiation therapy FMEAs, concluded that involving the entire radiation therapy delivery team in the time-consuming analysis process yielded many benefits above and beyond the FMEA, including improved team cohesiveness and safety consciousness, established open lines of communication, a shared awareness of system weaknesses and strengths, and numerous suggestions for improving process flow. As is often the case with commissioning and acceptance testing, benefit derives from taking the journey together rather than arriving at the destination from different directions.

Further research is needed to evaluate what combination of customized, institution-specific analyses and applications of generic risk analyses provides the most cost-effective approach to engineering safe and robust radiation therapy processes. It is recommended that:

  • The AAPM, in collaboration with other organizations, organize and fund a series of process-design demonstrations, each of which involves leading a selected clinical practice, such as SBRT, through the processes of risk assessment and QM system design under the guidance of project trainers. By comparing differences and similarities across the various clinical practices, the appropriate balance between generic and customized analysis could be identified.

8.C.2. Sensitivity, error propagation, and process control studies

The major focus of TG 100 is improving patient safety and the quality of treatments, with concentration on the causes, detection, and mitigation of failure modes that, if propagated through the clinical process, could result in the inappropriate delivery of a therapeutic dose of radiation and harm to the patient. With a few exceptions,81–83 this approach differs significantly from previous AAPM task group reports, which focus mostly on QA tests of devices and planning systems to ensure they achieve and maintain acceptable accuracy in the planning and administration of radiation therapy. From the TG-100 perspective, device-centered QA protocols are essential measures for preventing random device failures and/or systematic device misunderstandings from propagating through the system. FMEA and FTA techniques are applicable to dose delivery errors that have the capability of compromising patient outcomes on a statistical basis. Some of the results presented in Sec. 9 provide a simplified model for rationally assigning device test frequencies or action thresholds, although this model requires much new data and analysis and a better understanding of how device performance influences dose delivery accuracy. The detailed understanding of how to utilize this information may be improved by use of evaluation tools like “confidence-weighted dose distributions”84 and equivalent uniform dose (EUD) as a surrogate for assessing sensitivity of clinical outcomes to setup and device performance uncertainties.85,86 Additional experience applying this approach to a broader array of device performance endpoints and clinical cases is needed.

Another dimension of the problem is determining action levels and test frequencies when the various device parameters to be controlled exhibit both random fluctuations and underlying time trends or systematic problems. Examples that have been studied include IMRT plan verification by means of isocenter dose measurements in a hybrid phantom87 and daily Linac output measurements.88 The goal of QM test development is to devise a protocol that controls the target parameter within the limits specified by the appropriate sensitivity study with minimum effort, e.g., repeated output measurements and interventions such as changing the Linac monitor chamber sensitivity. Statistical process control techniques89,90 can be helpful in identifying underlying trends (systematic offsets) in settings where the QA measurements have random fluctuations comparable to the desired clinical performance level so that action levels for intervening in the process can be set rationally. More conventional statistical modeling approaches89 can be used to estimate both QA test sampling intervals and action levels needed to reduce the probability of a device failure, e.g., reduce to an acceptable level the probability that systematic drift in calibration would result in a dose delivery error exceeding some predetermined value. An even more difficult challenge is to apply these kinds of process control techniques to nondevice procedural problems and behavior.
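As a concrete illustration of the statistical process control idea, the sketch below builds a simple Shewhart individuals chart for a series of daily Linac output readings and flags both points outside the control limits and long one-sided runs that suggest systematic drift. The readings, the 2.66 × (mean moving range) limit constant, and the run-of-eight rule are generic SPC conventions assumed for this example, not TG-100 tolerances; in clinical use the center line and limits would be established from a baseline measurement period.

```python
# Minimal sketch of a Shewhart "individuals" control chart for daily Linac
# output readings. Data, the 2.66*MRbar limit constant, and the run-of-8
# drift rule are generic SPC conventions used for illustration only;
# they are not TG-100 tolerances.
import statistics

def individuals_chart(outputs):
    """outputs: daily readings normalized to baseline output (1.00 = nominal)."""
    center = statistics.mean(outputs)
    moving_ranges = [abs(b - a) for a, b in zip(outputs, outputs[1:])]
    mr_bar = statistics.mean(moving_ranges)
    ucl = center + 2.66 * mr_bar  # upper control limit
    lcl = center - 2.66 * mr_bar  # lower control limit
    flags = []
    side, run = None, 0
    for day, x in enumerate(outputs, start=1):
        if x > ucl or x < lcl:
            flags.append((day, x, "outside control limits"))
        current = "high" if x > center else "low"
        run = run + 1 if current == side else 1
        side = current
        if run >= 8:  # long run on one side of center: possible systematic drift
            flags.append((day, x, "run of 8 suggests drift"))
    return center, lcl, ucl, flags

# Illustrative series with a slow upward drift in output.
daily = [1.000, 0.999, 1.001, 0.998, 1.002, 1.003, 1.004, 1.004,
         1.005, 1.005, 1.006, 1.007, 1.008, 1.009]
center, lcl, ucl, alarms = individuals_chart(daily)
for alarm in alarms:
    print(alarm)
```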

More research is needed in these areas, including collection of more data documenting the statistical profile of device performance characteristics; more systematic sensitivity studies; and the development of standardized approaches to defining device performance tolerances, action levels, and sampling frequencies.

8.C.3. Observational studies and risk analysis validation

In contrast to many industrial applications of FMEA and FTA, a major limitation of the efforts of TG 100 and others91,92 to apply risk analysis techniques to radiation therapy processes is the lack of measured data on occurrence and detection probabilities. All of these studies are forced to rely upon expert consensus opinion to subjectively estimate the required probability data. While several studies report overall error rates in radiation therapy93,94 using a variety of data collection methodologies and error taxonomies, few studies address error and detection rates of common component subtasks of the radiotherapy process. Barthelemy-Brichant95 reported an error rate of 0.46% in transcribing field setup parameters from paper records into a treatment unit’s computer system. Fraass et al.96 reported error rates for many of the components involved in the treatment delivery process both for manual and computer-controlled delivery methods. Studies addressing error rates and underlying causes of common planning and delivery subtasks would be of great value in reducing the subjectivity characteristic of currently available radiotherapy risk analyses.

  • TG 100 recognizes that designers and manufacturers of treatment planning systems, treatment delivery systems, and other devices used in radiation therapy perform extensive pre-release risk analysis of their product with regard to its robustness and mechanical, electrical and dosimetric reliability. It is further recommended that they undertake a similar approach to testing and improving the clinical usability of their products, perhaps in collaboration with beta test sites to determine the error rates and underlying causes of failure modes in common planning and delivery subtasks and make these studies available to the radiation oncology community. Where appropriate, the manufacturers might want to use the TG-100 definitions in performing their FMEA/FTA analysis.

  • It is also recommended that the radiation oncology community gather data for occurrences and detectability for various clinical processes in a systematic approach so that models can be developed for them.

Eventually, validation of the benefit of using a risk-based QM approach should be performed at the local clinic. Probabilistic risk analysis is one method that can be used to semiquantitatively validate risk analyses based on subjectively estimated component error rates. An example of this type of effort is the study of Ekaette et al.,97 who developed a fault-tree analysis of their clinic’s radiation therapy delivery process, populated the fault tree with probabilities solicited from expert reviewers, and compared the overall rate of treatment delivery errors predicted by the probabilistic fault tree analysis (0.4%) with the observed error rate (0.1%–0.7%).
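To make this kind of semiquantitative validation concrete, the sketch below propagates subjectively estimated basic-event probabilities through simple OR and AND gates to an overall per-course undetected-error rate, which could then be compared with observed incident rates in the spirit of the Ekaette et al. study. The tree structure, event names, and probabilities are invented for illustration and assume statistically independent events.

```python
# Minimal sketch: propagate subjectively estimated basic-event probabilities
# through OR and AND gates to a per-course failure rate. Event names, numbers,
# and tree structure are invented for illustration; events are assumed
# independent, which a real fault tree analysis must justify.

def or_gate(probs):
    """Failure if ANY input fails: P = 1 - prod(1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    """Failure only if ALL inputs fail (e.g., an error AND every check missing it)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical per-course expert estimates (not measured or published values):
contour_error         = 0.01   # gross contouring error occurs
peer_review_misses_it = 0.20   # peer review fails to intercept it
transcription_error   = 0.005  # parameter transcription error occurs
chart_check_misses_it = 0.10   # chart check fails to intercept it

undetected_contour = and_gate([contour_error, peer_review_misses_it])
undetected_transcription = and_gate([transcription_error, chart_check_misses_it])
overall = or_gate([undetected_contour, undetected_transcription])

print(f"Predicted undetected-error rate per course: {overall:.3%}")
```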

8.C.4. Incident reporting and taxonomic analyses

As noted above, there is little hard observational data available for populating FMEAs or FTAs with occurrence and detection probabilities. Most radiotherapy risk analyses are prospective models of planning and delivery based on the experience, expert knowledge, and expectations of the treatment team members who participate in the analyses. The major connections between prospectively constructed risk analyses and empirical reality are observed error rates, near misses, and reports of large/catastrophic incidents. Medical error taxonomies, of which two have been developed specifically for radiation therapy,68,98 are intended primarily to support root cause analysis. However, with the advent of prospective approaches to quality and safety as discussed in this document, there is an opportunity to explore the possibility of informing FMEA and FTA using actual clinical data.

  • During the writing of the TG-100 report the Work Group on Prevention of Errors in Radiation Oncology of the AAPM published a document50 entitled “Consensus recommendations for incident learning database structures in radiation oncology.” The group has provided consensus recommendations in five key areas: definitions, process maps, severity scales, causality taxonomy, and data elements. For consistency, the terminology and data elements comprising these recommendations should be examined for applicability to prospective quality management strategies.

  • The experience of the airline industry99 demonstrates the value of comprehensive adverse-event reporting and root-cause analysis as tools for improving system and process safety. However, the utility of these tools extends beyond just retrospective analysis. The European ROSIS system,45 the ASTRO-AAPM initiated RO-ILS systems,55 and the Center for the Assessment of Radiological Sciences’ Radiotherapy Incident Reporting and Analysis System56 are all examples of databases in radiotherapy that can be used to inform an FMEA.

9. EXAMPLE APPLICATION OF TG-100 METHODOLOGY TO IMRT

9.A. Introduction

To illustrate the application of the risk analysis methodologies described in Secs. 4–8 and to demonstrate their value to clinical physicists, TG 100 performed the design, process mapping, FMEA, and FTA of a generic, but clinically realistic, IMRT process. To get the most out of this exercise, it is necessary to first read at least the Preface to the report and preferably Secs. 4–6.

Section 9.B presents the methodology and results of the FMEA and FTA analysis. Section 9.C and Appendixes C1–C3 and E–G set out an example QM program derived by consensus from these results; Sec. 9.E and Appendix G summarize the resulting quality management recommendations; and the summary and conclusions are given in Sec. 9.F.141

The QM program consists of recommendations that encompass the planning and delivery subprocesses. These include clinical process changes, documentation and training requirements, and culture changes, as well as traditional device- and process-oriented QA and QC checks. The recommendations in this report are not to be viewed as prescriptive practice guidelines or universally applicable recommendations; this document cannot be used like TG-40,2 TG-142 (Ref. 1), and similar prescriptive guidance documents. These recommendations, and the risk analyses upon which they are based, are first and foremost pedagogic devices intended to illustrate to the reader how to develop risk analyses for their own clinical processes and how to use the results for designing and formulating their own QM and decision-making processes for IMRT and other advanced-technology treatment procedures. The operational recommendations of this report may serve as a starting point for readers who would like to adapt the TG-100 IMRT analysis to their own IMRT clinical process without performing their own clinic-specific FMEA and FTA from scratch. TG 100 emphasizes that the operational recommendations presented in Secs. 9.C and 9.E and Appendix G (Ref. 141) are based on the prioritization of risk of a generic IMRT process that represents the consensus of ten senior physicists and one physician, and is limited to the system vulnerabilities identified by this group. The TG members sought input from other members of the IMRT team from their respective clinics.

One limitation of the TG-100 analysis is that it was produced by a physics-based task group that included only one physician and no dosimetrists, therapists, nurses, or administrative support personnel. The TG has tried to include enough examples of the methodology so that more inclusive groups can develop QM based on FMEA and FTA for their department’s specific processes and methods, but such groups should strive for representative input from all involved institutional personnel.

It is notable that the analytical FMEA approach led TG 100 to propose many recommendations that are consistent with the recommendations and checklists of the ASTRO white paper.72 Several FMEAs for large-scale radiation therapy processes have also been published. Among these, two are single-institution analyses,9,16 one is for a hospital network,15 and one is for electron beam IORT.100 In common with the TG-100 analysis, these emphasize that FMEA is a valuable tool, although the analysis requires dedication and a multidisciplinary team approach. The TG-100 report also proposes that FMEA will be helpful in identifying high-risk features as new technology is introduced, and several FMEAs dealing with specific radiation therapy equipment have been published.11,101,102

9.B. TG-100 risk analysis of a generic IMRT clinical process

In this section, the risk analysis of a generic IMRT process is described. As previously explained, this consists of (1) mapping the process (Sec. 9.B.1), (2) failure modes and effects analysis (Sec. 9.B.2), and (3) fault tree analysis (Sec. 9.B.3).

9.B.1. IMRT process mapping

Because IMRT as performed at each TG member’s institution followed a unique pattern, especially with respect to the order in which steps were performed, the specific equipment used, and the staff responsibilities assigned, a specific example process (based loosely on one institution’s process) was selected. This choice is not an endorsement of that particular process, although, interestingly, the TG members ended up agreeing on the overall QM guidelines that emerged. The TG agreed that the twelve subprocesses listed in Table IX and shown in Fig. 5 are the main branches of the IMRT process tree that fall within the purview of therapeutic medical physicists.

TABLE IX.

Identified steps and failure modes for example FMEA of IMRT.

Process number | Process description | No. of steps in process | No. of failure modes
1 | Patient database information entered | 1 | 3
2 | Immobilization and positioning | 4 | 7
3 | CT simulation | 10 | 14
4 | Other pretreatment imaging | 6 | 7
5 | Transfer images and other DICOM data | 3 | 8
6 | Initial treatment planning directive (from MD) | 9 | 9
7 | RTP anatomy contouring | 15 | 31
8 | Treatment planning | 14 | 53
9 | Plan approval | 2 | 11
10 | Plan preparation | 11 | 30
11 | Initial treatment (Day 1) | 7 | 20
12 | Subsequent treatments (Day N) | 9 | 23
FIG. 5.

Process map for IMRT in the absence of any quality management. The black arrows show the normal flow of the process, proceeding from left to right on the largest scale and from outward to inward within a given step. The red numbers indicate the hazard ranking of the most hazardous 20%–25% of the steps, as identified by high risk priority number values. For example, an 8 next to a step indicates that that step is the 8th most hazardous step within the 20% most risky categories. A step with several numbers indicates the ranking of that step within the top 20% most risky steps for different failure modes. Green text denotes failure modes with S ≥ 8, regardless of whether they were in the top 20% most risky categories. The colored arrows show the flow of information or actual physical material between one subprocess and another. Specifically, the purple arrows show how immobilization and positioning impact steps further downstream; the light blue arrows show the downstream flow of anatomic information, and the dark green arrows the transfer of initial images. Green circles represent a congregation of high severity steps. Red circles are drawn around those steps with a high concentration of identified hazardous steps. A red circle drawn around a green circle indicates a congregation of steps that are both hazardous and severe. QM measures in the earlier step would prevent errors from entering the later step.

Other branches, including “imaging and diagnosis” and “consultation and decision to treat” are dominated by diagnostic radiology staff, physicians, or others and were considered to be outside the scope of the task group. Clinics undertaking FMEA for IMRT are encouraged to examine their own practices, although the TG-100 example may be general enough to include the workflow in many clinics.

Because an important goal of the FMEA is to develop the most effective QM program without assuming the use of customary QM procedures, several familiar subprocesses that are purely quality management steps were omitted from the present FMEA. Omitted steps include pretreatment chart checks, routine Linac and IMRT QA, physician review, and weekly chart checks, since including those steps would bias the results. The expectation is that those QA steps that were truly necessary would find their way back into the QM program as a result of the FMEA/FTA.

9.B.2. IMRT failure modes and effects analysis

To create the FMEA, the TG reached a consensus on the steps within each subprocess and identified as many failure modes for each step as they could imagine. Based on its collective experience, the TG listed possible causes for each failure mode and described clinical situations where they felt the failure could occur. During the later analysis, additional failure modes were discovered that were not initially included in the FMEA. This is a common experience, and the TG recommends that failure modes discovered later should simply be added to the analysis and/or addressed at that time. The FMEA serves as a tool toward improved safety and quality and is not an end in itself.

A total of 216 FMs were eventually included in the analysis. The distribution of these FMs among the different boughs of the process tree (Fig. 5) is presented in Table IX. The entire FMEA analysis is shown in Appendix C1 listed in order of process, Appendix C2 listed in order of decreasing average RPN, and Appendix C3 listed in order of decreasing severity score (S, as defined in Sec. 5.B).141 The previously described consensus nomenclature for severity (Table I) and scales for occurrence, severity, and lack of detectability (Table II) were used in the analysis. Details of the creation of the example FMEA are described in Secs. 9.B.2.a–9.B.2.c.

9.B.2.a. Assignment of O, S, and D values.

A spreadsheet (Appendix C1)141 was created listing each process step, each step’s FMs, and the potential causes of failure associated with each FM. O, S, and D values were then assigned for the combination of each FM and its corresponding causes. Initially, the TG worked through the FMs as a group, attempting to determine O, S, and D values by consensus. Given the diverse and geographically dispersed membership, this was inefficient. Thus, the TG decided to work through the spreadsheet independently and then evaluate the consistency with which O, S, D, and RPN (O × S × D) values were determined. Nine members completed this exercise, each supplying an individual estimate of O, S, and D for each combination of FM and cause, based on their individual experiences. The entire group discussed the evaluations and then pooled them as described in Sec. 9.B.2.b. A similar process of individual evaluation followed by group consensus is described in relation to departmental FMEAs (Refs. 9, 15, and 16) where geographic separation was not an issue. In general, the group average O, S, and D assignments should not be applied without careful consideration of local conditions.

As previously described, the FMEA was performed assuming no deliberate QA or QC measures, such as those recommended by TG-40 for the entire radiation therapy process or TG-142 for Linac QA. Thus estimates of O and D were based entirely on checks that are inherent in routine clinical processes downstream. Despite the lack of specific QA and QC checks, there are opportunities to detect failures such as faulty immobilization, which causes problems with patient set-up at “Day 1” or “Day N” treatment, leading to a medium value for lack of detectability D. On the other hand, without conventional Linac QA, incorrect dose calibration would be very difficult to detect during an individual patient’s treatment, leading to a high D value for this FM. As will be seen in Sec. 9.C, Appendix G,141 and the checklists (Tables IV–VIII) many conventional QM steps find their way back into the process. Of note, although TG members assigned O and D as if no QA were in place (allowing failures to be detected only through normal procedural steps further downstream), the evaluators’ individual experiences and biases undoubtedly influenced their O and D values in the hypothetical absence of traditional QA.

9.B.2.b. Method of analysis.

Several methods of analysis of the nine sets of O, S, D, and RPN values for the individual FMEA results were performed in an attempt to identify the highest risk steps in the process, with the intention of concentrating analysis and quality management program work in these areas. While several approaches to the consensus determination of the most and least hazardous steps can be envisaged, the TG chose the following two methods.

First, the median, average, and standard deviation of the O, S, D, and RPN values were calculated for each step. RPN values assigned to the 216 failure modes by individual evaluators ranged from 2 to 720, and the median RPN values ranged from 8 to 441. In the first method, the median RPNs for all steps were ordered and thresholds for the highest 10% and 20% (HM10 and HM20) and lowest 10% and 20% (LM10 and LM20) median RPNs were determined. A process step was included in the 20% (10%) most hazardous group if at least five evaluators assigned it an RPN above HM20 (HM10), and a similar analysis was used for the lowest priority steps. Analysis of HM10, HM20, LM10, and LM20 showed good interevaluator agreement that these FMs were highly or minimally hazardous, even though the quantitative risk estimates (RPNs) differed.

A second method identified the most and least hazardous steps according to the highest (or lowest) average values of RPN. Average RPN values ranged from 19 to 388. Process steps with the highest ranked 20% of FMs were marked on the process tree (Fig. 5) with their ranking numbers in red. This visually highlights particularly hazardous branches and boughs of the process map. Steps occurring before or after the highest ranked 20% of failure modes were also shown in Fig. 5 to indicate where the high-risk steps lie in the overall process. Process steps with a ranking close to the highest ranked 20% of failure modes were also marked in Fig. 5 because many steps at the 20th percentile level had almost equivalent RPN scores and addressing them was considered of equal importance. Additionally, the steps where failure was judged to result in a high severity (average S ≥ 8) were given special attention even if their overall RPN was not high. The rationale is that prevention of these failures should have high priority without regard to the TG members’ estimates of their likelihood of occurrence or detection. These steps were marked on the process tree in green.
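The pooling and ranking described above can be expressed compactly. The sketch below computes per-evaluator RPN = O × S × D values, pools them into median and mean RPNs for each failure mode, keeps the top 20% by mean RPN, and separately flags high-severity (mean S ≥ 8) modes. The failure-mode names and scores are invented placeholders, not entries from the TG-100 spreadsheet.

```python
# Minimal sketch of pooling multi-evaluator FMEA scores. Each failure mode
# has one (O, S, D) tuple per evaluator; RPN = O * S * D. The failure-mode
# names and scores below are invented placeholders, not TG-100 data.
import statistics

scores = {
    "wrong image set selected": [(6, 8, 8), (5, 8, 9), (7, 9, 7)],
    "PTV margin inconsistent":  [(7, 5, 8), (8, 6, 8), (7, 5, 7)],
    "minor isocenter typo":     [(3, 4, 2), (2, 3, 3), (4, 4, 2)],
}

summary = {}
for fm, osd in scores.items():
    rpns = [o * s * d for (o, s, d) in osd]
    summary[fm] = {
        "median_rpn": statistics.median(rpns),
        "mean_rpn": statistics.mean(rpns),
        "mean_S": statistics.mean(s for (_, s, _) in osd),
    }

# Rank by mean RPN and keep the top 20% (at least one mode).
ranked = sorted(summary, key=lambda fm: summary[fm]["mean_rpn"], reverse=True)
top_risk = ranked[:max(1, round(0.2 * len(ranked)))]

# High-severity modes (mean S >= 8) are flagged regardless of RPN rank.
high_severity = [fm for fm in summary if summary[fm]["mean_S"] >= 8]

print("Top-risk failure modes:", top_risk)
print("High-severity failure modes:", high_severity)
```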

To gain further insight, each evaluator’s O, S, D, and RPN values were plotted individually for selected high and low risk steps, and the correlations between the scores assigned by two different evaluators and between an evaluator’s scores and the median values were examined. This analysis illustrated that individual evaluators were in qualitative agreement as to the most and least hazardous steps, despite frequent quantitative differences in their individual RPN values (see Appendix D for further discussion).141 Such statistical methods would not be necessary in a single-institution FMEA of its own process, although there would likely be some individual scoring differences requiring averaging, discussion, or negotiation.9,15,16 Ford et al.9 note that the rank order of the failures is more important than the absolute scale of the RPN values, so it would be of interest to know if risk rankings of single-institution FMEAs of the IMRT process are very different from those of TG 100 and, if so, to understand why.

The next step in the process was to use the entire FMEA together with FTA to develop a risk-based QM program. In clinical use, the resulting program would be adopted for a trial period and then re-evaluated using the department’s error-reporting mechanisms. For some steps, a successful program would lead to a decrease in the RPN values of previously high-ranked failure modes, due to reduced O or D values. Persistent high-risk or high S failure modes and newly realized failure modes would prompt new QM efforts.

9.B.2.c. Results.

Table X shows the ten most hazardous steps evaluated according to the highest average RPN values.

TABLE X.

The ten highest average RPN steps and the corresponding potential failure modes, potential causes of failure, and potential effects of failure from the TG100 FMEA.

Rank (process tree step #) | Subprocess # and description | Step description | Potential failure modes | Potential causes of failure | Potential effects of failure | Avg. O | Avg. S | Avg. D | Avg. RPN
1 (#31) | 4—Other pretreatment imaging for CTV localization | 6—Images correctly interpreted (e.g., windowing for FDG PET) | Incorrect interpretation of tumor or normal tissue | Inadequate training (user not familiar with modality), lack of communication (interdisciplinary) | Wrong volume | 6.5 | 7.4 | 8.0 | 388
2 (#58) | 7—RTP anatomy | Delineate GTV/CTV (MD) and other structures for planning and optimization | 1—>3*sigma contouring errors: wrong organ, wrong site, wrong expansions | Lack of standardized procedures, hardware failure (defective materials/tools/equipment), inadequate design specification, inadequate programming, human failure (inadequate assessment of operational capabilities), human failure (inattention), human failure (failure to review work), lack of staff (rushed process, lack of time, fatigue) | Very wrong dose distributions, very wrong volumes | 5.3 | 8.4 | 7.9 | 366
3 (#204) | 12—Day N treatment | Treatment delivered | Linac hardware failures/wrong dose per MU; MLC leaf motions inaccurate, flatness/symmetry, energy—all the things that standard physical QA is meant to prevent | Poor design (hardware), inadequate maintenance, software failure, lack of standardized procedures (weak physics QA process), human failure (incorrectly used procedure/practice), standard Linac performance QM failure (not further considered here), inadequate training | Wrong dose, wrong dose distribution, wrong location, wrong volume | 5.4 | 8.2 | 7.2 | 354
4 (#48) | 6—Initial treatment planning directive (from MD) | Retreatment, previous treatment, brachy, etc. | Wrong summary of other treatments; other treatments not documented | Lack of staff (rushed process, lack of time, fatigue), human failure (inattention), lack of communication, human failure (reconstructing previous treatment), human failure (wrong info obtained), information not available | Wrong dose | 5.3 | 8.6 | 7.3 | 333
5 (#59) | 7—RTP anatomy | Delineate GTV/CTV (MD) and other structures for planning and optimization | 2—Excessive delineation errors resulting in <3*sigma segmentation errors | Lack of standardized procedures, availability of defective materials/tools/equipment, human failure (materials/tools/equipment used incorrectly), human failure (inadequate assessment of materials/tools/equipment for task), inadequate design specification, inadequate programming, inadequate training, human failure (inadequate assessment of operational capabilities), human failure (inattention), human failure (failure to review work), lack of staff (rushed process, lack of time, fatigue) | Wrong dose distribution, wrong volumes | 5.9 | 6.6 | 8.0 | 326
6 (#65) | 7—RTP anatomy | PTV construction | 3—Margin width protocol for PTV construction is inconsistent with actual distribution of patient setup errors | Lack of standardized procedures, lack of communication, inadequate training, human failure (inattention), human failure (failure to review work), lack of staff (rushed process, lack of time, fatigue) | Wrong dose distribution, wrong volumes, or suboptimal plan | 7.3 | 5.4 | 7.9 | 316
7 (#136) | 9—Plan approval | 1—Plan OK to go to treatment | 3—Bad plan approved | Lack of communication, human failure (inattention), lack of standardized procedures, human failure (incorrectly used procedure/practice), inadequate training | Very wrong dose, very wrong dose distribution, very wrong volume | 4.9 | 8.0 | 7.9 | 313
8 (#200) | 12—Day N treatment | Set treatment parameters | 2—Special motion management methods (e.g., gating, breath-hold) not applied or incorrectly applied | Poor design (software), poor design (hardware), inadequate training, human failure (operator not observing counterintuitive patterns on screen) | Wrong dose, wrong dose distribution, wrong location, wrong volume | 6.2 | 6.7 | 7.11 | 310
9 (#46) | 6—Initial treatment planning directive (from MD) | Specify special instructions, viz. pacemaker, allergies, voiding, bowel prep, etc. | Special instructions not given; wrong special instruction (e.g., allergy, pacemaker) | Lack of standardized procedures (documentation), lack of staff (rushed process, lack of time, fatigue), human failure (inattention), lack of communication, human failure (wrong or inadequate information obtained) | Non-radiation-related injury | 5.3 | 8.8 | 6.5 | 306
10 (#126) | 8—Treatment planning | 13—Evaluate plan (DVH, isodose, dose tables, etc.) | 1—Inadequate evaluation | Human failure (not enough time/effort spent), inadequate training, poor evaluation strategy, human failure (incorrect final prescription) | Wrong dose, wrong dose distribution | 5.6 | 7.0 | 7.1 | 303

Standard-of-practice physics QA (independent checks of treatment plans and both physical and electronic charts, Linac QA, rigorous commissioning of treatment planning systems, etc.) would substantially lower the RPN value for only a few of these failure modes, primarily through reduction in lack of detectability (D). Human factors, not device performance failures, were the most commonly cited causes of the highest risk failures. This is consistent with observations in other studies of patient safety in radiation therapy.91,103 Factors noted included inadequate training, intra- and interdepartmental miscommunication, lack of consistent procedural guidelines, and loss of attention by people performing a task. A somewhat less frequently cited cause was time pressure. Physicians are deeply involved in seven of these highly hazardous steps, emphasizing the necessity of a fully interdisciplinary approach to radiation therapy QM.

The distribution over the process tree of the failure modes in the 20% most hazardous categories shown in Fig. 5 suggests the following observations:

  • 1.

    Several gross tumor volume (GTV) and clinical target volume (CTV) delineation steps, which can lead to a geographic miss, emerged as high-risk failure modes. Target structures are usually defined by a physician, and QM measures, such as physician peer review, are required to keep such failures from propagating through the process to affect patient treatment. Planning target volume (PTV) delineation was judged less hazardous, perhaps because the PTV is derived from the GTV and CTV and because the PTV is in fact a QM measure aimed at reducing the impact of setup and localization errors.

  • 2.

    The “initial treatment planning directive” is the physician’s instructions to the planner regarding the planning goals and constraints. Most initial treatment-planning directive steps carry high risk, high severity, or both. Although physicists can take limited QM measures on their own, such as establishing standardized procedures and ensuring adequate dosimetry staff training, physician peer review may be the most direct and effective QA measure for this subprocess.

  • 3.

    Treatment planning provides many opportunities for failure, especially since nearly all treatment planning failures become systematic treatment failures if not found. The FMEA supports the traditional concentration of physics QM efforts in the area of treatment planning and indicates the necessity of these efforts. Many of the riskiest steps are concerned with specifying anatomical regions of interest. Dose calculation, image transfer, and conversion of contoured regions of interest to 3D structures for plan optimization and evaluation were identified as hazardous steps. Conventional QM strategies, including planning-system commissioning and routine planning QA procedures, can reduce the risk of many of these FMs.

  • 4.

    Failures at many steps in the plan approval and plan preparation subprocesses may have serious consequences because of their high S scores (10 of 11 failures in plan approval and 13 of 30 failures in plan preparation have average S ≥ 8).

  • 5.

    Treatment delivery steps are critical. Most of the individual treatment delivery steps do not have the very highest RPNs, since there are many opportunities to detect treatment-related errors before they affect a complete treatment course, unlike treatment preparation errors which typically introduce systematic errors affecting the entire treatment course. However, treatment delivery does include a number of failures that are within the top 20% of hazardous categories. This is another check on the FMEA as it confirms that it is reasonable to subject treatment delivery processes to strong QM measures. Linac hardware failure in the absence of normal device QA practices was rated in the upper 10% of risk.

  • 6.

    The high RPNs associated with motion management may reflect the fact that respiratory and other motion management methods are relatively new and unfamiliar to many institutions, that routine planning and delivery processes have not been fully modified to address these issues, and that the sensitivity of overall dose-delivery accuracy to execution errors and uncertainties of motion-management has not yet been determined.

  • 7.

    Several high-risk failures in the treatment delivery steps are related to failure to act, wrong actions or actions carried out at the wrong times. As is often the case when something in a complex process happens incorrectly, the reaction to that problem can cause new and potentially worse errors downstream.

9.B.3. IMRT Fault Tree Analysis (FTA)

9.B.3.a. General features of the fault tree derived from the TG-100 FMEA.

After the FMEA analysis (above) was performed, an FTA, as described in Sec. 5.C, was also performed, based on the failure modes identified earlier. FTA is a tool that allows one to visualize potential locations for effective and/or efficient QM measures, since the propagation of FMs through the process is illustrated more visually in the FTA than in the FMEA spreadsheets. Appendix E shows a complete fault tree for the entire IMRT process. Each failure mode is shown as a box with its associated RPN near its upper right corner; red RPNs indicate modes in the most hazardous 20%.141 Appendix F is a portion of the fault tree with the addition of QM actions that block propagation of a FM to patient treatment.141 The overall fault tree exhibits several interesting characteristics:

  • 1.

    Unlike most industrial FTAs, this fault tree is not very deep (i.e., no extensive branching into substeps). Its dimensions are unusual, being relatively tall (many separate failures) and shallow. Although this may be an artifact of the generic IMRT process used, similar features have been observed in FTAs of other medical processes.

  • 2.

    Another unusual feature is that most FMs have a large number of inputs, seen as OR gates in the FTA. Such a pattern implies a very high level of hazard, since a failure in any of the unprotected inputs would produce an overall failure.

  • 3.

    The progenitor causes, the events on the far right of the fault tree that initiate each of the failure pathways, are mostly latent errors or conditions, i.e., persistent, organizational failures or deficiencies that increase the likelihood that staff members will make active errors, e.g., fail to execute process steps correctly. Finding and correcting latent conditions help reduce the probability of occurrence of entire classes of problems, since latent errors are more likely than active errors to cause failures along many diverse branches of the fault tree.

  • 4.

    A particular latent condition (e.g., lack of a particular procedure) found at one location in the process may not be the same progenitor condition at other locations, even though both are described as “lack of procedure,” since different procedures could be lacking at different process steps. Thus, fixing each particular lack of procedure by developing a written procedure specific to that activity would have only a local effect. However, the fact that the cause “lack of procedure” occurs many times may imply a common latent condition—for example, departmental management does not sufficiently emphasize rationalization and formalization of clinical processes. A common finding in risk analysis is that procedures define a reference level or set of process outcome expectations that can be used to identify outcomes that vary from the norm.

  • 5.

    Table XI lists the most common progenitor causes for the failure modes graphically portrayed by the FTA.

TABLE XI.

Most common classifications for the possible causes for the failure shown in the IMRT fault tree analysis in Appendix E (Ref. 141).

Category | Occasions
Human failures | 230
Lack of standardized procedures | 99
Inadequate training | 97
Inadequate communication | 67
Hardware/software failure | 58
  Hardware | 9
  Software | 44
  Hardware or software | 5
Lack of staff | 37
Inadequate design specifications | 32
Inadequate commissioning | 18
Use of defective materials/tool/equipment | 12

The dominant category is human failure. In the FMEA, TG members suggested various forms and underlying causes of human failure; interestingly, poor employee performance was rarely cited as a major cause. Human errors have many causes: e.g., loss of attention, biased expectations, distractions due to multiple demands, bad judgment in the face of a deviation from the normal process, and fatigue or overwork. Human failure rates can be most efficiently reduced by “forcing functions.” A forcing function is defined as something that prevents a failure-causing behavior from continuing until the problem has been corrected; Linac interlocks are familiar examples104 which often prevent an error from being made in the first place. Unfortunately, forcing functions often require highly technical solutions and are often works-in-progress rather than immediate solutions. Close collaboration between clinicians and vendors is important in this regard. Human failure rates can also be reduced by strategies that address underlying causes by providing, for example, properly lighted and distraction-free environments, good ergonomic design of computer and device graphic user interfaces (GUIs), and efficient and orderly flow of information. These strategies, along with good supervision and training in the establishment of a safety culture, can reduce, but never completely eliminate, human failures. Published studies85,105–113 reporting radiation therapy computational and transcription task error rates suggest failure rates of the order of 0.5%–1%. This likely represents the best that can be achieved under optimal conditions. The brief sample event scenarios in column L of Appendixes C1–C3 are highly generic. Individual clinics are advised to examine the relevance of these scenarios to their own practices.141

The next two most common categories, lack of standardized procedures and inadequate training, along with lack of communication and information problems, all reflect latent organizational flaws. These problems cannot be addressed efficiently by adding more QA or QC checks; rather, they require redesign or at least improved documentation of the current process. Establishing standard procedures and protocols, assuring personnel are trained appropriately (with exams for verification), and designing clearly understood lines of communication and information flow create an environment that reduces the likelihood of occurrence of many potential paths for failures. Standard procedures and protocols suggested by the TG-100 analyses are given in the example checklists of Tables IV–VIII. It is also very important for department managers to provide a work environment that is free of clutter, interruption, and distractions.

Most of the causes grouped under lack of staff result from administrative decisions. As an example scenario, if an experienced dosimetrist is not available to plan IMRT, the physicist often both generates the treatment plan for IMRT and performs the pretreatment plan check. This is a dangerous situation, since an error is more readily detected by an independent check than by the person who made the error in the first place. Inadequate staffing levels can also produce a rushed environment or fatigue, leading to user errors. In general, such problems cannot be addressed merely by process design, documentation initiatives, or by adding QA checks but can only be solved through administrative decisions that are informed by current staffing studies.114–116

The large number of potential failures attributed to hardware and software failures and to design failures illustrates how highly dependent radiation therapy quality is on robust and accurate equipment performance. Preventing device failures from propagating into events requires: (1) careful specification of device performance characteristics during purchase, including reliability and safety features; (2) comprehensive commissioning, in the context of the process to be used, to assure proper operation of equipment; (3) training of personnel on how to recognize and respond to machine failures; and (4) appropriate periodic equipment QA that monitors its operation.

Comprehensive commissioning identifies inadequacies in both equipment and procedures before beginning patient treatment. The commissioning not only checks the operation of equipment and provides the information necessary for its use but also establishes the limits of reliable operation for equipment and systems. Commissioning of procedures entails coordination between all involved personnel as tested by walking them through trial runs. Time spent during commissioning can save time and increase reliability during routine operation. Commissioning provides a detailed and real-world understanding of a device’s features, providing the basis for rationally integrating it into the department’s clinical practice. It is very important that hospital administrators and department leaders allow adequate time and personnel resources for commissioning tasks. This is an area where “haste makes waste.”

9.B.3.b. Simple example of FTA guidance in QM design.

As seen in the discussion above, many failure causes cannot be addressed through conventional, device-performance QA or physics chart checks, but rather require system redesign, administrative changes, or a broader type of commissioning. For example, the annotated fault tree for the high-risk radiation treatment planning (RTP) anatomy failure mode (Fig. 6 and Appendix F) indicates the general type of QM methodology required to mitigate the principal progenitor causes.141

FIG. 6.

(A) A portion of the fault tree for the step RTP anatomy failure involving the failure mode of >3 sigma contouring errors; this failure is in a red-edged box with its RPN (366) at its upper right corner. The black numbers are line numbers from the full FTA (Appendix E) (Ref. 141). (B) The fault tree shown in Fig. 6(A) with the inclusion of quality management.

Failure modes with green diagonal lines through them indicate potential failures in older IMRT planning or delivery systems. The red diagonal lines indicate causes best addressed by more complete training, establishing clear communication modalities (including forms and checklists), and establishing protocols, policies, procedures, and expected outcomes. Those with red arrows reflect causes that could be eliminated by providing appropriate resources for the facility (administrative decisions) and those with green arrows by comprehensive commissioning. Of the 156 causes that may give rise to an undetected RTP anatomy failure, 133 can be at least partially addressed by the measures described above. The remaining 23 causes, mostly user errors, may be addressed by peer review of the contoured structures at the end of the subprocess.

9.B.3.c. Suggested use of FTA.

While much of the analysis and many of the resultant QM measures discussed in Secs. 9.C and 9.D and Appendix G (Ref. 141) could be derived from the FMEA alone, the FTA graphically illustrates the propagation of errors from one process step to another, helping to identify what structural changes to make to the process and the optimal placement of QC and QA interventions.

To achieve maximum efficiency, it is desirable to consolidate proposed QM steps if possible. Searching the fault tree for multiple occurrences of a progenitor cause can provide economy in establishing QC. For example, the step “Dosimetrist/MD preplanning contour review” shown in Fig. 6(B) could be generalized to encompass review of the outputs of both the Initial treatment planning directive and “RTP anatomy contouring.” Positioning a check prior to planning will avoid wasted planning effort based on erroneous imaging datasets, incorrect contours, or unrealistic treatment goals. Another example is the progenitor cause of “defective equipment” which can be addressed by a department-wide preventive maintenance program that covers all clinical hardware and software. If a failure at a single step feeds to different downstream failures, preventing the common failure reduces the likelihood of the several resultant failures. One can also examine the fault tree and process map to look for junctions where QA activities could cover multiple potential failures. A third example is discussed in detail in Sec. 9.C, where QM measures that address the second ranked FM are seen to also mitigate lower ranked though still significant FMs; other examples are shown in Appendix G.141
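One simple way to search for such consolidation opportunities is to tally how often each progenitor cause appears across the FMEA rows and which steps it touches, much as Table XI does for the TG-100 analysis. The sketch below assumes the FMEA has been exported as a list of step/cause records; the rows shown are invented placeholders.

```python
# Minimal sketch: tally progenitor causes across FMEA rows to spot causes
# that one consolidated QM intervention could address. The rows below are
# invented placeholders; a clinic would run this over its own FMEA export.
from collections import Counter

fmea_rows = [
    {"step": "RTP anatomy contouring", "causes": ["lack of standardized procedures", "inattention"]},
    {"step": "Plan preparation",       "causes": ["lack of standardized procedures", "defective equipment"]},
    {"step": "Day N treatment",        "causes": ["defective equipment", "inadequate training"]},
]

cause_counts = Counter()
cause_to_steps = {}
for row in fmea_rows:
    for cause in row["causes"]:
        cause_counts[cause] += 1
        cause_to_steps.setdefault(cause, set()).add(row["step"])

for cause, n in cause_counts.most_common():
    steps = ", ".join(sorted(cause_to_steps[cause]))
    print(f"{cause}: {n} occurrences (steps: {steps})")
# A cause recurring across many steps (e.g., defective equipment) points to one
# consolidated intervention, such as a department-wide preventive maintenance
# program, rather than many local checks.
```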

9.C. Risk-informed design of IMRT QM programs

9.C.1. Discussion of top ranked failure modes

The next step in the overall plan to improve quality and safety is to use the FTA and FMEA risk and process-oriented information to design a QM program for the process being investigated. In this section, significant parts of a QM program for IMRT, developed on the basis of the risk analysis using the TG-100 FMEA and FTA, are described in detail below, following the procedures outlined in Sec. 6 of this report.

The TG addressed the 216 failure modes in descending order of RPN risk score in Appendix G, where each FM-specific subsection describes relevant QM intervention(s) along with a discussion indicating the reasons for the interventions.141 When a QM strategy for a given FM also addresses lower-risk FMs, we refer back to the higher risk FM subsection, making Appendix G less formidable than it may appear at first glance.141

However, to help readers understand the methodology of risk-based QM program design, we present example analyses for eight of the 216 FMs in Subsections 9.C.2–9.C.4 and 9.D.1–9.D.5. The goal of this section and of Appendix G is to illustrate the way a department might design QA and QC tasks to mitigate the various FMs.141 The individual interventions described below and in the Appendix are examples of what a clinical department might do but are not prescriptive recommendations that clinical departments should or must do. Even a department that wishes to adopt the TG-100 QM program should analyze its own technologies and clinical processes to customize the risk-based QM process to its own situation.

9.C.2. Failure mode #1

Rank | RPN | Step # | Process | Step
#1 | 388 | 31 | 4. Other pre-treatment imaging for CTV localization | 6. Images correctly interpreted
FM: Incorrect interpretation of tumor or normal tissue

The highest ranking hazard involves incorrect interpretation of a pretreatment diagnostic imaging (e.g., PET, MR) study for defining the GTV, CTV, or a dose-limiting normal tissue. Progenitor causes listed in the FTA include inadequate training of the study reader or poor interdisciplinary communications. For example, suppose that the radiologist reports that a patient’s PET-FDG study reveals a positive para-aortic lymph node, but the radiation oncologist incorrectly identifies this lymph node with a region of high intensity signal caused by a benign inflammatory condition and therefore places the GTV adjacent to the wrong vertebral body.

Consideration of this failure mode, as with many top ranked failure modes, illustrates a very important fact: it is not possible to establish a complete QM program involving solely medical physicists. Effective QM requires a team approach with members of each specialty participating and making recommendations, particularly for potential failures relating to their expertise. Full application of the method described in this report requires the involvement of everyone who participates in the radiation oncology process. The reader should remember that the task group writing this report had only one physician serving, with the rest of the panel consisting of radiotherapy physicists and an industrial engineer.

This particular failure mode illustrates a number of general characteristics of physician-driven actions and decisions:

  • (i)

    They are often critical drivers of downstream planning activities, which, if performed inaccurately, have a high potential for causing systematic error;

  • (ii)

    physics, technical, and nursing support staff often lack expertise in physician-driven processes—in this case, image interpretation or even access to the imaging studies—and therefore are in no position to intercept or detect such errors; and

  • (iii)

    in the traditional physician-dominated command-and-control model, support staff had little institutional support or standing to challenge physician decisions. As more departments revamp their practices to emphasize a safety culture, this situation is likely to improve.117

If a medical physicist or dosimetrist recognizes an occurrence of this failure mode and the departmental culture allows, he or she should, of course, bring it to the attention of the radiation oncologist. However, the main remedy for this failure mode lies with the physician community.

There are at least three broad avenues for intercepting image interpretation errors.

  • a.

    Peer review. Directly addressing this failure mode requires physician-based checks somewhere in the process. The FTA demonstrates that a QA check of the target volumes defined by the prescribing physician, before significant planning effort takes place, is likely the most efficient way to prevent this failure mode (included in example checklist of Table V). Meaningful peer review of target delineation requires display of imaging studies on which target volume delineation is based, along with the treatment plan and the CT images from simulation. The considerable effort required in order to organize and implement such peer-review is warranted by the extremely high RPN value of this failure mode, since these errors are likely to be virtually undetectable otherwise.

  • b.

    Adequate physician training for interpretation of diagnostic imaging studies. Table XI identifies inadequate training as the third most likely progenitor cause of all failures identified. However, the person interpreting the images may think that she or he has the appropriate knowledge. Assuring training in clinical procedures is often problematic, so an institutional policy requiring experts in a given imaging modality to advise new staff on all reading of such images may be beneficial. Important components that can help reduce the risk of this failure mode include physician education through training courses offered by ASTRO and other professional or educational organizations, and intradepartmental peer review based on evaluation of actual cases.

  • c.

    Improved interdepartmental communication. A potential cause of this failure mode is failure of the radiology report or reading to address the radiation oncologist’s need for quantitative tumor localization (in addition to diagnosis and staging). By communicating these needs to the radiologist, the radiation oncologist can mitigate errors and enhance the value of these imaging procedures to the RT process. In addition, good communication with the radiologist is a low-cost and efficient avenue for the radiation oncologist to become educated in the interpretation of more specialized functional or molecular images.

9.C.3. Failure mode #2

Rank | RPN | Step # | Process | Step
#2 | 366 | 58 | 7. RTP anatomy | Delineate GTV/CTV (MD) and other structures
FM: >3σ contouring error, wrong organ, site, or expansions

The second-ranked failure mode, very large contouring errors (in excess of three times the expected interoperator delineation error), is used here to illustrate how review of the fault tree can guide the creation of QM procedures. Figure 6(A) is a section of the IMRT fault tree (from Appendix E)141 for the step “Delineate GTV/CTV (MD) and other structures,” failures of which can lead to planning or optimization failure. This figure shows only the dominant intermediate and basic events that cause “>3σ contouring errors: wrong organ, wrong site, wrong expansions” events.

As with the first-ranked FM, radiation oncologists are the only personnel with the specialized knowledge needed to define the contouring protocols for most radiotherapy targets and other critical structures, and should take the lead in developing QM for this step. However, as the FTA [Fig. 6(A)] shows, many failure modes contribute to this type of failure, which means there are a number of different approaches to avoiding or mitigating this error.

In Fig. 6(B), the QM steps at the most effective locations in the fault tree of Fig. 6(A) are indicated by lines or arrows (the key is in the upper-left of the figures). These QM steps are described below.

  • 1.
    Peer review. As in the rank 1 failure mode, a peer review of all structure delineations is the most effective method for intercepting such failures. There are several ways to implement such reviews:
    • a.
      At facilities with resident training programs, having the attending physician review and edit contours drawn by the resident physician focuses the attending physician’s attention on how well the contours match the “standard of care” definitions.
    • b.
      The larger radiation oncology community could provide assistance to radiation oncologists in small or solo practices through some form of internet-based peer-review system.
    • c.
      Peer-review systems have been developed for formal protocols used by clinical trial groups [e.g., radiation therapy oncology group (RTOG)], and ASTRO has developed educational programs, such as the special contouring sessions which are organized during ASTRO meetings. Departments should also encourage the training of dosimetrists and physicists involved in treatment planning, both through in-house efforts (senior dosimetrists training juniors, physicians training physics personnel) and participation in workshops given by professional organizations. This expands the pool of individuals equipped to detect and prevent large contouring errors.
    • d.
      The medical physicist and dosimetrist should make sure that radiation oncologists are trained in the correct use of contouring software, and may, in the future, assist in implementing new technologies such as automatic segmentation programs, that may reduce the probabilities for errors. These programs, while in their infancy, already show promise and will likely improve. While the use of such programs may reduce the likelihood of contouring errors, and improve the detection of errant contours, they may also open pathways for new failure modes.
  • However, given the large number of failure modes and potential causes of this failure, relying on a single check may leave the process at high risk unless upstream causes of failure are also addressed. Furthermore, peer review may not always be a cost-effective or sufficiently robust approach when contouring errors are relatively frequent. Therefore, the following QM steps are also recommended.

  • 2.

    Standardized procedures. The lack of uniform procedures and training can dramatically increase interobserver segmentation variability, i.e., delineation error, beyond the level inherent to the imaging modality. For example, an early study of prostate boundary delineation error on CT (Ref. 16) revealed extremely large physician-to-physician variability (10%–20% standard deviations in prostate volume). However, when consensus among observers is reached on fundamental issues such as “are we contouring just the prostate or margins for extracapsular extension?”; “How do we identify prostatic apex and other boundaries not visible on CT?”; etc., and when observers have an opportunity to be corrected on training cases, much smaller (2%–4%) variations are observed.118 The EORTC/RTOG guidelines for contouring electively treated lymph-node CTVs in head and neck cancer119 is an example of a published guideline that can be used as the basis of an institutional consensus-derived segmentation process and associated training. Written departmental guidance on segmenting anatomic and target structures should be developed as part of the site-specific protocol (example checklist of Table IV). Such guidelines can also be used as the basis for empowering physics and dosimetry staff to intercept large contouring errors.

  • 3.

    Elimination of hardware failure/inadequate design/inadequate programming. These potential causes are best detected and compensated for during commissioning of planning or other contouring software or, in the case of a transient hardware/software failure, through periodic QA and preventive maintenance of the system. Adequate commissioning not only assures that the equipment operates as described in the manufacturer’s specifications but also determines how the equipment functions over the range of expected use, particularly outside the normal and intended range. Commissioning must determine the limits of reliable operation and the types of errors that occur with misuse. Commissioning also provides an opportunity to compensate for software design deficiencies through changes in the clinical process. For example, if the planning system’s manual segmentation software is so slow as to challenge physician patience and willingness to review work, using different software (e.g., CT-simulator virtual simulation software) to contour, or assigning certain segmentation tasks to dosimetrists, might be appropriate.

  • 4.

    Prevention of human failures (inattention, incorrect operational assessment, failure to review one’s own work). Minimizing the probability that human failures compromise a patient’s treatment (i.e., random execution errors that occur despite training and well-defined procedures) often requires redundant QC or QA checks that operate on the inputs or outputs of the process by adding parallel activities. For example, the failure to review one’s own work could be ameliorated by independent review of the contours at input through use of an automated anatomic contouring program. In Fig. 6(B), an auto contouring program120 could be used to check the physician’s contour by flagging a large discrepancy as an “and” gate in parallel with the dosimetrist’s contour review or with MD peer review, thus intercepting downstream propagation of large (>3σ) human contouring errors; a minimal sketch of such an automated discrepancy flag follows this list. Trying to prevent human failures from compromising clinical care, however, requires considerable resources, and human creativity often finds new ways to fail that were not anticipated when the QC was put in place. Peer review of contours is optimal, but a knowledgeable dosimetrist or physicist can often check the consistency of contours and flag many types of potential problems (e.g., contour overlap, accuracy of normal structures). Physicists and dosimetrists should be encouraged to ask questions about structure sets that differ from those they have seen in similar cases.

  • 5.

    Avoiding rushed process/inadequate facilities. A rushed process may result from poor organization on the operator’s part or from managerial decisions leading to inadequate staffing or lead-time before treatment. Minimizing the possibility of such failures requires a commitment on the part of management and medical staff to provide adequate time and resources for the facility to achieve its mission. Given such commitment, all personnel have the responsibility to complete their tasks in a timely fashion.
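
The following is a minimal, hypothetical sketch (not a TG-100 prescription) of the kind of automated discrepancy flag described in item 4: a physician-drawn structure mask is compared with an auto-segmented mask using the Dice coefficient, and large disagreements are flagged for human review. The grid, masks, and the 0.7 threshold are illustrative assumptions; in practice the threshold would be set per structure and per site from commissioning data.

    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice similarity coefficient of two boolean masks (1.0 = identical)."""
        intersection = np.logical_and(mask_a, mask_b).sum()
        total = mask_a.sum() + mask_b.sum()
        return 2.0 * intersection / total if total > 0 else 1.0

    def flag_contour_discrepancy(physician_mask, auto_mask, threshold=0.7):
        """Return True if the two delineations disagree enough to warrant human review."""
        return dice_coefficient(physician_mask, auto_mask) < threshold

    # Example: two spheres on a 60 x 60 x 60 voxel grid, one deliberately shifted.
    zz, yy, xx = np.indices((60, 60, 60))

    def sphere(center, radius=15):
        cz, cy, cx = center
        return (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 < radius ** 2

    print(flag_contour_discrepancy(sphere((30, 30, 30)), sphere((30, 30, 42))))  # True
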

Failures to utilize correct Boolean combinations of delineated structures (ranks #29, #46, #59, and #104 with RPN values of 230, 219, 205, and 168) are also handled by the FM rank #2 QM measures. FM #104 is related to software failure, but the others are all related to human errors where the wrong structures are combined, the Boolean combination is ambiguously or incorrectly defined, or the wrong Boolean operator(s) are used. Since these errors can happen in any kind of case, the best way to prevent these failures is to include the process of Boolean combination, including standardized structure creation, in the example checklist for standardized site-specific protocols for workup of a patient prior to IMRT treatment planning (example checklist in Table IV). The QA check to intercept these errors can be incorporated into the dosimetrist/attending physician preplanning contour check [see Fig. 6(B)].

9.C.4. Failure mode #3

Rank: #3 | RPN: 354 | Step #: 209 | Process step: 12. Day N Tx, "Tx delivered"
FM: LINAC hardware failures; wrong dose/MU; MLC leaf motions inaccurate, flatness/symmetry, energy, etc.

Radiological and geometric delivery errors associated with treatment machine failures comprise the third highest risk failure mode (remember that the FMEA assessed risk assuming no specific QA procedures were performed). Since any hardware delivery error is essentially undetectable in this scenario, a very high RPN results, underscoring the importance of periodic QA to reduce the risk of machine hardware failures. The discussion below is a brief summary of the more complete text contained in Appendix G.141

Most current machine QA guidance [e.g., the recommendations of TG-142 (Ref. 1)] is loosely based on the goal that the total cumulative dose-distribution delivery uncertainty should not exceed 5% or 5 mm when all contributing geometric and dosimetric tolerances are summed in quadrature. However, most such recommendations are based on TG member consensus and not formal error propagation analyses or endpoint-specific rationales. Of note, reports (e.g., TG-40) do not always specify whether the stated uncertainty refers to 1 or 2 standard deviations (k = 1 or 2). Given the wide variety of techniques currently applied for patient treatment, a single set of QC tolerances and test frequencies may be neither necessary nor sufficient to protect the patient from “wrong dose” or “wrong location” errors or to evaluate risks appropriately. Two brief examples illustrate the issues:

  • (1)

    Suppose daily online image guided radiation therapy (IGRT) is used for all patients on a given machine. In this situation, the need for traditional (within ±2 mm) localization accuracy of the optical distance indicator (ODI), light field, and cross hairs may be decreased, and the current TG-142 QA recommendations for these parameters may be more stringent than necessary, using QA resources inefficiently.

  • (2)

    On the other hand, the current TG-142 recommendation that MLC leaf positioning error be assessed monthly may be too lax for treatments delivered in a few fractions, because MLC errors could go undetected for the entire treatment course.

TG 100 envisions that, ultimately, QM for Linac failure modes will be designed to minimize the risk that a Linac performance deficit causes a patient’s treatment course to exceed the allowed cumulative positional and dose-delivery tolerances. Here we describe an approach to the determination of test frequencies and tolerances for Linac QA, using dose output (Gy/MU, in reference geometry) as an example, followed by very brief comments on other parameters. The approach is discussed in more detail in Appendix G.141

9.C.4.a. Example method for determination of tolerances and frequencies for QA tests of Linac output.
  • 1.

    Define the QM goal. The overall dose-delivery or positional accuracy for the target must be consistent with the department’s vision of acceptable quality or the accepted standard of care. In the following example the goal is “no patient’s total dose-delivery uncertainty should exceed 5%” (consistent with TG-40 and TG-142).

  • 2.

    Determine the sensitivity of the QM goals to the performance parameter. In the case of dose/MU sensitivity, some dose errors are linearly related to the error in a given parameter, such as errors due to miscalibration of the dose/MU control. Such a relationship is said to have a sensitivity of 1. However, other errors require nontrivial sensitivity analyses (e.g., output constancy and linearity as functions of dose rate, gantry angle, and MU/segment). These parameters can affect total dose delivery accuracy but their dosimetric impact depends on the distribution of gantry angles, the functional relationship between output and gantry angle, MU/segment, and dose rates characteristic of typical plans.85,90

  • 3.

    Determine the maximum error in the Linac performance endpoint for which the machine remains operable. Linac interlocks, which prohibit operation when parameters go out of tolerance, are important. Recent failures of symmetry interlocks, however, highlight the difficulty of relying on such systems. Most accelerator interlocks can be rendered useless if someone, such as a service engineer or physicist, adjusts the baseline used to determine the operational limits. In this example we will consider two situations: a typical modern scenario where machine interlocks are triggered by dose output errors exceeding 5%, and an extreme situation where transient and persistent dose output errors up to 40% are possible without triggering machine interlocks. Although the interlocks of some accelerator models do not allow such a large error, a 40% error has been reported following service when a potentiometer was misadjusted and the interlocks were reset to new values. Therefore, this large but not impossible value is chosen for demonstration purposes.

  • 4.
    Determine the monitoring frequency needed to achieve the uncertainty goal. In a treatment of N fractions, a patient can receive up to n fractions with dose output error per fraction of q%, without exceeding an A% dose accuracy limit if all the other treatments are perfect and
    n ≤ AN/q. (1)
    Figure 7 plots n versus q, the number of allowed errant fractions versus the per-fraction output error, for 35, 10, and 5 fraction courses (N = 35, 10, and 5) and for A = 5% and 1.6%.

    The choice of A = 5% is a loose tolerance, since dose output is not the only source of error in a treatment; the tighter tolerance of 1.6% is a more realistic goal that acknowledges other sources of dosimetric uncertainty, as discussed in more detail in Appendix G.141 Radiobiological effects were not considered in this example. Within this simplified example, if the shortest course treated on a particular machine is 35 fractions, output checks every four days suffice to meet the goal of dose accuracy of at worst 5% if weak machine interlocks permit up to a 40% output error. Shorter treatment courses demand more frequent output measurements; for a typical palliative course of ten treatments, daily checks are needed and, strictly speaking, for fewer than eight treatments even daily checks are insufficient to assure that no patient experiences a dose output error exceeding 5%. Of course, if machine interlocks prevent treatments with dose output errors exceeding 5% (and if one believes these interlocks are infallible), output checks would be unnecessary. However, if the output dose error must be smaller (e.g., 1.6% to allow for other sources of treatment uncertainty), our simplified model predicts that daily or more frequent output checks are needed for short treatment courses (e.g., fewer than five fractions) even if machine interlocks prevent delivery of treatments with output errors exceeding 5%. A minimal sketch of this arithmetic follows the Fig. 7 caption below. Appendix G provides further discussion of this point.141

  • 5.

    Establish action levels and thresholds. The above analysis is a simplified model of how one could protect the average patient against “outliers” that embody worst-case scenarios, regardless of how unlikely these scenarios might be. However, QM should also seek to minimize the overall mean uncertainty of dose delivery by selecting an action level (e.g., the error above which the parameter is readjusted) based on a probability distribution of machine variability. For Linac parameters that exhibit significant random variability below the fixed threshold levels, process control charts90 and other statistical techniques121 can be used to distinguish underlying trends from day-to-day statistical fluctuations; a minimal sketch of such a chart follows.
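
As an illustration only (the baseline data, its length, and the limits are assumptions, not TG-100 recommendations), the following sketch builds a simple individuals ("X") control chart for daily output-constancy readings, estimating control limits from the average moving range so that genuine shifts or trends can be distinguished from routine day-to-day fluctuation.

    import numpy as np

    def control_limits(baseline_readings):
        """Center line and +/-3-sigma limits estimated from a baseline period,
        using the average moving range (the usual individuals-chart estimate)."""
        x = np.asarray(baseline_readings, dtype=float)
        center = x.mean()
        avg_moving_range = np.abs(np.diff(x)).mean()
        sigma_hat = avg_moving_range / 1.128          # d2 constant for subgroups of 2
        return center, center - 3 * sigma_hat, center + 3 * sigma_hat

    def out_of_control(reading, limits):
        center, lower, upper = limits
        return reading < lower or reading > upper

    # Example with made-up daily output ratios (measured/expected):
    baseline = [1.002, 0.999, 1.001, 0.998, 1.000, 1.003, 0.999, 1.001]
    limits = control_limits(baseline)
    print(limits, out_of_control(1.015, limits))      # the 1.5% reading is flagged
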

FIG. 7. The number of fractions that can be delivered with a given error, plotted as a function of the percentage error in dose per erroneous fraction, for total allowed dose errors of 5% and 1.6% in treatment courses of 35, 10, and 5 fractions. The purple and dark blue vertical lines indicate the two interlocks discussed in the text: a weak interlock through which output errors up to 40% can be delivered and a modern interlock that cuts off delivery if the output error exceeds 5%. The red horizontal line is at 2 fractions: for situations that fall below this line, the simple model calls for daily or even more frequent output checks.
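
To make the arithmetic of Eq. (1) and Fig. 7 concrete, the following is a minimal sketch (a simplified model under the assumptions stated above, not a recommended clinical tool) that computes, for the fraction counts, interlock limits, and accuracy goals used in the text, how many errant fractions are tolerable and therefore how often output should be checked.

    import math

    def allowed_bad_fractions(N, A, q):
        """Largest whole number of q%-errant fractions keeping the course within A% [Eq. (1)]."""
        return math.floor(A * N / q)

    for N in (35, 10, 5):                           # course lengths shown in Fig. 7
        for q, interlock in ((40.0, "weak"), (5.0, "modern")):
            for A in (5.0, 1.6):                    # loose and tighter accuracy goals
                n = allowed_bad_fractions(N, A, q)
                if n >= N:
                    advice = "output checks not forced by this goal"
                elif n >= 1:
                    advice = f"check output at least every {n} treatment day(s)"
                else:
                    advice = "even daily checks cannot guarantee the goal"
                print(f"N={N:2d}, {interlock} interlock (q up to {q:.0f}%), A={A}%: {advice}")

Under these assumptions the sketch reproduces the numbers quoted above: checks every four days for a 35-fraction course with a weak interlock and a 5% goal, daily checks for a ten-fraction course, and daily or more frequent checks for very short courses when the tighter 1.6% goal applies.
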

9.C.4.b. Other dosimetric and geometric performance endpoints.
  • 1.

    Energy and beam flatness/symmetry. Selection of a sampling frequency requires both sensitivity analysis and an assessment of how much deviation from flatness or energy an operable Linac could exhibit. For many machines, it is unlikely that energy or symmetry errors >10% could occur without concomitant failure of machine output or a sustained effort to retune the machine to operate at the wrong energy. A typical interlock limit for symmetry is 4%. Because symmetry is the difference between the dose on one side of the field and that on the opposite side, a symmetry value of 4% implies that the dose on either side is about 2% off from baseline, i.e., that symmetry has a sensitivity of 0.5 in dose. Monthly monitoring often means that checks fall within each calendar month, not that checks are performed no more than 30 days apart. This permits a patient who starts treatment soon after one monthly check to receive a full treatment of, for example, 35 fractions with the unit operating with the 4% symmetry error; however, the dose would only be in error by 2%. One potentially serious failure mode is a shift in beam energy that is compensated for by a well-intended (but misguided) retuning of the Linac so that it operates without triggering interlocks. Such actions, which have happened at several facilities, can lead to significant dose errors. A similar failure can occur if the steering is accidentally changed, creating a badly asymmetric beam, and the ratio of the currents from the two sides of the dose monitor chamber is then set as a new baseline. Thus, any intervention involving beam retuning or steering should trigger independent checks of beam characteristics before the Linac is returned to clinical use.

    Simple, nonspecific tests can be very useful for checking such failure modes; verifying the constancy of a large-field, shallow-depth beam profile is a highly sensitive check of all beam characteristics that depend on beam energy, including depth dose.122

  • 2.

    MLC and jaw calibration and operation. Geometric miss of the target or overexposure of a normal tissue due to MLC problems is potentially a more significant clinical error than a shift in machine output. The common practice of relying on time-consuming measurements for patient-pattern specific MLC verification does not appropriately mitigate all risks of dose delivery errors due to machine performance (see details in Appendix G).141 If such measurements are made, they should be combined with periodic MLC QA tests designed to span the range of clinical practice comprehensively and performed at a rationally designed frequency. These difficult issues are further discussed in connection with FM Rank #153 in Appendix G.141

    Establishing a risk-based, generic MLC QA program requires knowledge of QM goals in relation to the wide variety of possible MLC failure modes, such as random positioning errors (leaf-specific), systematic shifts (for the entire leaf carriage), calibration errors, component wear, prescribed intensity variations that drive the MLC to or beyond its mechanical limits or capabilities, and problems compensating for gravity or gantry angle effects. Though random leaf errors have small effects, systematic leaf-gap calibration or carriage-positioning errors (affecting an entire leaf bank) can influence delivery accuracy significantly.123–125 It has been shown that 1-mm systematic errors can give rise to dose errors of 5% or more for both dMLC (Refs. 126 and 127) and static MLC (sMLC).128 Using the methodology that led to Fig. 7, it is possible to determine how often one would perform tests to maintain the 1-mm MLC positioning tolerance; a minimal sketch of this determination follows this list. Note that the details of appropriate MLC QA tests vary by manufacturer and system design.123–127

  • 3.

    Other parameters. Other machine operating parameters, e.g., radiation vs mechanical isocenter coincidence and excursion, can be analyzed in a similar fashion.
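
Purely as an illustration of the MLC point above (the 5% per millimeter figure is the approximate dMLC sensitivity cited in the text; the linear scaling, the accuracy goals, and the reuse of Eq. (1) are simplifying assumptions), the following sketch estimates how many fractions could carry a 1-mm systematic leaf-gap error before a course-dose accuracy goal is exceeded, and hence how often MLC position checks might be scheduled.

    import math

    DOSE_ERROR_PER_MM = 5.0   # approximate % dose error per mm of systematic gap error (dMLC)

    def fractions_tolerating_gap_error(N, A, gap_error_mm=1.0):
        """Eq. (1) reused: fractions that may carry the gap error before the course exceeds A%."""
        q = DOSE_ERROR_PER_MM * abs(gap_error_mm)   # per-fraction dose error, in percent
        return math.floor(A * N / q)

    for N in (35, 10, 5):                            # course lengths from Fig. 7
        for A in (5.0, 1.6):                         # loose and tighter course-dose goals
            n = fractions_tolerating_gap_error(N, A)
            interval = (f"check leaf positions at least every {n} treatment day(s)"
                        if n >= 1 else "even daily checks cannot guarantee this goal")
            print(f"N={N:2d}, A={A}%: {interval}")
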

9.D. Additional observations from the TG-100 analysis

  • Appendix G contains a more detailed discussion on this topic.141

  • Table XII provides an illustrative example of the outcome of the TG-100 process with respect to frequencies for various QA tests compatible with the IMRT analysis in this report.

  • It is crucial to develop the capability to actively monitor treatment delivery in addition to performing periodic QA. Automated checks of dosimetry, MLC motion, patient setup and motion, and other issues during each fraction would help maintain accurate delivery, though this capability is lacking in most equipment (an area that deserves significant developmental effort). Several academic institutions have developed in-house monitoring software, so such capabilities are technologically very feasible and should be made a high-priority item for commercial development by Linac and treatment management system vendors.105–110

  • Many delivery errors can be efficiently detected if therapists carefully monitor treatment while it is in progress. The ACR (Ref. 42) recommends staffing of two RTTs per treatment unit under “a standard schedule” and states that additional RTTs may be needed for longer hours or a heavy patient load. The newly published ASTRO document78 states (p. 14) that “It is recommended that a minimum of two qualified individuals be present for any routine external beam patient treatment.” These recommendations should be taken seriously by administrators. Situations where monitoring by an alert therapist can prevent machine performance from jeopardizing patient safety include verifying MLC motion during IMRT treatment (either between sMLC segments or dynamically during dMLC deliveries), acting promptly on clearly anomalous machine behavior, and notifying the physicist about all peculiar or unusual machine behavior. It is incumbent upon the physicist to take such reports seriously, respond when called, and investigate the reported problem. Because attention to the treatment is a major function of the therapists, the console area, workflow, and department policies should be designed to minimize distractions and other pathways for attention lapses.

TABLE XII.

Accelerator QA checks suggested by the TG-100 analysis for the example IMRT process. This table is an illustrative example of the potential outcomes of implementing the TG-100 analysis for the example IMRT process. The TG-100 risk-based approach implies that once commissioning measurements are done and independently verified, certain tests that are now performed annually become unnecessary. That independent verification of the commissioning should likely include performing the measurements at some time after the initial set, as an evaluation of stability as well as a reconsideration of procedures.

Treatment unit parameter | Frequency of testing per TG 100 | Frequency of testing per TG 142 | Example test

Unit output
Dosimetric constancy | At least every 3 or 4 days (for normal fractionation); daily for few-fraction treatments | Daily | Detector measurement in phantom at depth
Dose linearity with respect to number of monitor units | Commissioning or after major repairs (see part 1 on commissioning), with at least one, preferably independent, repeat verification | Annually | Detector measurement in phantom at depth, IMRT vs normal delivery
Dosimetric constancy as a function of dose rate | Commissioning or after major repairs, with at least one, preferably independent, repeat verification | Monthly | Detector measurement in phantom at depth to assess dosimetric constancy at all dose rates used
Dosimetric constancy with respect to gantry angle | Commissioning; check after repair of the bending magnet or beam alignment; at least one, preferably independent, repeat verification | Annually | Periodically perform the dosimetric consistency test with lateral beams or under the table
Stabilization for small monitor-unit settings | Commissioning or after major repairs, with at least one, preferably independent, repeat verification; also limit use to the range of stable no. of MU; check after major beam tuning | | Detector measurement in phantom at depth

Beam characterization
Flatness and symmetry (beam profile) | Commissioning, then performed together with the output constancy check using a measurement device with off-axis detectors | Monthly | 1-D or 2-D detector measurement in phantom at depth; for checks, one off-axis point for each axis
Beam energy | Commissioning, then performed with the output constancy check using flatness | Annually (monthly for electrons) | 1-D or 2-D detector measurement in phantom at depth; for checks, one off-axis point for each axis

Collimation
Positioning and calibration of MLC | Daily operational checks; at least weekly picket fence or similar IMRT-related tests | Weekly picket fence; monthly non-IMRT patterns and IMRT leaf position accuracy | Preferably image-based checks; use light field with template if imaging is unavailable
Consistency of MLC with gantry orientation | Commissioning, and QA check with frequency dependent on sensitivity determined at commissioning | Monthly | Preferably image-based checks or light field (if accuracy validated)
Speed of MLC movement (if relevant to IMRT delivery method) | Commissioning, then routine confirmation of speed and delivery accuracy; frequency required by risk-based analysis not yet clear | Monthly | Preferably image-based checks or light field (if accuracy validated)
Accuracy of the secondary collimators | Commissioning, then observation with the output checks | Daily | Shadow of jaws compared with a template using the light field for large and small fields

Beam positioning
Accuracy of gantry angle | For simple isocentric treatments, monthly is probably adequate; for off-axis or VMAT-type IMRT deliveries, weekly or daily may be necessary | Monthly | Light field consistency with marks on wall and floor, or bubble level
Accuracy of collimator angle | For complex IMRT and VMAT deliveries, weekly or even daily checks are important | | Consistency with marks on floor, or bubble level
Accuracy/consistency of the couch position | Insufficient information to specify; depends on type of setup used, on history and usage for a particular facility, and on whether IGRT is being used | Annual couch rotation; couch translation not addressed | Consistency of readouts with the couch positioned by placing the cross hairs (gantry pointing down) sequentially on two marks on the table and with two settings on the ODI
Laser accuracy | Daily if laser setup is used; study clinical use to determine frequency if all setup is done with IGRT | Daily | Consistency with marks on wall and floor

The following subsections discuss five additional highly ranked failure modes.

9.D.1. Failure mode #11

Rank: #11 | RPN: 283 | Step #: 40 | Process step: 6. Initial Tx plan directive, "Specify images for target and structure delineation"
FM: Specify incorrect image set (viz. wrong phase of 4D CT, wrong MR, etc.)

Specifying image sets to be used for target delineation, particularly when they are obtained outside a radiation oncology department, is a serious potential source of difficult-to-detect (high D) errors in the planning process. In many centers, this process consists of the attending physician or resident reviewing the patient’s imaging studies using the radiology PACS system. The desired image set is then identified and its study number is passed to dosimetrists who contact the appropriate radiologic technologist and request them to export the desired DICOM dataset into the RTP file server. There are many potential sources of error, including propagation of an incorrect ID number, miscommunication (between therapist, dosimetrist, physician, and radiology technologist), and the possibility that the radiology technologist will export an incorrect image set. Because of the growing number and variety of MR and PET imaging studies that are used for planning, the dosimetrist and physicist cannot, on their own, verify the correctness of the secondary image sets imported into the planning system. Only if the physician notices that an incorrect dataset has been selected will the error be detected.

Ways to change the process to decrease the likelihood of this failure mode include:

  • 1.

    Obtain a modern PACS system which allows the physician to directly download the desired studies when they are viewed. This part of the solution requires a high-level managerial decision, but, if implemented, eliminates opportunities for miscommunication.

  • 2.

    Expand the site-specific protocol for workup of a patient prior to IMRT planning [example checklist of Table IV] to include the technique factors (e.g., MR pulse sequence, contrast, patient position, volume) to be used for each major clinical site and presentation. This will provide a basis for verifying the image datasets selected for planning.

  • 3.

    Develop, and require the physician to complete, an online form that not only identifies PACS study ID, but also the date of the procedure and imaging technique desired.

  • 4.

    Require the dosimetrist to verify that the imported secondary dataset is consistent with items 2 and 3; a minimal sketch of such a check follows this list.
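
The following is a minimal sketch of the kind of consistency check item 4 describes: a few header fields of the imported dataset are read with pydicom and compared against the expectations recorded on the physician's form and the site-specific protocol. The specific fields, values, and file name are hypothetical; a clinical implementation would check against the institution's own protocol items.

    import pydicom

    # Hypothetical expectations taken from the completed request form / site protocol:
    expected = {
        "Modality": "MR",
        "StudyDate": "20240312",          # date entered on the physician's form
        "PatientPosition": "HFS",         # head-first supine, per site protocol
    }

    def verify_secondary_dataset(dicom_path, expected):
        """Return a list of mismatches between the imported dataset and the expectations."""
        ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
        mismatches = []
        for keyword, wanted in expected.items():
            actual = getattr(ds, keyword, None)
            if actual != wanted:
                mismatches.append(f"{keyword}: expected {wanted!r}, found {actual!r}")
        return mismatches

    # Usage (hypothetical file): raise the issue with the physician if anything is returned.
    # problems = verify_secondary_dataset("imported_mr_slice.dcm", expected)
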

As illustrated by the FTA, a QC check placed prior to importing DICOM images from the PACS server (where full technique information is available) to the RTP (where full DICOM header information is not reviewable in the model for the facility considered) reduces the probability of error.

A review of the FTA indicates that many of the initial planning directive error pathways can be managed by the same strategy: a dosimetrist performs QC checks of the inputs into the planning process by comparison against the treatment protocol, while the more comprehensive downstream physics checks of the plan use the same information for the final QA check. For example, to make an error in the secondary-to-primary image registration more detectable (FMEA step 43, rank 23), the treatment protocol documents should specify the standard registration process to be used for the given clinical site (which image set is primary, type of registration used, e.g., manual vs automated, which landmarks to align, etc.). This places the onus on the attending physician to request and document variances from the standard procedure where medically indicated. It should be noted that primary image set selection errors are not limited to cases where additional secondary imaging studies must be imported. It is not uncommon to have multiple CT simulation datasets (e.g., repeat exams for changing medical condition, adaptive replanning, or correction of a simulation error). Other errors that can be intercepted with this strategy include incorrect specification of goals and constraints (FMEA step 22, rank 140) and treatment planning approach/parameters (FMEA step 45, rank 84). The treatment protocol can be implemented as a patient-specific form to be inserted in the patient’s chart. Default or standard choices (e.g., DVH planning or evaluation constraints) would be printed in the form, so if the physician wants to modify these values, the default number is crossed out and the physician-specified number written by hand. This eliminates the transcription errors characteristic of a form with simple blanks. On the other hand, it introduces the potential failure of using the default in error because the physician neglected to make a change.91

9.D.2. Failure mode #14

Rank: #14 | RPN: 278 | Step #: 44 | Process step: 6. Initial plan directive, "Motion and uncertainty management (includes PTV and PRV)"
FM: Specify wrong motion-compensated Tx protocol, specified margin size inconsistent with motion management technique, specified duty cycle and breathing phase inconsistent with margin for gating

For institutions that take a sophisticated approach to respiratory motion management, a detailed and comprehensive policy on 4D motion management is an example of a clinical procedure that should be well documented in the appropriate lung or upper abdominal tumor site-specific protocol (example checklist of Table IV). This protocol should include indications for 4D vs 3D planning CT, indications for using specific respiration sensors or surrogate breathing motion markers, criteria for gated vs free breathing treatment, and which images (MIPs, slow CT, or the breathing-phase CT closest to average) are to be used for internal target volume (ITV) creation, dose calculation, and generation of reference digitally reconstructed radiographs (DRR).129 Without specific policies, there is no way to assure that correct methods are being used. Numerous issues are directly involved in registering 4D images, e.g., which landmarks are to be used by therapists in performing online registration of daily gated radiographs and reference DRRs, the accuracy that is to be expected or demanded, and what to do when expectations are not achievable. All of these should be addressed in this protocol. An explicit check of the registration used for planning, at the end of the RTP anatomy step, is an important QA step and is part of the example checklist of Table V, “Preparation of patient data set for treatment planning.” Review of the FTA and FMEA potential failure rank #14, step 44, “Specify wrong motion-compensated Tx protocol,” reveals four different error scenarios for failures involving motion compensation:

  • 1.

    The physician specifies an incorrect approach to uncertainty management (e.g., failing to order intrafractional imaging for a frameless SRS treatment).

  • 2.

    The motion management protocol is correctly selected, but planning specifications (e.g., PTV margin) are inconsistent with the protocol.

  • 3.

    The physician correctly specifies the motion management technique and consistently specifies other planning/treatment directions, but downstream physics, dosimetry, or therapist actions are not consistent with the policies underlying the written directive (e.g., the wrong CT image set is used to generate the reference DRRs).

  • 4.

    Motion management, all associated planning directives, and all subsequent technical actions are consistent with procedures but the motion management technique is inadequate or overly conservative compared to the actual geometric uncertainty characteristics of the patient or relevant population of patients.

Intercepting errors arising from scenarios 1–3 can be accomplished by written procedures as part of those described above and in example checklists of Tables IV and V that clearly identify indications for gated treatment including immobilization, setup, intrafraction motion monitoring, and planning procedures. The treatment protocol allows the dosimetrist to perform QC on the inputs to treatment planning and subsequent steps. This check should detect variances from established policies and provide a mechanism for negotiating either compliance or a documented variance with the attending physician. The Task Group also recommends incorporating review of motion and uncertainty management techniques into the physicist review of treatment plans.

Scenario (4) arises not from a random procedural error or mistake but from systematic errors due to inadequate commissioning of the motion management process. Reducing the incidence of motion management failures is discussed in relation to step 205 (rank 8) in Appendix G.141

9.D.3. Failure mode #24

Rank: #24 | RPN: 240 | Step #: 189 | Process step: 11. Day 1 Tx, "Set treatment parameters"
FM: Wrong Tx accessories (missing/incorrect bolus, blocks)

Rank 24 is the first FM to appear for the initial treatment session (Day-1 treatment). Day 1 includes the first day of planned changes within a single course of treatment: examples include cone-downs, field changes done in response to peer review or patient changes, and the introduction of a new treatment site concurrent with an on-going treatment. As the adaptive radiation therapy paradigm becomes more prevalent, the number of Day-1 sessions per patient is likely to increase. As with several earlier FMs, it makes sense to look at the whole Day 1 part of the process tree (Steps 174–189 in the FMEA spreadsheet) and the associated fault tree together. Note that failures that can occur on other treatment days are considered in the “Day N Treatment” FMs.

The major concern for the Day-1 treatment is establishing or verifying the treatment parameters that will be duplicated through the entire treatment course, since errors that are not detected at Day 1 may become systematic errors that will affect many or all of the treatments. There are many issues that must be handled within the QM program associated with Day-1 treatment, including the following. Many of these issues are addressed in the Day-1 treatment checklist described in example checklist of Table VII.

  • Parameters (e.g., couch positions, shifts from patient reference marks) and treatment accessories (e.g., bolus) may be defined or added to the plan during the first day’s treatment in a way that circumvents the standard flow and checks of the treatment preparation process. The Day-1 QM must verify the correctness of these additions, and assure that they are correctly continued through the treatment course.

  • A QM check of the entire treatment delivery script before treatment is crucial. The “when, how, and who” for the performance of these checks depends on the details of the process used for preparation, plan download, and Day-1 setup and verification. In all cases, though, the QM system must ensure that all parts of the plan are validated before treatment.

  • It is essential to confirm that the correct patient and the correct treatment plan have been selected, endpoints addressed by the time-out process required by the Joint Commission. Although there is no specific guidance that a time out should be required for each treatment session, it is a good idea. During the time out, the patient’s identity and treatment site, especially laterality, are confirmed, and any changes in patient condition that have bearing on the treatment are noted and conveyed to the physician. It is also confirmed that the correct files have been opened in the delivery system, that the correct instructions are in the paper or electronic record, and that prescribed changes in treatment have been addressed (with proper signatures in place). Especially at Day 1, radiation oncologist participation in the time-out process is a guard against deviations from physician intent.

  • The entire Day-1 treatment process should be structured by written departmental procedures that clearly identify the parameters to be validated before treatment, and should be part of the training for all new physicians, physicists, dosimetrists, and therapists. All staff involved should understand the patient’s treatment plan and associated treatment parameter tolerances. Individuals associated with each patient’s treatment should be clearly identified.

  • The initial imaging session and resultant marking of the patient or accessories may set the standard for the patient’s position for the whole treatment course. Each facility should develop a policy describing the process for Day-1 imaging, setup, and verification of patient position and treatment isocenter(s). Patient-positioning errors need to be corrected via the appropriate image guidance strategy. For some disease sites, this refers to traditional weekly portal and orthogonal field imaging. With proper training and protocols, daily positioning corrections implemented by therapists based on image guidance with off-line review by the radiation oncologist can improve setup accuracy. However, if an incorrect isocenter placement remains uncorrected over much of the treatment, or if anatomy is misidentified or misinterpreted during the Day-1 procedure, a high severity treatment failure may result, indicating the need for QM procedures to mitigate this risk. One pragmatic approach defines classes of treatments with set tolerances; for example, a hypothetical protocol may allow prostate patients, as a class, up to 2-mm discrepancies between the DRR and beam image, while lung patients may be allowed up to 5 mm. Defining such classes of patients beforehand removes ambiguity and possible errors at the time of imaging; a minimal sketch of such a tolerance-class check follows this list.

  • Depending on departmental policy, the monitor-unit setting per segment (or the equivalent for other forms of IMRT) is validated prior to the first treatment by a second, independent calculation program or by measurements. Assuming that the validation methodology has been thoroughly commissioned, the medical physicist, at or before Day 1, need only confirm that the independent check fell within department-specified tolerances.

  • For both Day 1 and Day N treatments, the human factor of “inattention” was frequently identified as a cause of failure. Treatment sessions can become repetitive exercises for therapists and it is difficult for any individual to remain alert at all times. Training, policies, and managerial actions (sufficient staffing to allow for short breaks, rotating therapists between machines to keep them fresh) are partial solutions, but an additional layer of technical protection would be a much stronger and more effective approach. The TG recommends that manufacturers develop techniques to address such verification, such as a method of comparing records of the MLC positions in real-time through the treatment to the pattern in the treatment plan.
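
As a simple illustration of the tolerance-class idea mentioned above (the sites and millimeter values are the hypothetical examples from the text, not recommendations), a departmental policy can be encoded as a small lookup table so that the decision of whether a setup discrepancy is acceptable is made the same way every time:

    # Hypothetical class tolerances for DRR-to-image discrepancies, in millimeters.
    SETUP_TOLERANCE_MM = {
        "prostate": 2.0,
        "lung": 5.0,
    }

    def setup_within_tolerance(site, shift_mm):
        """True if each measured shift component is within the site's class tolerance."""
        tol = SETUP_TOLERANCE_MM[site]      # a missing site forces an explicit decision
        return all(abs(component) <= tol for component in shift_mm)

    print(setup_within_tolerance("prostate", (1.2, -0.8, 1.9)))   # True
    print(setup_within_tolerance("lung", (4.0, 6.1, 0.5)))        # False, action needed
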

Many of these issues are addressed in the example checklist of Table VII, which suggests QM checks for an initial treatment day. See also the QM suggested for Day-N treatment failure modes, since those types of failures can also happen on the first day of treatment.

9.D.4. Failure mode #32

Rank: #32 | RPN: 229 | Step #: 207 | Process step: 12. Day N Tx, "Tx machine and peripheral hardware setup for Tx"
FM: Changed prescription dose (and MU) occurring after initial Tx and not entered into chart and/or treatment unit computer

This FM is illustrative of issues that arise at the boundary of human and technological systems. In addition to this FM, there are a number of other FMs for Day-N Tx that have high RPNs, including step 208 (rank 34), step 202 (rank 40), step 206 (rank 42), step 204 (rank 63), and step 203 (rank 152). The higher-ranking failures involve incorrect data being used for treatment: changes made but entered incorrectly, or not entered at all, into the delivery system computer, or changes made inappropriately. The lower-ranked failures involve software or hardware failures. Many of the human failures have the same causes: lack of standardized procedures, inattention, inadequate training, and lack of communication.

Prevention of these types of problems requires QM using both technological and human factors methods. The following list gives some recommended QM measures:

  • Independent checks of all delivery-system treatment-plan parameters against those originally approved for use. A QA check of all this information is critical, and some kind of check has long been part of the weekly physics check, though weekly checks are clearly inadequate for some hypofractionated treatments. Modern treatments contain very many treatment parameters, so developing robust automated checks of this information is crucial.

  • Methods to flag changes in delivery parameters and prevent further treatment until review and approval are performed. Such a feature exists in at least one modern treatment management system with regard to major delivery parameters: it should become a universal feature.

  • Procedures to assure the consistency of daily treatment with the approved prescription(s) and plan(s) are crucial. Typically, this has been an important part of the weekly chart review (see example checklist Table VIII) by the medical physicist and/or dosimetrist and separate weekly review by a therapist, both of which should include checking the consistency of the daily treatment record(s) with the most current physician prescription(s). This check should also verify that all treatments have been correctly recorded in the official record, whether paper, electronic, or a combination.

  • Methods to draw attention to unplanned changes, as well as expected changes that do not show up. Detecting unfulfilled change orders is often difficult if the orders are verbal or poorly documented. Especially in a combined electronic and paper environment, such issues can be relatively common if there is no established and uniform procedure. For any system, electronic, paper, or combination thereof, the process for making changes and triggering the appropriate QA checks of the change must be rigidly designed and followed. Though the common QA practice (also tied to billing and the recommendations of TG-40) of weekly paper and/or electronic chart checks helps detect some of these failures, this check is not adequate for many clinical scenarios.

  • Dissemination of warnings about nonstandard behavior of the treatment system or involving the patient to appropriate staff for timely investigation; an anomalous condition should not be allowed to persist long enough to adversely affect any patient.

  • Policies that establish electronic and procedural “permissions,” so that change approvals are performed by appropriate staff.

Lower-ranked failures include the treatment unit computer not loading the patient’s file correctly (after having done so on Day 1) or file corruption. Assuming that the file corruption does not bring the treatment to a halt or trigger software messages, such failures can be extremely difficult to detect. Therapist monitoring might detect incorrect MLC movement, but in many cases the problem would not be apparent. Much of the ability to detect or prevent such problems relies on good software design. For example, the use of file checksums to confirm the validity of files can increase the detectability of such problems and should be encouraged; a minimal sketch follows. Other developments involving automatic monitoring (see “Real time QA during treatment delivery” in rank 3) would also address this FM.
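
For illustration only (the manifest structure and file paths are assumptions; vendors would implement such a check inside the treatment management system), a checksum-based integrity check can be as simple as recording a digest for each approved treatment file and re-verifying it before each fraction:

    import hashlib
    from pathlib import Path

    def file_checksum(path):
        """SHA-256 hex digest of a file's contents."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_files(manifest):
        """Compare current checksums against those recorded at plan approval.
        manifest maps file path -> checksum recorded at approval time."""
        return [p for p, recorded in manifest.items()
                if not Path(p).exists() or file_checksum(p) != recorded]

    # Usage (hypothetical paths): build the manifest at plan approval, call
    # verify_files() before each fraction, and withhold treatment if it returns anything.
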

9.D.5. Failure mode #153

Rank: #153 | RPN: 130 | Step #: 203 | Process step: 12. Day N Tx, "Tx machine and peripheral hardware setup for Tx"
FM: MLC files (leaf motion) corrupted

Our final example addresses a much lower-ranked failure mode. This issue, though ranked 153rd, is a high-severity failure mode that involves corruption of an MLC file used to control the MLC leaf motions required to deliver an IMRT plan. Such a corruption might present as an unreadable file, which would likely prevent a failure at this step, or as an empty file that might be used for treatment. The most hazardous situation involves plan revision on Day N, as in a recently reported case40 in which the MLC trajectories were lost during transfer of a revised plan. Since the MLC trajectory information was missing, open fields were treated with the IMRT MU, resulting in severe overdoses to the patient. There are also anecdotal (but undocumented) reports of incorrect leaf motions in previously treated but unchanged plans. For this failure to occur, the file must be accepted by the machine as valid but contain an incorrect set of leaf sequences. While the likelihood of occurrence may be low, such errors clearly can occur, even though their true frequency is unknown. Because of its high severity ranking, the QM program should address this FM. Testing all fluence patterns to confirm their correct behavior and resulting intensity distributions before any clinical use is crucial. While some institutions have substituted calculational checks for a physical plan delivery, such checks must be accompanied by strict QA of the generic performance of the MLC. Even with these, it is incumbent on the institution to verify that this combination of checks can really confirm the correctness of a delivery based on a given MLC description. In the absence of such verification methods, physical delivery of any new fluence pattern, with confirmation of its correctness when delivered on the machine, is recommended by this task group. We also note that pretreatment verification does not address potential Day-N delivery problems. Checking the leaf pattern daily could provide protection from the most serious effects on the patient, but for a facility with even a third of its patients under IMRT treatment at any given time, this check would consume a great deal of time and be clinically infeasible. The TG urges vendors to provide automated tools to avoid this daily problem (e.g., use of checksums or other automated checks, or automated comparison of EPID dose or dose back-calculated from MLC log files with corresponding calculations from the TPS or with pretreatment data) that could demonstrate that the MLC descriptions are identical and unchanged from day to day.130–135 A minimal sketch of such a comparison follows. Similar automated checks for all files related to a patient’s treatment are also necessary. At present, there is no widely available procedure to prevent this potential failure.
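
The following is a minimal sketch of the kind of automated day-to-day comparison suggested above, here comparing planned leaf positions against positions recovered from a delivery log (or back-calculated data); the array layout, units, and the 1-mm tolerance are illustrative assumptions rather than vendor specifications.

    import numpy as np

    def compare_leaf_positions(planned_cm, delivered_cm, tolerance_cm=0.1):
        """Return (control_point, leaf_index, deviation_cm) for out-of-tolerance leaves.
        Both inputs are arrays of shape (control_points, leaves)."""
        deviations = np.abs(np.asarray(delivered_cm) - np.asarray(planned_cm))
        flagged = np.argwhere(deviations > tolerance_cm)
        return [(int(cp), int(leaf), float(deviations[cp, leaf])) for cp, leaf in flagged]

    # Example with made-up data: one leaf at one control point is 3 mm off.
    planned = np.zeros((4, 60))
    delivered = planned.copy()
    delivered[2, 17] = 0.3
    print(compare_leaf_positions(planned, delivered))   # [(2, 17, 0.3)]
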

9.E. Quality management program components

Design and implementation of a quality management program based on all the information generated in an FMEA or FTA analysis of this scope is a large and complex task, the details of which are highly dependent on how the planning and delivery processes are implemented in the institution, as well as on the types of treatments administered. Thus, a QM program should be individualized to the relevant processes, case mix, methods, and equipment used in each center, and it is not expected that one standard set of QM guidelines and methods will be appropriate for each clinic.

However, the analysis presented in this work highlights a general set of QM requirements and needs. To address the QM tasks that have been identified efficiently, it is clear that collecting the appropriate quality assurance, quality control, and procedural tasks into a set of recommendations will help organize the QM program and increase the safety and quality of treatment for patients treated with IMRT (and other similar techniques).

Note: These checks are proposed as a starting point, for consideration and incorporation into the QM program defined by each institution. These recommendations are not complete, nor appropriate for all situations.

Most of the items in the following tables are found in other guidance documents.1,2,72,136–139 However, the TG feels it is useful to collect these items in list form and in the order in which they might occur in a typical clinical process. For TG 100, this served as a welcome reality check both on our process and on the intuition and experience that informs current community best practices.

One of the general results of the FMEA and associated FTA is the clear need to define site-specific treatment planning and delivery protocols that serve as the basis for simulation, planning, and treatment delivery expectations, methods, and QM procedures. This general standardization and documentation of the methods to be used addresses many of the most common failure modes for many of the most critical steps in the planning and delivery process and is a crucial way to avoid training and procedure lapses. Example checklist of Table IV summarizes issues, procedures, decisions, and QM that should be defined for each clinical site-specific pretreatment workup protocol.

Many of the QM procedures suggested by FMEA and FTA have been grouped into sets of checks that occur at key points in the process; example checklists have been created for these sets.

Example checklist of Table V deals with issues related to preparation of the patient dataset for planning that is suggested by the TG-100 FMEA. These items should be confirmed or verified before moving from the anatomy definition task to treatment planning. This relatively new and uncommon set of checks is sometimes incorporated in the final planning check or is addressed by an even further downstream peer review session (for example, chart rounds) after the first week of treatment. IMRT makes this check crucial, since problems in the patient anatomical model or the initial directive can lead to major errors in IMRT planning, and correction of those errors at the end of the IMRT planning process is very inefficient, and often unlikely, since a great deal of work must be redone. It is even more crucial for hypofractionated treatments, which may be completed before the next chart rounds. All anatomy definitions and preliminary planning instructions should be checked, mostly by the physician but also by the planner, to assure that an accurate model of the patient is used for planning.

The next logical checkpoint in the planning/and delivery process occurs at the end of treatment planning, as the plan is approved, finalized, and prepared for treatment. The treatment-plan check has been a staple of standard QM programs and a recommendation endorsed by TG 100. Example checklist of Table VI presents many of the issues to be confirmed or verified as part of the plan check. It is important to note that this involves much more than a simple check of the mechanics of the plan (MUs, gantry angles, etc.). The ability of the plan to satisfy the goals of the treatment-plan directive and the ability of the plan to be delivered safely must be reviewed by someone, typically the physicist, independently of the physician who approves the plan for use and the planner who generates the plan. Guidance for the physician’s review, evaluation, and approval of the plan for clinical use is also a crucial part of any good QM process.

In the days of paper charts and manual treatment setup, assuring that the correct treatment planning information was written in the chart was adequate preparation for the start of the patient’s treatment. However, in modern radiotherapy there are many steps in the process that occur between the evaluation and approval of the plan for treatment and the actual delivery of that plan to the patient on Day 1. Preparation of the detailed treatment prescriptions, transfer (and perhaps transformation) of the planning-system information onto the treatment-management system, and then to the computer-driven treatment-delivery system all occur before the patient arrives. There is also a complex process involved with setting the patient up for their first treatment, including confirmation of setup and, often, the use of image guidance to position the patient and to document the correctness of the patient’s position and treatment plan. Example checklist of Table VII describes issues that must be incorporated, in some fashion, into the Day-1 treatment process.

Finally, routine checks of treatment progress are necessary throughout the patient’s treatment course. Though this is often called the “weekly chart check,” it involves more than the chart, whether electronic or paper. The accuracy of patient setup, delivery, dosimetric recording of the treatment information, and the documentation and correctness of image guidance decisions all must be confirmed regularly during the treatment course, or significant errors can continue or propagate throughout the treatment, causing unrecoverable harm. A number of issues to be confirmed are described in the example checklist of Table VIII, though certainly a much broader array of items can be appropriately included in these checks for specific treatment protocols. For example, for patients treated with motion management, such as gating, breathing control, or implanted transponders, additional checks not listed in the example checklist of Table VIII are needed. The frequency of all Day-N checks is also crucial. A 10 Gy/fraction hypofractionated treatment needs checks at each treatment, since many errors occurring more than once could lead to significant and unrecoverable toxicity. It is also important to note that the methodology for these checks needs a great deal of new development, since accomplishing these checks efficiently and effectively with current treatment management systems is often quite time-consuming and difficult. Many new features and techniques are necessary to make these checks as complete and efficient as they need to be.

9.F. Summary and conclusions: IMRT example

The creation and maintenance of a complete quality management program for clinical use of any complex technology, such as IMRT, requires detailed analysis, continual improvement, development of increasingly more effective QM measures, and continuous attention to both details and the overall goal of achieving safe and effective patient treatment.

In Sec. 9 and in Appendix G (Ref. 141) we have used failure modes and effects analysis and fault tree analysis methods to study a generic planning and delivery process for IMRT to illustrate how to apply these general tools to a complex radiotherapy technology and formulate a more comprehensive IMRT quality management program that more effectively promotes quality with more efficient use of available resources. The generic nature of the analyses and the task group mechanism do not make it possible to give complete guidance for any single specific clinical implementation. However, the recommendations of the task group should be a guide to individual institutions as they apply these techniques to their individual processes.

The FMEA and FTA methodologies described in Sec. 6 were used to determine the most likely points of failure and to construct a model QM program for one example generic IMRT treatment process. Analysis of the types and causes of failures and their relative severity (S), likelihood of occurrence (O), and lack of detectability (D) assigned by the TG members was used to order failure modes by risk and severity. This ranked list of failure modes was then analyzed to determine, or at least identify, quality management steps that would mitigate those failure modes.

Aside from the many QM recommendations produced by the analysis, a number of “key core components” for quality were identified. Their absence in the QM program significantly increases the likelihood that a large fraction of the failure modes identified will actually occur. The key core components that any safe and high quality IMRT program must include are:

  • Standardized procedures.

  • Adequate training of staff.

  • Clear lines of communication among staff.

In addition to these, other components essential for quality treatments include:

  • Maintenance of hardware and software resources.

  • Adequate staff, physical and computer resources.

Regardless of a department’s treatment process or methods, the TG expects that individual physicists will identify potential failure modes that are not considered in this work, from their own experience of incidents or near misses. These issues must be included in future analyses so that the QM program for IMRT (and other techniques) continues to become more successful at preventing safety and quality problems. It is essential that all members of the radiation therapy team continue to enhance the quality of the QM program, continually update and enhance the QM suggested by TG 100 for their own IMRT practice, and extend the methodology to other types of external-beam and brachytherapy treatments.

10. CONCLUSIONS

Modern-day radiation therapy techniques enable the delivery of highly conformal radiation dose distributions to clinical target volume(s) while sparing the surrounding normal tissues. However, this improvement comes with increased complexity, price, and potentially risk. A major component of the increased price and risk lies in the complexity of advanced technology radiotherapy planning, delivery, and clinical workflow and the resultant expenditure of time and resources for QM. It is clear from the published literature and from the work reported here that there are many sources of error that contribute to dose uncertainties which can potentially harm a patient or negate the treatment benefits.

The complexity of modern day radiotherapy planning and its accurate and safe delivery arises from many factors, including (a) the fact that radiation therapy consists of many complex subprocesses, each with its own uncertainties and risks, and which must be accurately executed and safely handed off to prevent error propagation; (b) modern dose delivery techniques (e.g., IMRT, SRS, SBRT) have many more degrees of freedom (e.g., leaf sequences) to manipulate the dose distribution than corresponding techniques of earlier eras (e.g., three-dimensional conformal radiotherapy), greatly increasing device complexity and the number of potential error pathways; and (c) modern treatments are planned on the basis of a 3D anatomical model derived from medical images, making the treatment delivery accuracy highly dependent on image quality and the correct interpretation and use of the imaging information. The probability of severe target underdose or normal tissue injury increases with increasing demand for dose conformality and normal-tissue avoidance. Mitigating the risk of actualizing these potential error pathways and thereby adversely impacting treatment quality or injuring patients can only be achieved by carefully designed and documented clinical workflow that encompasses not only physicists but the entire team of professionals consisting of physicians, dosimetrists, therapists, nurses, and administrators, and a quality management program that takes as its goals the correct operation of devices and correct execution of the planning and delivery processes.

TG 100 concurs with previously published QA guidance and community consensus that quality assurance test procedures and tolerance limits for the performance of radiotherapy planning and delivery systems should be dictated by the requirement to reduce overall uncertainty (random and systematic) in delivered radiation dose to a patient to less than 5%.140 One motivation for the work of TG 100 is that current QA guidance typically does not expend sufficient effort on preventing low-probability “catastrophic” events39,41–43 which pose very high risks to individual patients (random or sporadic events) or to groups of patients (systematic events). Sporadic catastrophic events, e.g., treatment of IMRT fields without movement of the MLC leaves, can entail interactions between users and device interfaces and often may be caused by upstream user errors that cause grossly incorrect input data to be propagated through the planning/delivery process, rather than by the erroneous functioning of a single device. The process-oriented QM proposed in this report attempts to make avoidance and detection of such events an important priority, in the tradition of several recent AAPM task groups (TG-59, process-oriented sections of TG-56, and TG-135) which address treatment safety. Continued development and application of the risk-based quality management methods discussed in this report to clinical radiotherapy processes should help improve the overall safety and quality of the radiotherapy process, and make possible more efficient methods for the mitigation of safety hazards and quality limitations throughout the RT process.

ACKNOWLEDGMENTS

Members of TG 100 would like to express their sincere gratitude to Paul Medin for making many contributions during the early phase of the activities of the Task Group. The authors would also like to thank Li Zeng, Silas Bernardoni, Andrew Dolan, and Bo Zhao for their help with the FMEA and fault tree analysis presented in this report. Special thanks also to all the reviewers from the Quality Assurance Subcommittee, Therapy Physics Committee, Science Council, Professional Council, Ad Hoc Committee for the implementation of TG 100 report, and many others who performed a review of the document in confidence.

APPENDIX A: PRACTICAL GUIDES TO PERFORMING FMEA, FTA

1. Guidelines for applying risk assessment, industrial quality management tools, and techniques

a. Performing a process analysis and risk assessment

1. Define the process

  • Assemble a cross-functional team from the organization to select a process. Choose a process that can be improved significantly through the analysis, for example one that is problematic, complex, difficult, new, or potentially hazardous.

  • Assemble a cross-functional team familiar with the process. All individuals who participate in the process should be invited to be members of the team. Getting as many different perspectives on the process as possible is important.

  • Develop a process map, flow chart, or process tree of the process.

A visual representation or “picture” showing the entire process can be very useful. People involved in the analysis (and the process) can see how what they do fits into the overall process and gain an understanding of what is done upstream and downstream from their part of the process. That knowledge and insight often results in creative process improvement ideas.

2. Perform a risk assessment of the process using FMEA

  • Ideally, the same cross-functional team that developed the process tree, flow chart, or process map should participate in the FMEA. Each FMEA team should have a facilitator, preferably someone not intimately involved in the process under review.

  • Figure 8 shows a conventional FMEA form. Most organizations use this form or a modified version of this form to guide their FMEA efforts.

FIG. 8. Traditional failure modes and effects analysis worksheet.

Steps in performing an FMEA.

  • Step 1.

    List each process step defined in the process tree/flow chart/process map.

  • Step 2.

    Identify each potential failure mode for each process step. A failure mode is defined as the way in which a failure occurs or is observed, or the way in which a process step can fail to meet its intended purpose. Each step in the process could, and usually does, have several different failure modes.

  • Step 3.

    Identify the potential causes of each failure mode. Each failure mode can and usually does have several potential causes. The use of root cause analysis tools such as fish bone diagrams or affinity diagrams can be helpful in completing this step.

  • Step 4.
    Identify the potential effects or results for each failure mode if it were to occur and not be detected. Normally there are three levels of effects for each failure mode.
    • Local effect—the effect of the failure mode at the process step level.
    • Downstream effect—the effect of the failure mode on the next step downstream from the process step being analyzed.
    • End effect—the effect of the failure mode at the end point of the overall process being analyzed.
  • Note. The prescribed method for defining the effects of a failure mode requires identifying three different levels of effects. Many organizations, however, only identify the end effect. This is an acceptable alternative practice and can be less confusing than the prescribed method of identifying three levels of effects.

  • Step 5.
    Identify current process controls. There are three basic categories of process controls, actions that have been taken that will:
    • Prevent the occurrence of the cause of a failure mode.
    • Detect the failure mode before it produces the end effect.
    • Moderate the severity of the results if a failure mode occurs.
  • Examples of process controls include inspection and other quality control measures, training, work instructions, and performance monitoring.

    If the intent is to evaluate the utility of the current controls, the FMEA should be performed ignoring them and then checked to see whether the analysis indicates they should be used.

  • Step 6.
    Determine the likelihood that the process step will fail and result in some problem. Two independent factors that contribute to this likelihood are used to make this determination.
    • Occurrence—the likelihood that the cause of a failure mode will occur and result in the failure mode.
    • Detection—the likelihood that a failure mode will not be detected, when it occurs, before causing any significant or serious end effects.
  • Determine the seriousness of the end effect resulting from the failure mode.
    • Severity—the severity of the end effect for a specific failure mode, given that the failure mode did occur.
  • Each of the three factors is ranked on a scale from one to ten, with ten being the worst-case scenario. TG 100 developed customized ranking scales relevant to radiotherapy processes (see Table II).

  • Step 7.

    Calculate the RPN for each failure mode, cause, and effect combination. The RPN is the product of the three factors: RPN = occurrence × detection × severity. High RPNs indicate process weaknesses or potentially hazardous process steps (a minimal calculation sketch follows this list).

  • Step 8.

    Identify the process steps with the highest RPNs and severity values. There is no standard convention for this step.

    Process step/failure mode combinations that have a high severity ranking also require corrective actions even though their individual RPNs might be relatively low. A process step with a serious end effect needs to be evaluated for potential corrective action regardless of its likelihood of occurrence or its detectability. Even though the probability of the failure mode occurring and the likelihood of it not being detected might be low, there is always a small chance that it might occur and go undetected, resulting in a serious end effect.

  • Step 9.
    Develop and implement additional process controls for those process step, failure mode, and cause combinations that have the highest RPNs or high severity rankings. These new process controls should focus on what can be done to:
    • Reduce or eliminate the causes of failure modes.
    • Increase the probability that the failure mode will be detected before a serious end effect occurs.
    • Moderate the severity of an end effect if a failure mode does occur.
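To make Steps 6–8 concrete, the short Python sketch below computes RPNs for a few failure mode/cause rows and ranks them, flagging high-severity rows regardless of their RPN. It is an illustration only; the step names, causes, and rankings are hypothetical and are not taken from the TG-100 tables.

from dataclasses import dataclass

@dataclass
class FmeaRow:
    """One FMEA worksheet row: a process step/failure mode/cause combination."""
    step: str
    failure_mode: str
    cause: str
    occurrence: int  # O: 1 (rare) to 10 (almost certain), per the adopted ranking scale
    detection: int   # D: 1 (almost always caught) to 10 (almost never caught)
    severity: int    # S: 1 (negligible) to 10 (catastrophic)

    @property
    def rpn(self) -> int:
        # Step 7: the risk priority number is the product of the three factors.
        return self.occurrence * self.detection * self.severity

# Invented rows for the "Evaluate Plan" step (hypothetical values).
rows = [
    FmeaRow("Evaluate Plan", "DVH reviewed on wrong structure", "ambiguous contour names", 4, 5, 7),
    FmeaRow("Evaluate Plan", "prescription dose misread", "no standardized prescription form", 2, 3, 9),
    FmeaRow("Evaluate Plan", "plan approved without physician review", "unclear hand-off procedure", 3, 6, 8),
]

# Step 8: rank by RPN, but keep any high-severity row in view regardless of its RPN.
SEVERITY_FLAG = 8
for row in sorted(rows, key=lambda r: r.rpn, reverse=True):
    note = "  <- high severity" if row.severity >= SEVERITY_FLAG else ""
    print(f"RPN={row.rpn:3d} (O={row.occurrence}, D={row.detection}, S={row.severity}) {row.failure_mode}{note}")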

A fault tree of the process helps the team appreciate the propagation of errors. The fault tree should be linked to the FMEA. Each end effect from the FMEA becomes an undesired event at the top of the fault tree, each failure mode is listed at the next lower level of the tree, and the FMEA causes are listed at the level below that. The RPNs for each failure mode, cause, and effect combination from the FMEA should be identified on the fault tree at the lowest level of the tree. This fault tree/FMEA combination diagram provides a visual representation of the FMEA analysis. It allows the group completing the analysis to see critical nodes in the fault tree where corrective measures can prevent the propagation of failure modes leading to undesirable events or end effects. This fault tree/FMEA diagram also makes it easier to see the most frequently occurring causes across failure modes, which might indicate an organizational weakness. For example, if there is a preponderance of causes with high RPNs related to inadequate training, the organization should consider making significant improvements to its training program.
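A minimal sketch of this linkage is shown below, with the FMEA end effect at the top of the tree, failure modes one level down, and causes (with their RPNs from the FMEA) at the lowest level so that causes can be compared across failure modes. All names, gate assignments, and numbers are invented for illustration.

# Hypothetical fault tree fragment stored as nested dictionaries and linked to the FMEA.
fault_tree = {
    "end_effect": "Wrong dose distribution delivered",
    "gate": "OR",  # any one failure mode alone can produce the end effect
    "failure_modes": [
        {
            "name": "Plan evaluated against wrong structure set",
            "gate": "OR",
            "causes": [
                {"cause": "Ambiguous contour names", "rpn": 140},
                {"cause": "No structure-naming convention", "rpn": 96},
            ],
        },
        {
            "name": "Prescription dose misread",
            "gate": "OR",
            "causes": [
                {"cause": "No standardized prescription form", "rpn": 54},
            ],
        },
    ],
}

def causes_by_rpn(tree):
    """Flatten the tree so risky causes can be compared across failure modes."""
    for fm in tree["failure_modes"]:
        for c in fm["causes"]:
            yield c["rpn"], c["cause"], fm["name"]

for rpn, cause, fm in sorted(causes_by_rpn(fault_tree), reverse=True):
    print(f"RPN={rpn:3d}  cause='{cause}'  failure mode='{fm}'")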

APPENDIX B: AN INTRODUCTORY EXERCISE FOR PROCESS MAPPING, FMEA, FTA, AND QM DESIGN FOR IMRT TREATMENT PLANNING

1. Process mapping

a. Learning objective

The goal of this exercise is to develop a simple process map for IMRT treatment planning, from the time the dosimetrist receives the final region-of-interest contours from the physician to the time the plan is ready to be treated.

b. Exercise

Below is a rough guide that can be used as an outline for this exercise, as well as some tips for creating useful process maps.

Step 1: Decide what process to map. The scale of the process is an important concern here. Mapping the entire external beam radiation oncology process, for example, is a large project that could take many weeks.

Step 2: Form a group and identify a team leader. Normally we would include a representative from each of the professional groups involved, but in the context of this exercise this is not possible.

Step 3: Create an initial process map. It is often useful to make a first draft that does not attempt to capture the entire process in detail but rather the workflow at a more general level.

Step 4: Iterative mapping. Refine the process map, adding levels of detail as necessary.

Step 5: Use the process map as the basis of the FTA and FMEA exercises.

Tips for creating useful process maps.

  • 1.

    It is often useful to look at processes from the patient’s perspective.

  • 2.

    For clinical processes, a multiprofessional team is necessary for the development of a valid map.

  • 3.

    The number of subprocesses identified should be the smallest number needed to meet the objective.

  • 4.

    The users of the map should have the same understanding of the meaning of the subprocesses.

  • 5.

    Choose the right level of detail. A map that is too general loses its utility, while one that is too detailed becomes unmanageable and staff members lose the big picture.

  • 6.

    Do not get hung up on fancy graphics. There is value in the process of creating the map.
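Where a whiteboard or flow-charting tool is not at hand, even a plain data structure can serve as a first-draft map and be refined iteratively. The sketch below holds a hypothetical IMRT planning process map as subprocesses with their immediate successors; the step names are invented placeholders, not the contents of Fig. 9 or Fig. 10.

# Hypothetical first-draft process map: each subprocess maps to its immediate successors.
process_map = {
    "Receive final contours":  ["Set planning directives"],
    "Set planning directives": ["Generate IMRT plan"],
    "Generate IMRT plan":      ["Evaluate Plan"],
    "Evaluate Plan":           ["Physician plan approval", "Generate IMRT plan"],  # loop back if rejected
    "Physician plan approval": ["Plan ready to treat"],
    "Plan ready to treat":     [],
}

# Consistency check: every successor should itself be a mapped subprocess,
# so the team shares the same understanding of each box on the map.
for step, successors in process_map.items():
    for nxt in successors:
        assert nxt in process_map, f"'{nxt}' (after '{step}') is not defined on the map"
print(f"{len(process_map)} subprocesses mapped")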

2. Failure modes and effects analysis

a. Exercise objectives

After this exercise, the team should be able to perform a basic failure mode and effects analysis and identify risks or hazards for a given process (Fig. 9).

FIG. 9. Example of process maps.

b. Exercise overview

The team will complete an FMEA for one or more steps identified in the process tree segment for intensity modulated radiation therapy below (Fig. 10). “Evaluate Plan” will be used to generate the FMEA and FTA examples.

FIG. 10. Treatment planning segment from a process tree describing IMRT process.

Steps:

  • 1.

    Form a team. Teams familiar with the process being analyzed generally produce a higher-quality FMEA than an individual working alone.

  • 2.

    Select one of the steps from the treatment planning process tree segment and use the table below to perform an FMEA on that step.

  • 3.
    Performing the FMEA (Fig. 11)
    • a.
      List the process step your team selected.
    • b.
      Identify ways in which the process step can fail (failure modes). List at least four. To minimize confusion, your team should use a consistent approach to identifying failure modes, such as always defining failure modes in terms of specific process failures. For example, for the step of delivering a specific prescribed dose of radiation, failure modes should include dose delivered to the wrong location, too little radiation delivered, and too much radiation delivered.
    • c.
      For one of the failure modes you identified, list several causes that could result in that failure mode. It is important that you list causes that could occur and not limit the analysis to causes of failure modes that your team thinks are likely to occur. Typical causes of failure modes include but are not limited to the following:
      • Lack of formal and written procedures, work instructions, or work methods.
      • Inadequate training.
      • Insufficient time to complete a task due to other tasks requiring attention.
      • Equipment or software malfunction.
      • Stressful work environments leading to mistakes.
    • d.
      Identify the potential effects that could result when the failure mode occurs. It is important to identify the worst possible outcome of a failure mode. Your team should not consider how likely an effect is to occur. Very serious effects could occur as a result of many failure modes in radiation therapy.
    • e.
      List all process controls currently in place and being used. There are three categories of process controls:
      • Controls that reduce the likelihood of specific causes of failure modes occurring. Examples include but are not limited to:
        • 1.
          Operator training or certification.
        • 2.
          Written procedures and work instructions.
        • 3.
          Process checklists.
        • 4.
          Statistical process control (SPC).
      • Controls that detect failure modes before serious effects result. In-process inspections of all types are the most commonly used detection controls. Examples include but are not limited to:
        • 1.
          Peer review of process decisions.
        • 2.
          Downstream process checks.
      • Controls that will moderate the severity of effects that could result from a failure mode. This category of control is typically difficult to execute in radiation therapy. The time between a failure mode occurring and the resulting, potentially very serious, effects is very short and damage is often inevitable once a failure mode occurs.
    • f.
      Judge the effectiveness of the current controls by
      • Defining the likelihood that a specific cause of a failure mode will occur.
      • Defining the probability that the current controls will detect the failure mode before any serious effects result.
      • Specifying the seriousness of the effects resulting from the failure mode.
      • Using the TG-100 table (Table II) to assign values for occurrence of a cause, detection of a failure mode, and seriousness of effect.
    • g.
      Calculate the RPN by multiplying the occurrence, detection, and severity rankings.
    • h.
      Identify and list new process controls that will improve the
      • Likelihood of preventing specific causes of failure modes from occurring, and
      • probability of detecting a failure mode before any serious effects occur.
    • i.
      Estimate the improvements resulting from the recommended actions in terms of:
      • Reducing the occurrence of the cause.
      • Improving the detection of the failure mode.
      • Calculating the new RPN by multiplying the estimated occurrence and detection rankings that result from the recommended actions by the original (carried-over) severity ranking (a minimal worked example follows Fig. 11).
FIG. 11. FMEA table.
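As a hypothetical worked example of items g–i for the “Evaluate Plan” step (all rankings and the proposed control below are invented, not TG-100 values):

# Hypothetical before/after RPN calculation for one failure mode/cause combination.
def rpn(occurrence, detection, severity):
    return occurrence * detection * severity

# "Evaluate Plan" failure mode: plan approved against the wrong prescription.
o_before, d_before, severity = 4, 6, 8
print("RPN before:", rpn(o_before, d_before, severity))  # 192

# Recommended action: add an independent check of the prescription against the
# approved plan. Estimated effect: occurrence unchanged, detection much better,
# severity carried over unchanged (the harm, if it still happens, is the same).
o_after, d_after = 4, 2
print("RPN after: ", rpn(o_after, d_after, severity))    # 64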

3. Fault tree analysis

a. Exercise objectives

After this exercise, the team should be able to:

  • 1.

    Construct a fault tree from an FMEA, and

  • 2.

    perform basic FTA.

b. Exercise overview

In this exercise the team will complete an FTA for the step “Evaluate Plan” from the FMEA constructed above.

Steps:

  • 1.

    Form a team; a team composed of members who were involved in developing the process tree and performing the FMEA for various steps of the process tree tends to produce a higher-quality fault tree than a single individual.

  • 2.
    Performing the FTA:
    • a.
      The failure mode to the far left would be “Treatment Planning Failure.” There are many more paths leading to that failure than the one the team will construct, but the team will not consider those now.
    • b.
      The next box to the right begins the portion of the tree that the team will develop and contains “Failure in Evaluating the Plan.” When multiple potential failures could independently lead to the failure, connect the causes on the right to the failure on the left with an OR gate. If multiple causes have to happen simultaneously, connect them to the failure with an AND gate.
    • c.
      Working to the right, from each box, continue to add boxes for potential causes that could directly lead to each failure.
    • d.
      Remember to stop a pathway when it reaches the end of the facility’s control.
    • e.
      As the team works on the fault tree, it may uncover potential failure modes that were overlooked in the FMEA. Add those to the tree.
    • f.
      Add the RPN and severity scores to the branches of the tree.

From this fault tree, the team will now be able to see how failures propagate and can potentially cause harm to the patient.
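A minimal sketch of how basic events propagate through OR and AND gates is given below; the event names are invented, and in a real FTA the basic events would be the causes identified on the FMEA.

# Hypothetical gate evaluation showing how failures propagate up a fault tree.
def gate(kind, inputs):
    """Return True if the output event of the gate occurs."""
    return any(inputs) if kind == "OR" else all(inputs)

# Basic events (True = the cause/failure is present). Names are invented.
dvh_tool_misconfigured = True
reviewer_unaware_of_limits = False
qc_check_fails = True  # a parallel "Failure in QC" branch (see the QM design exercise below)

# "Failure in Evaluating the Plan" occurs if either cause occurs (OR gate) ...
evaluation_failure = gate("OR", [dvh_tool_misconfigured, reviewer_unaware_of_limits])

# ... but it only propagates to "Treatment Planning Failure" if the QC step also
# fails to catch it (AND gate with the QC branch).
treatment_planning_failure = gate("AND", [evaluation_failure, qc_check_fails])

print("Failure in evaluating the plan:", evaluation_failure)
print("Treatment planning failure:    ", treatment_planning_failure)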

4. Quality management design

a. Exercise objectives

After this exercise, the team will:

  • 1.

    Understand how to address potential failures and causes of failure identified during FMEA and FTA, and

  • 2.

    understand how to establish a quality management program.

b. Exercise overview

In this exercise the team will address the potential failure modes and causes identified for the procedural step “Evaluate Plan” from the FMEA and FTA created in the example exercises above. The team may also need the process map created in the example exercise above.

Steps:

  • 1.

    Scan through the potential causes for failures on the right of the FTA. Identify those causes that might indicate inadequate resource allocation to perform the task. These should be addressed by recommendations to increase support. Be specific as to what resources would be requested.

  • 2.

    Identify those causes that result from the lack of any of the key core components (training, procedures and policies, and communications). Address these even if they are low-scoring causes because, potentially, they may indicate a general problem in the facility that needs to be addressed. List recommended actions to mitigate the causes.

  • 3.

    Again, scan through the potential failure modes and causes, paying particular attention to those with the higher RPN or severity values. Is there any redesign of the process that would eliminate these potential failures or reduce their RPN values? (Nothing will reduce the severity.) If a redesign looks appropriate, would that lead to new potential failures or increase the RPN values for those previously identified?

  • 4.
    For the remaining potential failure modes and causes, begin with the box with the highest-ranking RPN value.
    • a.
      Would thorough commissioning eliminate this potential failure mode?
    • b.
      If not, at this point it must be addressed through quality management: quality assurance, quality control, or a combination of the two.
      • Most likely, quality control would be associated with the particular step, entering the fault tree as a “Failure in QC” in parallel with the cause and joined through an AND gate leading to the resulting potential failure mode.
      • Quality assurance would work downstream, after the cause. It is efficient to design QA such that it may cover several causes or potential failure modes.
    • Add the QM steps to the fault tree and:
      • Specify the recommended tool and methodology, and note the strength of the tool according to the ranking of the Institute for Safe Medication Practices (ISMP), and
      • for QA steps, estimate how frequently the tests should be performed.
  • 5.

    Continue the exercise as in step 4 for the box with the next highest-ranking RPN value. Continue addressing boxes until the RPN and severity values are so low that it would not be worth using resources to prevent their effects. However, make sure all potential failures with severity values of four or greater are addressed (a minimal prioritization sketch follows these steps).
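The prioritization in steps 4 and 5 can be pictured as a simple filter over the fault-tree causes, sketched below with invented entries and thresholds: causes are worked from highest RPN downward, and no cause with a severity of four or greater is dropped.

# Hypothetical prioritization worklist built from fault-tree causes.
causes = [
    {"cause": "Ambiguous contour names",           "rpn": 140, "severity": 7},
    {"cause": "No standardized prescription form", "rpn": 54,  "severity": 9},
    {"cause": "Monitor calibration drift",         "rpn": 18,  "severity": 3},
]

RPN_CUTOFF = 30      # below this, resources are judged better spent elsewhere ...
SEVERITY_FLOOR = 4   # ... unless the severity is four or greater

worklist = [c for c in causes
            if c["rpn"] >= RPN_CUTOFF or c["severity"] >= SEVERITY_FLOOR]

for c in sorted(worklist, key=lambda c: c["rpn"], reverse=True):
    print(f"address: {c['cause']}  (RPN={c['rpn']}, S={c['severity']})")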

REFERENCES

  • 1.Klein E. E., Hanley J., Bayouth J., Yin F. F., Simon W., Dresser S., Serago C., Aguirre F., Ma L., Arjomandy B., Liu C., Sandin C., and Holmes T., “Task Group 142 report: Quality assurance of medical accelerators,” Med. Phys. 36, 4197–4212 (2009). 10.1118/1.3190392 [DOI] [PubMed] [Google Scholar]
  • 2.Kutcher G. J. et al. , “Comprehensive QA for radiation oncology: Report of AAPM Radiation Therapy Committee Task Group 40,” Med. Phys. 21, 581–618 (1994). 10.1118/1.597316 [DOI] [PubMed] [Google Scholar]
  • 3.Nath R., Anderson L. L., Luxton G., Weaver K. A., Williamson J. F., and Meigooni A. S., “Dosimetry of interstitial brachytherapy sources: Recommendations of the AAPM Radiation Therapy Committee Task Group No. 43. American Association of Physicists in Medicine,” Med. Phys. 22, 209–234 (1995). 10.1118/1.597458 [DOI] [PubMed] [Google Scholar]
  • 4.Fraass B., Doppke K., Hunt M., Kutcher G., Starkschall G., Stern R., and Van Dyke J., “American Association of Physicists in Medicine Radiation Therapy Committee Task Group 53: Quality assurance for clinical radiotherapy treatment planning,” Med. Phys. 25, 1773–1829 (1998). 10.1118/1.598373 [DOI] [PubMed] [Google Scholar]
  • 5.Nath R., Anderson L. L., Meli J. A., Olch A. J., Stitt J. A., and Williamson J. F., “Code of practice for brachytherapy physics: Report of the AAPM Radiation Therapy Committee Task Group No. 56. American Association of Physicists in Medicine,” Med. Phys. 24, 1557–1598 (1997). 10.1118/1.597966 [DOI] [PubMed] [Google Scholar]
  • 6.Almond P. R., Biggs P. J., Coursey B. M., Hanson W. F., Huq M. S., Nath R., and Rogers D. W., “AAPM’s TG-51 protocol for clinical reference dosimetry of high-energy photon and electron beams,” Med. Phys. 26, 1847–1870 (1999). 10.1118/1.598691 [DOI] [PubMed] [Google Scholar]
  • 7.Nath R., Biggs P. J., Bova F. J., Ling C. C., Purdy J. A., van de Geijn J., and Weinhous M. S., “AAPM code of practice for radiotherapy accelerators: Report of the AAPM Radiation Therapy Task Group No. 45,” Med. Phys. 21, 1093–1121 (1994). 10.1118/1.597398 [DOI] [PubMed] [Google Scholar]
  • 8.Quality and Safety in Radiotherapy: Learning the New Approaches in Task Group 100 and Beyond, edited by Thomadsen B. R., Dunscombe P., Ford E., Huq M. S., Pawlicki T., and Sutlief S. (Medical Physics Publishing, Madison, WI, 2013). [Google Scholar]
  • 9.Ford E. C., Gaudette R., Myers L., Vanderver B., Engineer L., Zellars R., Song D. Y., Wong J., and DeWeese T. L., “Evaluation of safety in a radiation oncology setting using failure mode and effects analysis,” Int. J. Radiat. Oncol., Biol., Phys. 74, 852–858 (2009). 10.1016/j.ijrobp.2008.10.038 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Ford E. C., Smith K., Terezakis S., Croog V., Gollamudi S., Gage I., Keck J., DeWeese T., and Sibley G., “A streamlined failure mode and effects analysis,” Med. Phys. 41, 061709 (6pp.) (2014). 10.1118/1.4875687 [DOI] [PubMed] [Google Scholar]
  • 11.Sawant A., Dieterich S., Svatos M., and Keall P., “Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems,” Med. Phys. 37, 6466–6479 (2010). 10.1118/1.3517837 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Terezakis S. A., Pronovost P., Harris K., Deweese T., and Ford E., “Safety strategies in an academic radiation oncology department and recommendations for action,” Jt. Comm. J. Qual. Patient Saf. 37, 291–299 (2011). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Ford E. C., Smith K., Harris K., and Terezakis S., “Prevention of a wrong-location misadministration through the use of an intradepartmental incident learning system,” Med. Phys. 39, 6968–6971 (2012). 10.1118/1.4760774 [DOI] [PubMed] [Google Scholar]
  • 14.Broggi S., Cantone M. C., Chiara A., Di Muzio N., Longobardi B., Mangili P., and Veronese I., “Application of failure mode and effects analysis (FMEA) to pretreatment phases in tomotherapy,” J. Appl. Clin. Med. Phys. 14, 265–277 (2013). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Denny D. S., Allen D. K., Worthington N., and Gupta D., “The use of failure mode and effect analysis in a radiation oncology setting: The cancer treatment centers of America experience,” J. Healthcare Qual. 36, 18–28 (2014). 10.1111/j.1945-1474.2011.00199.x [DOI] [PubMed] [Google Scholar]
  • 16.Perks J. R., Stanic S., Stern R. L., Henk B., Nelson M. S., Harse R. D., Mathai M., Purdy J. A., Valicenti R. K., Siefkin A. D., and Chen A. M., “Failure mode and effect analysis for delivery of lung stereotactic body radiation therapy,” Int. J. Radiat. Oncol., Biol., Phys. 83, 1324–1329 (2012). 10.1016/j.ijrobp.2011.09.019 [DOI] [PubMed] [Google Scholar]
  • 17.Noel C. E., Santanam L., Parikh P. J., and Mutic S., “Process-based quality management for clinical implementation of adaptive radiotherapy,” Med. Phys. 41, 081717 (9pp.) (2014). 10.1118/1.4890589 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Scorsetti M., Signori C., Lattuada P., Urso G., Bignardi M., Navarria P., Castiglioni S., Mancosu P., and Trucco P., “Applying failure mode effects and criticality analysis in radiotherapy: Lessons learned and perspectives of enhancement,” Radiother. Oncol. 94, 367–374 (2010). 10.1016/j.radonc.2009.12.040 [DOI] [PubMed] [Google Scholar]
  • 19.Vlayen A., “Evaluation of time- and cost-saving modifications of HFMEA: An experimental approach in radiotherapy,” J. Patient. Saf. 7, 165–168 (2011). 10.1097/PTS.0b013e31822b07ee [DOI] [PubMed] [Google Scholar]
  • 20.Cantone M. C., Ciocca M., Dionisi F., Fossati P., Lorentini S., Krengli M., Molinelli S., Orecchia R., Schwarz M., Veronese I., and Vitolo V., “Application of failure mode and effects analysis to treatment planning in scanned proton beam radiotherapy,” Radiat. Oncol. 8:127 (2013). 10.1186/1748-717X-8-127 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Yang F., Cao N., Young L., Howard J., Logan W., Arbuckle T., Sponseller P., Korssjoen T., Meyer J., and Ford E., “Validating FMEA output against incident learning data: A study in stereotactic body radiation therapy,” Med. Phys. 42, 2777–2785 (2015). 10.1118/1.4919440 [DOI] [PubMed] [Google Scholar]
  • 22.Veronese I., De Martin E., Martinotti A. S., Fumagalli M. L., Vite C., Redaelli I., Malatesta T., Mancosu P., Beltramo G., Fariselli L., and Cantone M. C., “Multi-institutional application of failure mode and effects analysis (FMEA) to CyberKnife stereotactic body radiation therapy (SBRT),” Radiat. Oncol. 10:132 (2015). 10.1186/s13014-015-0438-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Manger R. P., Paxton A. B., Pawlicki T., and Kim G. Y., “Failure mode and effects analysis and fault tree analysis of surface image guided cranial radiosurgery,” Med. Phys. 42, 2449–2461 (2015). 10.1118/1.4918319 [DOI] [PubMed] [Google Scholar]
  • 24.Younge K. C., Wang Y., Thompson J., Giovinazzo J., Finlay M., and Sankreacha R., “Practical implementation of failure mode and effects analysis for safety and efficiency in stereotactic radiosurgery,” Int. J. Radiat. Oncol., Biol., Phys. 91, 1003–1008 (2015). 10.1016/j.ijrobp.2014.12.033 [DOI] [PubMed] [Google Scholar]
  • 25.Jones R. T., Handsfield L., Read P. W., Wilson D. D., Van Ausdal R., Schlesinger D. J., Siebers J. V., and Chen Q., “Safety and feasibility of STAT RAD: Improvement of a novel rapid tomotherapy-based radiation therapy workflow by failure mode and effects analysis,” Pract. Radiat. Oncol. 5, 106–112 (2015). 10.1016/j.prro.2014.03.016 [DOI] [PubMed] [Google Scholar]
  • 26.Damato A. L., Lee L. J., Bhagwat M. S., Buzurovic I., Cormack R. A., Finucane S., Hansen J. L., O’Farrell D. A., Offiong A., Randall U., Friesen S., and Viswanathan A. N., “Redesign of process map to increase efficiency: Reducing procedure time in cervical cancer brachytherapy,” Brachytherapy 14, 471–480 (2015). 10.1016/j.brachy.2014.11.016 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Sayler E., Eldredge-Hindy H., Dinome J., Lockamy V., and Harrison A. S., “Clinical implementation and failure mode and effects analysis of HDR skin brachytherapy using Valencia and Leipzig surface applicators,” Brachytherapy 14, 293–299 (2015). 10.1016/j.brachy.2014.11.007 [DOI] [PubMed] [Google Scholar]
  • 28.Giardina M., Castiglia F., and Tomarchio E., “Risk assessment of component failure modes and human errors using a new FMECA approach: Application in the safety analysis of HDR brachytherapy,” J. Radiol. Prot. 34, 891–914 (2014). 10.1088/0952-4746/34/4/891 [DOI] [PubMed] [Google Scholar]
  • 29.Lopez-Tarjuelo J., Bouche-Babiloni A., Santos-Serra A., Morillo-Macias V., Calvo F. A., Kubyshin Y., and Ferrer-Albiach C., “Failure mode and effect analysis oriented to risk-reduction interventions in intraoperative electron radiation therapy: The specific impact of patient transportation, automation, and treatment planning availability,” Radiother. Oncol. 113, 283–289 (2014). 10.1016/j.radonc.2014.11.012 [DOI] [PubMed] [Google Scholar]
  • 30.Masini L., Donis L., Loi G., Mones E., Molina E., Bolchini C., and Krengli M., “Application of failure mode and effects analysis to intracranial stereotactic radiation surgery by linear accelerator,” Pract. Radiat. Oncol. 4, 392–397 (2014). 10.1016/j.prro.2014.01.006 [DOI] [PubMed] [Google Scholar]
  • 31.Joint Commission on Accreditation of Healthcare Organizations, Comprehensive Accreditation Manual for Hospitals: The Official Handbook, Standard LD, 5.2. ed., JCAHO, Oakbrook Terrace, IL, 2001. [PubMed]
  • 32.Joint Commission Perspectives on Patient Safety, Using FMEA to Assess and Reduce Risk JCAHO, Oakbrook Terrace, IL, 2001. [Google Scholar]
  • 33.Joint Commission on Accreditation of Healthcare Organizations, Failure Modes and Effects Analysis: Proactive Risk Reduction, JCAHO, Oakbrook Terrace, IL,2002. [Google Scholar]
  • 34.Thwaites D., Scalliet P., Leer J. W., and Overgaard J., “Quality assurance in radiotherapy. European Society for Therapeutic Radiology and Oncology Advisory Report to the Commission of the European Union for the ‘Europe against cancer programme,’” Radiother. Oncol. 35, 61–73 (1995). 10.1016/0167-8140(95)01549-V [DOI] [PubMed] [Google Scholar]
  • 35.European Society for Therapeutic Radiology and Oncology, Practical guidelines for the implementation of a quality system in radiotherapy: Physics for clinical radiotherapy, Booklet No. 4, ESTRO, Brussels,1998.
  • 36.International Atomic Energy Agency, Quality assurance in radiotherapy, IAEA-TECDOC-1040, IAEA, Vienna,1997.
  • 37.American College of Medical Physics, Radiation control and quality assurance in radiation oncology: A suggested protocol, ACMP Report Series No. 2, ACMP, Reston, VA,1986.
  • 38.International Electrotechnical Commission, Medical electrical equipment—Medical electron accelerators: Functional performance characteristics, IEC 976, IEC, Geneva,1989.
  • 39.International Electrotechnical Commission, Medical electrical equipment—Medical electron accelerators in the range 1 MeV–50 MeV: Guidelines for performance characteristics, IEC 977, IEC, Geneva,1989.
  • 40.Bogdanich W., Radiation offers new cures and new ways to do harm, New York Times,2010.
  • 41.Williamson J. F. and Thomadsen B. R., “Foreword. Symposium ‘quality assurance of radiation therapy: The challenges of advanced technologies,’” Int. J. Radiat. Oncol., Biol., Phys. 71, S1 (2008). 10.1016/j.ijrobp.2007.11.033 [DOI] [PubMed] [Google Scholar]
  • 42.The Royal College of Radiologists, Towards safer radiotherapy, Report No. BCFO(08)1, London,2008, https://www.rcr.ac.uk/docs/oncology/pdf/Towards_saferRT_final.pdf.
  • 43.World Health Organization, Radiotherapy Risk Profile—Technical Manual, Geneva,2008, http://www.who.int/patientsafety/activities/technical/radiotherapy_risk_profile.pdf.
  • 44.Ortiz López P., Cosset J. M., Dunscombe P., Holmberg O., Rosenwald J. C., Pinillos Ashton L., Vilaragut Llanes J. J., and Vatnitsky S., “A report of preventing accidental exposures from new external beam radiation therapy technologies,” ICRP Publication 112 (2009). [DOI] [PubMed]
  • 45.ROSIS, Radiation Oncology Safety Information System,2007, http://www.clin.radfys.lu.se.
  • 46.ICRU, “Determination of absorbed dose in a patient irradiated by beams of x- or gamma-rays in radiotherapy procedures,” ICRU Report 74 (International Commission on Radiation Units and Measurement, Bethesda, MD, 1976). [Google Scholar]
  • 47.Herring D. F. and Compton D. H. J., “The degree of precision in the radiation dose delivered in cancer radiotherapy,” Computers in Radiotherapy, Br. J. Radiol. Special Report No. 5, pp. 51–58 (1971).
  • 48.Juran J., “Quality,” in Juran’s Quality Control Handbook, edited by Juran F. G. J. M. (McGraw Hill, New York, NY, 1988), p. 2.6. [Google Scholar]
  • 49.IEC, International Electrotechnical Commission Standard 60601-1, Medical electrical equipment, Part 1—General requirements for basic safety and essential performance, IEC, Geneva,2005.
  • 50.Ford E. C., Fong de Los Santos L., Pawlicki T., Sutlief S., and Dunscombe P., “Consensus recommendations for incident learning database structures in radiation oncology,” Med. Phys. 39, 7272–7290 (2012). 10.1118/1.4764914 [DOI] [PubMed] [Google Scholar]
  • 51.Halvorsen P. H., Das I. J., Fraser M., Freedman D. J., Rice R. E. III, Ibbott G. S., Parsai E. I., T. T. Robin, Jr., and Thomadsen B. R., “AAPM Task Group 103 report on peer review in clinical radiation oncology physics,” J. Appl. Clin. Med. Phys. 6, 50–64 (2005). 10.1120/jacmp.2026.25362 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.VA National Center for Patient Safety, http://www.patientsafety.va.gov.
  • 53.Mutic S. and Brame S., in Error and Near Miss Reporting: A View from North America, edited by Pawlicki T., Dunscombe P., Mundt A. J., and Scalliet P. (Taylor & Francis, New York, NY, 2010), pp. 85–93. [Google Scholar]
  • 54.A Reference Guide For Learning From Incidents In Radiation Treatment (HTA Initiative Series #22), http://www.ihe.ca/publications/library/archived/a-reference-guide-for-learning-from-incidents-in-radiation-treatment.
  • 55.RO-ILS: Radiation Oncology Incident Learning System, https://www.astro.org/Clinical-Practice/Patient-Safety/ROILS/Intro.aspx.
  • 56.Center for the Assessment of Radiological Sciences (CARS), Madison, WI, http://www.cars-pso.org.
  • 57.International Atomic Energy Agency (IAEA), Case studies in the application of probabilistic safety assessment techniques to radiation sources, Final report of a Coordinated Research Project, 2001–2003, IAEA-TECDOC-1494, IAEA, Vienna,2006.
  • 58.International Atomic Energy Agency (IAEA), Safety Standards for Protecting People and the Environment, Safety of Radiation Generators and Sealed Radioactive Sources, Safety Guide No. RS-G-1.10, IAEA, Vienna,2006.
  • 59.International Atomic Energy Agency, Safety Reports Series No. 17: Lessons learned from accidental exposures in radiotherapy, IAEA, Vienna, Austria.
  • 60.Pate-Cornell M. E., Lakats L. M., Murphy D. M., and Gaba D. M., “Anesthesia patient risk: A quantitative approach to organizational factors and risk management options,” Risk Anal. 17, 511–523 (1997). 10.1111/j.1539-6924.1997.tb00892.x [DOI] [PubMed] [Google Scholar]
  • 61.Sheridan-Leos N., Schulmeister L., and Hartranft S., “Failure mode and effect analysis: A technique to prevent chemotherapy errors,” Clin. J. Oncol. Nurs. 10, 393–398 (2006). 10.1188/06.cjon.393-398 [DOI] [PubMed] [Google Scholar]
  • 62.Duwe B., Fuchs B. D., and Hansen-Flaschen J., “Failure mode and effects analysis application to critical care medicine,” Crit. Care Clin. 21, 21–30 (2005). 10.1016/j.ccc.2004.07.005 [DOI] [PubMed] [Google Scholar]
  • 63.Wetterneck T. B., Skibinski K. A., Roberts T. L., Kleppin S. M., Schroeder M. E., Enloe M., Rough S. S., Hundt A. S., and Carayon P., “Using failure mode and effects analysis to plan implementation of smart i.v. pump technology,” Am. J. Health-Syst. Pharm. 63, 1528–1538 (2006). 10.2146/ajhp050515 [DOI] [PubMed] [Google Scholar]
  • 64.Palta J. R., Huq M. S., and Thomadsen B., “Application of risk analysis methods to IMRT quality management,” in Quality and safety in radiotherapy, Learning the new approaches in Task Group 100 and beyond, Medical Physics Monograph no. 36, edited by Thomadsen B., Dunscombe P., Ford E., Huq S., Pawlicki T., and Sutlief S. (2013), pp. 312–349. [Google Scholar]
  • 65.International Atomic Energy Agency (IAEA), Organization for Economic Co-operation and Development—Nuclear Energy Agency, INES: The International Nuclear and Radiological Event Scale User’s Manual, 2008 Edition, IAEA, Vienna, 2013. [Google Scholar]
  • 66.Automotive Industry Action Group, FMEA Manual, 4th ed. (AIAG, Southfield, MI, 2008). [Google Scholar]
  • 67.Institute for Safe Medical Practices (ISMP), Medication error prevention “toolbox,” in Medication Safety Alert,1999, http://www.ismp.org/msaarticles/toolbox.html.
  • 68.Ekaette E. U., Lee R. C., Cooke D. L., Kelly K. L., and Dunscombe P. B., “Risk analysis in radiation treatment: Application of a new taxonomic structure,” Radiother. Oncol. 80, 282–287 (2006). 10.1016/j.radonc.2006.07.004 [DOI] [PubMed] [Google Scholar]
  • 69.Langen K. M., Papanikolaou N., Balog J., Crilly R., Followill D., Goddu S. M., Grant W. III, Olivera G., Ramsey C. R., and Shi C., “QA for helical tomotherapy: Report of the AAPM Task Group 148,” Med. Phys. 37, 4817–4853 (2010). 10.1118/1.3462971 [DOI] [PubMed] [Google Scholar]
  • 70.Dieterich S., Cavedon C., Chuang C. F., Cohen A. B., Garrett J. A., Lee C. L., Lowenstein J. R., d’Souza M. F., D. D. Taylor, Jr., Wu X., and Yu C., “Report of AAPM TG 135: Quality assurance for robotic radiosurgery,” Med. Phys. 38, 2914–2936 (2011). 10.1118/1.3579139 [DOI] [PubMed] [Google Scholar]
  • 71.Benedict S. H., Yenice K. M., Followill D., Galvin J. M., Hinson W., Kavanagh B., Keall P., Lovelock M., Meeks S., Papiez L., Purdie T., Sadagopan R., Schell M. C., Salter B., Schlesinger D. J., Shiu A. S., Solberg T., Song D. Y., Stieber V., Timmerman R., Tome W. A., Verellen D., Wang L., and Yin F. F., “Stereotactic body radiation therapy: The report of AAPM Task Group 101,” Med. Phys. 37, 4078–4101 (2010). 10.1118/1.3438081 [DOI] [PubMed] [Google Scholar]
  • 72.Moran J. M., Dempsey M., Eisbruch A., Fraass B. A., Galvin J. M., Ibbott G. S., and Marks L. B., “Safety considerations for IMRT: Executive summary,” Med. Phys. 38, 5067–5072 (2011). 10.1118/1.3600524 [DOI] [PubMed] [Google Scholar]
  • 73.Solberg T. D., Balter J. M., Benedict S. H., Fraass B. A., Kavanagh B., Miyamoto C., Pawlicki T., Potters L., and Yamada Y., “Quality and safety considerations in stereotactic radiosurgery and stereotactic body radiation therapy: Executive summary,” Pract. Radiat. Oncol. 2, 2–9 (2012). 10.1016/j.prro.2011.06.014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Jaffray D. A., Langen K. M., Mageras G., Dawson L. A., Yan D., Adams R., Mundt A. J., and Fraass B. A., “Safety considerations for IGRT: Executive summary,” Pract. Radiat. Oncol. 3, 167–170 (2013). 10.1016/j.prro.2013.01.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Thomadsen B. R., Erickson B. A., Eifel P. J., Chow Hsu I., Patel R. R., Petereit D. G., Fraass B. A., and Rivard M. J., “A review of safety, quality management, and practice guidelines for high-dose-rate brachytherapy: Executive summary,” Pract. Radiat. Oncol. 4, 65–70 (2014). 10.1016/j.prro.2013.12.005 [DOI] [PubMed] [Google Scholar]
  • 76.Marks L. B., Adams R. A., Pawlicki T., Blumberg A. L., Hoopes D., Brundage M. D., and Fraass B. A., “Enhancing the role of case-oriented peer review to improve quality and safety in radiation oncology: Executive summary,” Pract. Radiat. Oncol. 3, 149–156 (2013). 10.1016/j.prro.2012.11.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Fong de Los Santos L. E., Evans S., Ford E. C., Gaiser J. E., Hayden S. E., Huffman K. E., Johnson J. L., Mechalakos J. G., Stern R. L., Terezakis S., Thomadsen B. R., Pronovost P. J., and Fairobent L. A., “Medical Physics Practice Guideline 4.a: Development, implementation, use and maintenance of safety checklists,” J. Appl. Clin. Med. Phys. 16, 37–59 (2015). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.Safety is No Accident: A Framework for Quality Radiation Oncology and Care, ASTRO, Fairfax, VA,2012, https://www.astro.org/uploadedFiles/Main_Site/Clinical_Practice/Patient_Safety/Blue_Book/SafetyisnoAccident.pdf.
  • 79.Marks L. B., Rose C. M., Hayman J. A., and Williams T. R., “The need for physician leadership in creating a culture of safety,” Int. J. Radiat. Oncol., Biol., Phys. 79, 1287–1289 (2011). 10.1016/j.ijrobp.2010.12.004 [DOI] [PubMed] [Google Scholar]
  • 80.Williamson J. F., Dunscombe P. B., Sharpe M. B., Thomadsen B. R., Purdy J. A., and Deye J. A., “Quality assurance needs for modern image-based radiotherapy: Recommendations from 2007 interorganizational symposium on ‘Quality assurance of radiation therapy: Challenges of advanced technology,’” Int. J. Radiat. Oncol., Biol., Phys. 71, S2–S12 (2008). 10.1016/j.ijrobp.2007.08.080 [DOI] [PubMed] [Google Scholar]
  • 81.Kubo H. D., Glasgow G. P., Pethel T. D., Thomadsen B. R., and Williamson J. F., “High dose-rate brachytherapy treatment delivery: Report of the AAPM Radiation Therapy Committee Task Group No. 59,” Med. Phys. 25, 375–403 (1998). 10.1118/1.598232 [DOI] [PubMed] [Google Scholar]
  • 82.Pawlicki T., Dunscombe P. B., Mundt A. J., and Scalliet P., Quality and Safety in Radiotherapy (Taylor & Francis, New York, NY, 2010). [Google Scholar]
  • 83.Purdy J. A. et al. , “Medical accelerator safety considerations: Report of AAPM Radiation Therapy Committee Task Group No. 35,” Med. Phys. 20, 1261–1275 (1993). 10.1118/1.596977 [DOI] [PubMed] [Google Scholar]
  • 84.Mageras G. S., Kutcher G. J., Leibel S. A., Zelefsky M. J., Melian E., Mohan R., and Fuks Z., “A method of incorporating organ motion uncertainties into three-dimensional conformal treatment plans,” Int. J. Radiat. Oncol., Biol., Phys. 35, 333–342 (1996). 10.1016/0360-3016(96)00008-9 [DOI] [PubMed] [Google Scholar]
  • 85.Dunscombe P. B., Iftody S., Ploquin N., Ekaette E. U., and Lee R. C., “The equivalent uniform dose as a severity metric for radiation treatment incidents,” Radiother. Oncol. 84, 64–66 (2007). 10.1016/j.radonc.2007.05.024 [DOI] [PubMed] [Google Scholar]
  • 86.Rangel A., Ploquin N., Kay I., and Dunscombe P., “Towards an objective evaluation of tolerances for beam modeling in a treatment planning system,” Phys. Med. Biol. 52, 6011–6025 (2007). 10.1088/0031-9155/52/19/020 [DOI] [PubMed] [Google Scholar]
  • 87.Low D. A., Moran J. M., Dempsey J. F., Dong L., and Oldham M., “Dosimetry tools and techniques for IMRT,” Med. Phys. 38, 1313–1338 (2011). 10.1118/1.3514120 [DOI] [PubMed] [Google Scholar]
  • 88.Mahan S. L., Chase D. J., and Ramsey C. R., “Technical Note: Output and energy fluctuations of the tomotherapy Hi-Art helical tomotherapy system,” Med. Phys. 31, 2119–2120 (2004). 10.1118/1.1763007 [DOI] [PubMed] [Google Scholar]
  • 89.Pawlicki T., Yoo S., Court L. E., McMillan S. K., Rice R. K., Russell J. D., Pacyniak J. M., Woo M. K., Basran P. S., Shoales J., and Boyer A. L., “Moving from IMRT QA measurements toward independent computer calculations using control charts,” Radiother. Oncol. 89, 330–337 (2008). 10.1016/j.radonc.2008.07.002 [DOI] [PubMed] [Google Scholar]
  • 90.Pawlicki T., Whitaker M., and Boyer A. L., “Statistical process control for radiotherapy quality assurance,” Med. Phys. 32, 2777–2786 (2005). 10.1118/1.2001209 [DOI] [PubMed] [Google Scholar]
  • 91.Thomadsen B., Lin S. W., Laemmrich P., Waller T., Cheng A., Caldwell B., Rankin R., and Stitt J., “Analysis of treatment delivery errors in brachytherapy using formal risk analysis techniques,” Int. J. Radiat. Oncol., Biol., Phys. 57, 1492–1508 (2003). 10.1016/S0360-3016(03)01622-5 [DOI] [PubMed] [Google Scholar]
  • 92.Ortiz López P., “Tools for risk assessment in radiation therapy,” Ann. ICRP 41, 197–207 (2012). 10.1016/j.icrp.2012.06.025 [DOI] [PubMed] [Google Scholar]
  • 93.Clark B. G., Brown R. J., Ploquin J. L., Kind A. L., and Grimard L., “The management of radiation treatment error through incident learning,” Radiother. Oncol. 95, 344–349 (2010). 10.1016/j.radonc.2010.03.022 [DOI] [PubMed] [Google Scholar]
  • 94.Yeung T. K., Bortolotto K., Cosby S., Hoar M., and Lederer E., “Quality assurance in radiotherapy: Evaluation of errors and incidents recorded over a 10 year period,” Radiother. Oncol. 74, 283–291 (2005). 10.1016/j.radonc.2004.12.003 [DOI] [PubMed] [Google Scholar]
  • 95.Barthelemy-Brichant N., Sabatier J., Dewe W., Albert A., and Deneufbourg J. M., “Evaluation of frequency and type of errors detected by a computerized record and verify system during radiation treatment,” Radiother. Oncol. 53, 149–154 (1999). 10.1016/S0167-8140(99)00141-3 [DOI] [PubMed] [Google Scholar]
  • 96.Fraass B. A., Lash K. L., Matrone G. M., Volkman S. K., McShan D. L., Kessler M. L., and Lichter A. S., “The impact of treatment complexity and computer-control delivery technology on treatment delivery errors,” Int. J. Radiat. Oncol., Biol., Phys. 42, 651–659 (1998). 10.1016/S0360-3016(98)00244-2 [DOI] [PubMed] [Google Scholar]
  • 97.Ekaette E., Lee R. C., Cooke D. L., Iftody S., and Craighead P., “Probabilistic fault tree analysis of a radiation treatment system,” Risk Anal. 27, 1395–1410 (2007). 10.1111/j.1539-6924.2007.00976.x [DOI] [PubMed] [Google Scholar]
  • 98.Dunscombe P. B., Ekaette E. U., Lee R. C., and Cooke D. L., “Taxonometric applications in radiotherapy incident analysis,” Int. J. Radiat. Oncol., Biol., Phys. 71, S200–S203 (2008). 10.1016/j.ijrobp.2007.06.085 [DOI] [PubMed] [Google Scholar]
  • 99.Logan T. J., “Error prevention as developed in airlines,” Int. J. Radiat. Oncol., Biol., Phys. 71, S178–S181 (2008). 10.1016/j.ijrobp.2007.09.040 [DOI] [PubMed] [Google Scholar]
  • 100.Ciocca M., Cantone M. C., Veronese I., Cattani F., Pedroli G., Molinelli S., Vitolo V., and Orecchia R., “Application of failure mode and effects analysis to intraoperative radiation therapy using mobile electron linear accelerators,” Int. J. Radiat. Oncol., Biol., Phys. 82, e305–e311 (2012). 10.1016/j.ijrobp.2011.05.010 [DOI] [PubMed] [Google Scholar]
  • 101.Novak P., Moros E. G., Straube W. L., and Myerson R. J., “Treatment delivery software for a new clinical grade ultrasound system for thermoradiotherapy,” Med. Phys. 32, 3246–3256 (2005). 10.1118/1.2064848 [DOI] [PubMed] [Google Scholar]
  • 102.Israelski E. W. and Muto W. H., “Human factors risk management as a way to improve medical device safety: A case study of the therac 25 radiation therapy system,” Jt. Comm. J. Qual. Patient Saf. 30, 689–695 (2004). [DOI] [PubMed] [Google Scholar]
  • 103.Clark B. G., Brown R. J., Ploquin J., and Dunscombe P., “Patient safety improvements in radiation treatment through 5 years of incident learning,” Pract. Radiat. Oncol. 3, 157–163 (2013). 10.1016/j.prro.2012.08.001 [DOI] [PubMed] [Google Scholar]
  • 104.NPSF, Patient Safety Dictionary, http://www.npsf.org/?page=dictionaryae&terms=.
  • 105.D’Souza N., Holden L., Robson S., Mah K., Di Prospero L., Wong C. S., Chow E., and Spayne J., “Modern palliative radiation treatment: Do complexity and workload contribute to medical errors?,” Int. J. Radiat. Oncol., Biol., Phys. 84, e43–e48 (2012). 10.1016/j.ijrobp.2012.02.026 [DOI] [PubMed] [Google Scholar]
  • 106.Margalit D. N., Chen Y. H., Catalano P. J., Heckman K., Vivenzio T., Nissen K., Wolfsberger L. D., Cormack R. A., Mauch P., and Ng A. K., “Technological advancements and error rates in radiation therapy delivery,” Int. J. Radiat. Oncol., Biol., Phys. 81, e673–e679 (2011). 10.1016/j.ijrobp.2011.04.036 [DOI] [PubMed] [Google Scholar]
  • 107.Arnold A., Delaney G. P., Cassapi L., and Barton M., “The use of categorized time-trend reporting of radiation oncology incidents: A proactive analytical approach to improving quality and safety over time,” Int. J. Radiat. Oncol., Biol., Phys. 78, 1548–1554 (2010). 10.1016/j.ijrobp.2010.02.029 [DOI] [PubMed] [Google Scholar]
  • 108.Bissonnette J. P. and Medlam G., “Trend analysis of radiation therapy incidents over seven years,” Radiother. Oncol. 96, 139–144 (2010). 10.1016/j.radonc.2010.05.002 [DOI] [PubMed] [Google Scholar]
  • 109.Huang G., Medlam G., Lee J., Billingsley S., Bissonnette J. P., Ringash J., Kane G., and Hodgson D. C., “Error in the delivery of radiation therapy: Results of a quality assurance review,” Int. J. Radiat. Oncol., Biol., Phys. 61, 1590–1595 (2005). 10.1016/j.ijrobp.2004.10.017 [DOI] [PubMed] [Google Scholar]
  • 110.Mutic S., Brame R. S., Oddiraju S., Parikh P., Westfall M. A., Hopkins M. L., Medina A. D., Danieley J. C., Michalski J. M., El Naqa I. M., Low D. A., and Wu B., “Event (error and near-miss) reporting and learning system for process improvement in radiation oncology,” Med. Phys. 37, 5027–5036 (2010). 10.1118/1.3471377 [DOI] [PubMed] [Google Scholar]
  • 111.Marks L. B., Light K. L., Hubbs J. L., Georgas D. L., Jones E. L., Wright M. C., Willett C. G., and Yin F. F., “The impact of advanced technologies on treatment deviations in radiation treatment delivery,” Int. J. Radiat. Oncol., Biol., Phys. 69, 1579–1586 (2007). 10.1016/j.ijrobp.2007.08.017 [DOI] [PubMed] [Google Scholar]
  • 112.Macklis R. M., Meier T., and Weinhous M. S., “Error rates in clinical radiotherapy,” J. Clin. Oncol. 16, 551–556 (1998). [DOI] [PubMed] [Google Scholar]
  • 113.Calandrino R., Cattaneo G. M., Fiorino C., Longobardi B., Mangili P., and Signorotto P., “Detection of systematic errors in external radiotherapy before treatment delivery,” Radiother. Oncol. 45, 271–274 (1997). 10.1016/S0167-8140(97)00095-9 [DOI] [PubMed] [Google Scholar]
  • 114.Abt Study of Medical Physicist Work Values for Radiation Oncology Physics Services: Round II (Final Report) College Park, MD, 2003, http://aapm.org/pubs/reports/ABTReport.pdf. [DOI] [PubMed]
  • 115.Klein E. E., “A grid to facilitate physics staffing justification,” J. Appl. Clin. Med. Phys. 11, 263–273 (2010). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 116.American College of Radiology (ACR), Radiation Oncology Accreditation Program Requirements, ACR, Reston, VA,2012, http://www.acr.org/~/media/ACR/Documents/Accreditation/RO/Requirements.pdf.
  • 117.Chera B. S., Jackson M., Mazur L. M., Adams R., Chang S., Deschesne K., Cullip T., and Marks L. B., “Improving quality of patient care by improving daily practice in radiation oncology,” Sem. Radiat. Oncol. 22, 77–85 (2012). 10.1016/j.semradonc.2011.09.002 [DOI] [PubMed] [Google Scholar]
  • 118.Lee W. R., Roach M. III, Michalski J., Moran B., and Beyer D., “Interobserver variability leads to significant differences in quantifiers of prostate implant adequacy,” Int. J. Radiat. Oncol., Biol., Phys. 54, 457–461 (2002). 10.1016/S0360-3016(02)02950-4 [DOI] [PubMed] [Google Scholar]
  • 119.Gregoire V., Levendag P., Ang K. K., Bernier J., Braaksma M., Budach V., Chao C., Coche E., Cooper J. S., Cosnard G., Eisbruch A., El-Sayed S., Emami B., Grau C., Hamoir M., Lee N., Maingon P., Muller K., and Reychler H., “CT-based delineation of lymph node levels and related CTVs in the node-negative neck: DAHANCA, EORTC, GORTEC, NCIC, RTOG consensus guidelines,” Radiother. Oncol. 69, 227–236 (2003). 10.1016/j.radonc.2003.09.011 [DOI] [PubMed] [Google Scholar]
  • 120.Kaus M. R., Brock K. K., Pekar V., Dawson L. A., Nichol A. M., and Jaffray D. A., “Assessment of a model-based deformable image registration approach for radiation therapy planning,” Int. J. Radiat. Oncol., Biol., Phys. 68, 572–580 (2007). 10.1016/j.ijrobp.2007.01.056 [DOI] [PubMed] [Google Scholar]
  • 121.Kapanen M., Tenhunen M., Parkkinen R., Sipila P., and Jarvinen H., “The influence of output measurement time interval and tolerance on treatment dose deviation in photon external beam radiotherapy,” Phys. Med. Biol. 51, 4857–4867 (2006). 10.1088/0031-9155/51/19/009 [DOI] [PubMed] [Google Scholar]
  • 122.Constantinou C. and Sternick E. S., “Reduction of the ‘horns’ observed on the beam profiles of a 6-MV linear accelerator,” Med. Phys. 11, 840–842 (1984). 10.1118/1.595572 [DOI] [PubMed] [Google Scholar]
  • 123.Rangel A. and Dunscombe P., “Tolerances on MLC leaf position accuracy for IMRT delivery with a dynamic MLC,” Med. Phys. 36, 3304–3309 (2009). 10.1118/1.3134244 [DOI] [PubMed] [Google Scholar]
  • 124.Bayouth J. E., “Siemens multileaf collimator characterization and quality assurance approaches for intensity-modulated radiotherapy,” Int. J. Radiat. Oncol., Biol., Phys. 71, S93–S97 (2008). 10.1016/j.ijrobp.2007.07.2394 [DOI] [PubMed] [Google Scholar]
  • 125.Liu C., Simon T. A., Fox C., Li J., and Palta J. R., “Multileaf collimator characteristics and reliability requirements for IMRT Elekta system,” Int. J. Radiat. Oncol., Biol., Phys. 71, S89–S92 (2008). 10.1016/j.ijrobp.2007.07.2392 [DOI] [PubMed] [Google Scholar]
  • 126.LoSasso T., “IMRT delivery system QA,” in Intensity Modulated Radiation Therapy: The State of the Art, Medical Physics Monograph Vol. 29, edited by Palta J. and Mackie T. R. (Medical Physics Publishing, Madison, WI, 2001), pp. 561–591. [Google Scholar]
  • 127.LoSasso T., Chui C.-S., and Ling C. C., “Comprehensive quality assurance for the delivery of intensity modulated radiotherapy with a multileaf collimator used in the dynamic mode,” Med. Phys. 28, 2209–2219 (2001). 10.1118/1.1410123 [DOI] [PubMed] [Google Scholar]
  • 128.Mu G., Ludlum E., and Xia P., “Impact of MLC leaf position errors on simple and complex IMRT plans for head and neck cancer,” Phys. Med. Biol. 53, 77–88 (2008). 10.1088/0031-9155/53/1/005 [DOI] [PubMed] [Google Scholar]
  • 129.Keall P. J., Mageras G. S., Balter J. M., Emery R. S., Forster K. M., Jiang S. B., Kapatoes J. M., Low D. A., Murphy M. J., Murray B. R., Ramsey C. R., Van Herk M. B., Vedam S. S., Wong J. W., and Yorke E., “The management of respiratory motion in Radiation Oncology report of AAPM Task Group 76,” Med. Phys. 33, 3874–3900 (2006). 10.1118/1.2349696 [DOI] [PubMed] [Google Scholar]
  • 130.Luo W., Li J., R. A. Price, Jr., Chen L., Yang J., Fan J., Chen Z., McNeeley S., Xu X., and Ma C. M., “Monte Carlo based IMRT dose verification using MLC log files and R/V outputs,” Med. Phys. 33, 2557–2564 (2006). 10.1118/1.2208916 [DOI] [PubMed] [Google Scholar]
  • 131.Litzenberg D. W., Moran J. M., and Fraass B. A., “Verification of dynamic and segmental IMRT delivery by dynamic log file analysis,” J. Appl. Clin. Med. Phys. 3, 63–72 (2002). 10.1120/1.1449362 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 132.Prabhakar R., Cramb J., and Kron T., “A feasibility study of using couch-based real time dosimetric device in external beam radiotherapy,” Med. Phys. 38, 6539–6552 (2011). 10.1118/1.3660773 [DOI] [PubMed] [Google Scholar]
  • 133.Mans A., Wendling M., Mcdermott L. N., Sonke J. J., Tielenburg R., Vijlbrief R., Mijnheer B., van Herk M., and Stroom J. C., “Catching errors with in vivo EPID dosimetry,” Med. Phys. 37, 2638–2644 (2010). 10.1118/1.3397807 [DOI] [PubMed] [Google Scholar]
  • 134.Stell A. M., Li J. G., Zeidan O. A., and Dempsey J. F., “An extensive log-file analysis of step-and-shoot intensity modulated radiation therapy segment delivery errors,” Med. Phys. 31, 1593–1602 (2004). 10.1118/1.1751011 [DOI] [PubMed] [Google Scholar]
  • 135.Sabet M., Rowshanfarzad P., Vial P., Menk F. W., and Greer P. B., “Transit dosimetry in IMRT with an a-Si EPID in direct detection configuration,” Phys. Med. Biol. 57, N295–N306 (2012). 10.1088/0031-9155/57/15/N295 [DOI] [PubMed] [Google Scholar]
  • 136.European Society for Therapeutic Radiology and Oncology, Guidelines for the Verification of IMRT: Booklet 9, ESTRO, Brussels,2008.
  • 137.Hartford A. C., Palisca M. G., Eichler T. J., Beyer D. C., Devineni V. R., Ibbott G. S., Kavanagh B., Kent J. S., Rosenthal S. A., Schultz C. J., Tripuraneni P., and Gaspar L. E., “American Society for Therapeutic Radiology and Oncology (ASTRO) and American College of Radiology (ACR) practice guidelines for intensity-modulated radiation therapy (IMRT),” Int. J. Radiat. Oncol., Biol., Phys. 73, 9–14 (2009). 10.1016/j.ijrobp.2008.04.049 [DOI] [PubMed] [Google Scholar]
  • 138.American College of Radiology, ACR—ASTRO Practice Guideline for Intensity Modulated Radiation Therapy (IMRT), ACR,2011. http://www.acr.org/Quality-Safety/Standards-Guidelines/Practice-Guidelines-by-Modality/Radiation-Oncology. [DOI] [PubMed]
  • 139.Ezzell G. A., Galvin J. M., Low D., Palta J. R., Rosen I., Sharpe M. B., Xia P., Xiao Y., Xing L., Yu C. X., IMRT Subcommittee, and AAPM Radiation Therapy Committee, “Guidance document on delivery, treatment planning, and clinical implementation of IMRT: Report of the IMRT Subcommittee of the AAPM Radiation Therapy Committee,” Med. Phys. 30, 2089–2115 (2003). 10.1118/1.1591194 [DOI] [PubMed] [Google Scholar]
  • 140.Svensson G. K., Baily N. A., Loevinger R., and Morton R. J., Physical Aspects of Quality Assurance in Radiation Therapy (American Association of Physicists in Medicine, American Institute of Physics, New York, NY, 1984). [Google Scholar]
  • 141.See supplementary material at http://dx.doi.org/10.1118/1.4947547 E-MPHYA6-43-069605 for Appendixes C1–C3 and D–G.
