Introduction
Two Quality Control (QC) workshops were conducted in 2013 and 2014 as satellite meetings associated with the Annual Scientific Conference of the Australasian Association of Clinical Biochemists. The purpose of these workshops was to harmonise the approaches laboratories were taking to developing and implementing a laboratory QC policy. This document represents the collective views from those workshops and is intended to provide guidelines that would be useful as a starting point for harmonising laboratory QC policies under a common framework. One further implication of a harmonised approach to QC policies could be better identification and control of assays with intermediate imprecision and a reduction in the measurement uncertainty of those assays. The key areas of these policies are as follows:
Setting of quality standards
Selection of materials
Selection of concentrations
Setting (and re-setting) of QC targets
Setting (and re-setting) of QC limits
Selection of rules
Frequency of running QC
Response to out of range results
Other supporting QC activities (e.g. Average of Normals)
1. Setting of Quality Standards
The goal of the QC process is to ensure the quality of patient results by minimising the risk of issuing erroneous results that may lead to patient harm. Quality limits are set for each analyte to achieve this goal. Without selecting performance goals, it is impossible to determine the critical shift (measured in standard deviations (SD)) that the QC algorithm must detect, and hence to ensure that the selected QC algorithm is appropriate. Performance goals should be set based on the analyte and the clinical situation where it is used, and should be based on evidence.6–8 It is suggested that an integrated approach is used to set a target imprecision, based on a comparison of the laboratory's achieved analytical imprecision, optimal imprecision (a standard fraction of allowable performance), achievable imprecision (based on the instrument manufacturer's assay precision specifications) and state-of-the-art imprecision, i.e. The Royal College of Pathologists of Australasia Quality Assurance Programs (RCPA QAP)3 20th and 50th percentile coefficients of variation (CVs).
Performance goals are used to:
Guide the setting of target imprecision (where easily achievable)
Determine how well specific performance goals are being met
Clarify responsibility for improving incapable tests.
Guide selection of ‘agreed’ analyte imprecision for target SD
Determine the significance of a QC failure e.g. release of patient results.
Traditionally, reliability was improved by using an error budget in which the two components of error lie within the allowable limit of performance: the assay's usual imprecision, plus the size of a critical error that the QC algorithm detects with high probability. With such a budget, even when a QC run fails, patient results generally remain within the allowable limit and are still reportable. The statistical basis of the performance goal is that it is calculated to have a 90% chance of detecting an error that would cause 5% or more of reported patient results to be outside the allowable limit of performance.
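This 90%/5% framing rests on the standard critical systematic error calculation. The sketch below illustrates it with assumed numbers (TEA, bias and SD values are illustrative only, not from the workshops):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def critical_shift(tea, bias, sd):
    """Critical systematic error (in SDs): the shift beyond which more
    than 5% of patient results fall outside the allowable limit (TEA).
    Standard formula: (TEA - |bias|)/SD - 1.65."""
    return (tea - abs(bias)) / sd - 1.65

# Illustration with assumed numbers: TEA = 10 units, no bias, SD = 2 units
dse = critical_shift(10.0, 0.0, 2.0)               # 3.35 SD
# With the mean shifted by dse SDs, the fraction of results beyond TEA:
frac_beyond = 1.0 - norm_cdf(10.0 / 2.0 - dse)     # ~5%
```

A QC algorithm designed against this goal is then sized to detect a shift of `dse` SDs with at least 90% probability.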
Definitions
Capability of Assay1: Assays are classified as capable based on the sigma metric:

| Sigma | Capability |
| --- | --- |
| Less than 3 | Incapable |
| Greater than or equal to 3 | Capable |
| Greater than or equal to 6 | Highly capable |

Six Sigma2: The number of standard deviations (SD) spanning the product specifications determines the defect rate and defects per million:

| SD range | Defect Rate (%) | Defects/Million |
| --- | --- | --- |
| ± 2 SD | 4.55 | 45,500 |
| ± 3 SD | 0.27 | 2,700 |
| ± 4 SD | 0.0063 | 63 |
| ± 5 SD | 0.000057 | 0.57 |
| ± 6 SD | 0.0000002 | 0.002 |

Sigma Metric or Assay Capability (Cps)1,2:

Sigma = (TEA − |Bias|) / SD

where TEA is the total error allowable (the allowable limit of performance), Bias is the difference between the measured mean and the target value, and SD is the analytical standard deviation, with all terms in the same units (or all expressed as percentages).

Critical Systematic Error4:

ΔSEc = ((TEA − |Bias|) / SD) − 1.65

the smallest systematic shift (in SDs) at which more than 5% of results fall outside the allowable limit of performance.

Critical Random Error4:

ΔREc = (TEA − |Bias|) / (1.96 × SD)

the factor by which imprecision can increase before more than 5% of results fall outside the allowable limit of performance.

Structure of QC Rules: QC rules of the form µi ± k×SDi, where µi is the target mean, k is a multiplier and SDi is the target SD; or of the form µu ± f×TEA, where f is some proportion (%) and TEA is the total error allowable (allowable limit of performance).

Westgard Rules5: QC rules often used for evaluation of QC data.
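The sigma metric and the capability classification in the definitions above can be sketched as follows (the TEA, bias and CV values in the example are assumed, for illustration only):

```python
def sigma_metric(tea, bias, sd):
    """Sigma metric: (TEA - |bias|) / SD, all in the same units (or all %)."""
    return (tea - abs(bias)) / sd

def capability(sigma):
    """Classify assay capability from the sigma metric, per the table above."""
    if sigma >= 6:
        return "highly capable"
    if sigma >= 3:
        return "capable"
    return "incapable"

# Assumed example: TEA = 6%, bias = 1%, CV = 1.2%  ->  sigma ~= 4.2
s = sigma_metric(6.0, 1.0, 1.2)
```

In practice the same calculation is repeated at each QC concentration level, since bias and CV commonly vary across the measuring range.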
S1.1 Every laboratory should have quality goals for each assay
C1.1 As the RCPA Allowable Limits of Performance (ALP)3 are generally based on biological variability goals, laboratories may use these limits as quality limits to determine assay capability (1), and to assess fitness for purpose.
However, laboratories using the RCPA limits for these purposes are cautioned that the RCPA allowable limits do not equate to a single 'standard' biological variability metric (e.g. biological variability total allowable error (TEA), or desirable imprecision). For some analytes the ALP equates to the biological variability total error, for others to minimum biological variability precision goals, for others to desirable biological variability precision goals, and so on. The AACB Working Party for Allowable Limits3 deemed it necessary to adopt ALPs based on varied biological variability goals across analytes so as not to unfairly penalise laboratories where current methodologies do not permit attainment of a fixed biological variability standard across all analytes. In using this stratified approach to biological variability goals, the link between goals based on clinical needs and current state-of-the-art instrument/method capabilities has been maintained.
2. Selection of Quality Control Materials
QC material is critical to the QC system.4 The selection of an appropriate matrix and vial size and the accurate preparation and effective storage of the material will heavily influence the efficacy of the QC process.
The key considerations in QC material selection and use are:
Matrix
Vial size
Expiry date
Number of levels
Unassayed vs assayed
Source
S2.1. The QC sample matrix should match the matrix of the patient sample being tested
C2.1 For commutability reasons, it is generally desirable to match the matrix of the QC material to the matrix of the patient sample type being measured, e.g. if measuring analytes in human serum, then human serum-based QC material should be used. Recognised exceptions are:
Blood gas QC – an aqueous matrix is more practical - so this is generally acceptable
Cerebrospinal fluid (CSF) – because of the low protein concentrations found in CSF, urine QC (which has similarly low protein concentrations as CSF, and comparable glucose concentrations) is an acceptable substitute.
Laboratories are also cautioned that it is inevitable that QC material will be supplemented in order to obtain abnormal levels. It is therefore recommended that laboratories use QC materials whose supplementation is from human sources in preference to non-human sources. Non-human sources may include enzyme supplementation from bovine, equine or porcine sources. Enzymes from such sources may have different Km values to their human enzyme equivalents; therefore, using QC materials with such supplementation may lead to detected shifts that do not affect patient samples or, worse still, may allow a shift that does affect patient samples to go unnoticed due to QC material non-commutability. Specific caution should also be exercised when a QC material has been supplemented with placental alkaline phosphatase rather than bone or liver alkaline phosphatase.
C2.2 Lyophilised material generally has a longer shelf life and is generally cheaper on a cost/mL basis than liquid-stable material. However, liquid-stable QC is the most desirable form of QC to purchase as it does not require reconstitution and hence eliminates reconstitution as a potential source of error.
A further consideration is that liquid-stable QC may require storage at temperatures of −20°C or below, which may not be available in the laboratory, or sufficient freezer space may not be available. The risk of material spoilage is also greater for liquid frozen QC, e.g. if a freezer fails and the material undergoes a freeze-thaw cycle.
C2.3 Handling QC materials is crucial to analytical performance. Staff should adhere to validated and authorised procedures for preparation, use and storage of QC materials. Freezing QC materials may significantly change performance characteristics and cause significant shifts in QC values.
C2.4 QC Material should be chosen with the longest possible expiry date available. This offers the greatest ability to monitor assay performance for the longest possible interval without a QC lot change occurring. This also means that a laboratory would not have to go through the costly and time consuming exercise of QC material evaluation and target and limit setting as often.
C2.5 When considering vial size, laboratories need to balance two main issues – how much QC material volume they require daily versus the open-vial stability of the material. In general, the material needs to be fully consumed before the QC open-vial stability is exceeded, otherwise:
○ Remaining QC vial volume would need to be discarded adding to the cost of material; or,
○ If the laboratory continued to use the material beyond its open-vial stability, some or all of the analyte concentrations would deteriorate, leading to unnecessary troubleshooting.
Further precautions are extended to laboratories that choose to re-aliquot and freeze some of their QC materials (e.g. due to low volume use). Such laboratories should:
○ Assign a QC target to the 'frozen re-aliquoted QC' and not to the QC in its primary form
○ Always ensure that the QC is used consistently i.e. always use a re-aliquoted QC vial and not a mixture of the re-aliquoted QC on some days and freshly prepared QC in its primary form on other days.
S2.2. A laboratory needs to ensure that QC materials have concentrations that are within the boundaries of their method’s measuring range, without dilution
There should be at least two distinct QC sample concentrations:
○ One that is within the patient reference interval for the given analyte
○ One that is at an important clinical cut-off for the analyte (if available/applicable), or
○ One that is within the pathological range for the analyte – either higher or lower, depending on the analyte.
C2.6 The number of QC levels required/recommended depends on the type of analyte-class that the laboratory measures.
For general chemistry analytes e.g. electrolytes, liver function tests (LFT), the use of two QC levels is deemed acceptable.
For immunoassays e.g. testosterone, alpha-foetoprotein (AFP), the use of three QC levels (low, mid-level, high) is desirable, as it is acknowledged that these assays generally require a wider measuring range.
For therapeutic drugs, the use of three QC levels is desirable, so that all three drug concentration ranges are covered (therapeutic, sub-therapeutic and toxic).
For some analytes e.g. thyroid-stimulating hormone (TSH), oestradiol, prostate-specific antigen (PSA) – it was considered that a fourth level of QC is desirable, so that assay performance at concentrations specific to certain patient diagnostic groups can be specifically assessed and monitored:
○ PSA – an ultra-low PSA QC with a level of 0.02 µg/L could monitor PSA in the post radical prostatectomy range
○ Oestradiol – an ultra-low oestradiol QC with a level of about 50 pmol/L could assess assay performance in the ovulation range, for patients undergoing in vitro fertilisation.
C2.7 Laboratories may choose to use supplied assayed or unassayed QC material. The choice is up to the laboratory but assayed QC materials do provide laboratories with a reasonable target source to start with and an ability to cross-check their accuracy against their relevant peer groups.
C2.8 It is recommended that third party QC should be used wherever possible and practical, over QC that is produced by the method manufacturer. However, for a long time no viable third party QC alternatives were available for some analytes e.g. adrenocorticotrophic hormone (ACTH), C-Peptide, sex hormone binding globulin (SHBG), homocysteine and holotranscobalamin. However, this issue is slowly changing with new QC products coming onto the market; hence laboratories should keep abreast of these changes.
Laboratories are also cautioned that QCs manufactured by the manufacturer of their method may not have a sufficiently complex matrix to rigorously assess the performance of their assay.
If a manufacturer-sourced QC material for vitamin D is mainly comprised of unbound or free vitamin D, then this material does not effectively check the assay's capability to release bound vitamin D from binding proteins, which is a critical step in measuring 25-hydroxy vitamin D.
If a manufacturer QC for triglyceride is mainly comprised of glycerol instead of triglyceride, then this QC does not check on the assay’s capability to hydrolyse triglyceride to glycerol in the presence of lipoprotein lipase.
S2.3. For Occupational Health and Safety reasons, the QC material selected must be hepatitis C, hepatitis B and human immunodeficiency virus negative and safe for laboratory personnel handling
3. Setting (and Re-Setting) of Quality Control Material Targets
Target means for QC materials should be set using the laboratory analytical mean determined using a relevant published approved guideline.
S3.1. Target SD for QC material must be determined using the laboratory analytical SD following the Clinical and Laboratory Standards Institute (CLSI) EP15-A3 guideline.9
C3.1 There should be an information review to confirm the suitability of laboratory analytical imprecision and its ability to meet various performance goals before adopting it as the QC target SD for a test. This review should include consideration of:
The QC material package insert data, noting the precaution that the manufacturer’s range is often indicative of ±3 SD
The instrument or reagent manufacturer's stated assay total CV capabilities at different concentration ranges
The relevant biological variability goals for optimal, minimum and desirable imprecision
The RCPA QAP ALPs, to determine the sigma metric (e.g. 6-sigma, 5-sigma, 4-sigma) that would be attained if the achieved CV is adopted as the CV goal
The state-of-the-art performance goals, for example against the 20th and 50th percentile CVs attained on the RCPA QAP programs for given analytes.
If a laboratory were to simply adopt the analytical imprecision achieved using the CLSI EP15-A3 guideline9 as the analyte QC target SD, it should be aware that this approach does not guarantee that the laboratory will meet any particular performance goals.
C3.2 The initial data set for determining the target mean may contain as few as 20 data points collected over at least 20 separate days.10 Both the accuracy of the target and the imprecision estimate are expected to improve with more data; reliable estimation of the target SD requires at least 60–80 data points at each level.11 The initially set target mean and imprecision should therefore be reviewed once more data points have been accumulated; it is suggested that a review be undertaken at one month, and again after 60 points have been accumulated.
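The need for 60–80 points follows from the sampling uncertainty of an SD estimate. A minimal sketch, using the standard normal-theory approximation for the relative standard error of a sample SD:

```python
from math import sqrt

def sd_relative_se(n):
    """Approximate relative standard error of a sample SD from n points:
    SE(s)/sigma ~= 1 / sqrt(2*(n - 1)) (normal-theory approximation)."""
    return 1.0 / sqrt(2.0 * (n - 1))

# 20 points: ~16% relative uncertainty in the SD estimate;
# 80 points: ~8%, i.e. roughly half the uncertainty
se_20 = sd_relative_se(20)
se_80 = sd_relative_se(80)
```

A target SD uncertain by 16% materially distorts the false rejection and error detection rates of any rule built on it, which is why the initial estimate should be treated as provisional.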
S3.2. Review of QC running mean and SD compared to set targets should occur regularly and it is suggested that this review be at least monthly
C3.3 If there is a significant performance shift with changes in reagent or calibrator lots, the laboratory should confirm if the change is restricted to QC material or is reflected in patient and QAP material. A process for assessing significant changes is given in CLSI EP26-A.12
Multiple Instruments Measuring the Same Analyte in a Laboratory or Network of Laboratories
If the laboratory uses more than one instrument to perform a test, there should already be an assessment of bias between instruments. The CLSI EP15-A3 procedure9 should be followed for each instrument where there is significant bias. Otherwise, all instruments can use the centrally determined value.
S3.3. QC target means should be managed centrally and should be the same value wherever possible. This requires that significant between-analyser bias be eliminated
C3.4 Reagent lot-to-lot variation may cause significant shifts in QC results without affecting patients, and reflects non-commutability of the material. If a single lot cannot be guaranteed for the network, use of different target means will often be required.
S3.4. Adjustments to analyte QC target mean and SD should only be made by a limited number of authorised personnel, and only where necessary (as a last resort)
4. Selection of Quality Control Rules
The laboratory should select an appropriate QC algorithm to ensure good error detection of critical shifts in performance. Where analytical imprecision is poor relative to performance goals, good error detection is hard to achieve regardless of the QC algorithm used.
QC rules requiring more than one QC examination per QC concentration level are rarely if ever actually used in practice and should not be considered as viable candidate QC rules. If quality requirements demand better performance than QC rules evaluating a single measurement per QC concentration can provide, then the solution is to evaluate the QC rule per se more frequently, not increase the number of replicates used in the rule.
Assay capability should be used to determine the QC rules needed. Capability can be expressed in different ways including sigma metrics or assay capability (Cps).1,2
S4.1. QC rule selection should depend on the sigma metric (performance capability) of the test method relative to the quality requirements of the analyte
C4.1 Any human re-evaluation of QC rules acceptance/rejection criteria is inappropriate and merely serves to increase unwanted variation in the process and decreases the predicted performance characteristics of the QC rule. If additional, unambiguous, objective acceptance/rejection criteria that are not part of the original QC rule can be shown to improve QC performance, then these additional criteria should become part of the formal QC rule.
C4.2 QC rules for high capability processes should differ from QC rules for low capability processes.
QC rules for high capability processes i.e. sigma >6 should have a lower false rejection rate than rules for low capability processes.
QC rules for low capability processes may need to be evaluated more frequently than QC rules for high capability processes.
Capability is a common way to assess tests. The smaller the ratio of allowable performance to analytical imprecision, the harder it is to achieve good error detection of critical size errors.
Stable assays can produce excellent results even if they are incapable i.e. sigma <3.
C4.3 A QC rule’s false rejection rate can be characterised in two different ways; the probability of false rejection and the average length of time between false rejections.
The probability of false rejection depends on the QC rule and number of QC results evaluated by the rule.
The average length of time between false rejections depends on a QC rule’s false rejection probability and on the frequency of QC testing.
Both the probability of false rejection and the average length of time between false rejections are important performance characteristics and should be considered when selecting a QC rule.
Power functions are a common way of evaluating and comparing the QC performance characteristics of different QC rules. However, published power function graphs assume that the QC rule’s target mean is set to the instrument’s stable in-control mean and the QC rule’s target SD is set to the instrument’s stable in-control analytical SD. If a QC rule’s target mean is set to a value different from the instrument’s stable in-control mean and/or the QC rule’s target SD is set to a value different from the instrument’s stable in-control analytical SD, then the published power function for the QC rule is not a correct representation of the error detection ability of the QC rule.
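A small Monte Carlo sketch can illustrate this caveat. The 13s rule and all numbers below are assumed for illustration; the point is only that mis-set targets change the rule's real operating characteristics:

```python
import random

def rejection_rate(true_mean, true_sd, target_mean, target_sd,
                   n=200_000, seed=1):
    """Monte Carlo rejection rate of a 13s rule when the rule's target
    mean/SD differ from the instrument's stable in-control mean/SD."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(true_mean, true_sd)          # in-control QC result
        if abs((x - target_mean) / target_sd) > 3.0:
            hits += 1
    return hits / n

# Targets match the in-control state: false rejection ~0.27%
p_ok = rejection_rate(100.0, 2.0, 100.0, 2.0)
# Target SD set 50% too wide: false rejections all but vanish, and the
# same inflation equally blunts detection of genuine shifts
p_wide = rejection_rate(100.0, 2.0, 100.0, 3.0)
```

The same simulation with a shifted `true_mean` would show the corresponding loss of error detection, which is what the published power functions no longer describe once targets are mis-set.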
C4.4 For Westgard rule algorithms, high error detection for a QC algorithm only occurs where the analytical imprecision is small compared to the allowable error. In addition, the actual imprecision must be close to the QC target imprecision for Westgard rules to have the stated error detection. The lower the error detection of the QC algorithm used, the more likely it is that more than one QC run will be required to detect a critical shift in performance, and therefore that patient results with critical errors may be reported before a QC run flags the problem.
C4.5 Newer QC algorithms have focused on minimising the total number of patient results reported after a critical shift, including patient results reported before the QC run which flagged the error. They are based on a warning QC rule, with a more stringent QC run immediately after a QC flag, and using statistical analysis of the number of SDs (scatter around the mean) for each QC value.
Examples of QC Rules
The following are examples of some different rules for different capabilities.
Cps <4: 13s, 22s, R4s with n=4 (two levels, twice per shift); or
Cps 4–6: 13s, 22s, R4s with n=2, once per shift; or
Cps >6: 13s rule with n=2, once per shift.
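A minimal sketch of the three Westgard rules named above, evaluated on QC results expressed as z-scores (SDs from the target mean); rule definitions follow the standard multirule conventions:

```python
def westgard_flags(z):
    """Evaluate 13s, 22s and R4s rules on a list of z-scores (QC results
    in SDs from the target mean), oldest first; returns triggered rules."""
    flags = []
    # 13s: any single result beyond +/-3 SD
    if any(abs(v) > 3.0 for v in z):
        flags.append("13s")
    # 22s: two consecutive results beyond 2 SD on the same side
    if any((a > 2.0 and b > 2.0) or (a < -2.0 and b < -2.0)
           for a, b in zip(z, z[1:])):
        flags.append("22s")
    # R4s: consecutive results on opposite sides, each beyond 2 SD
    if any((a > 2.0 and b < -2.0) or (a < -2.0 and b > 2.0)
           for a, b in zip(z, z[1:])):
        flags.append("R4s")
    return flags
```

For n=4 designs the same function is simply applied to the four z-scores of the run (two levels, two occasions).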
A QC rule for highly capable (sigma >4) assays has been proposed by Parvin.13
Notation:
μu = the unbiased concentration of a QC material
μi = instrument’s mean concentration for the QC material
|Bias| = | μi – μu|
SDi = instrument’s standard deviation for the QC material
TEA = allowable total error specification
The QC rule is defined to have rejection limits given as:
○ µu ± f×TEA (if TEA is given in analyte result units)
○ µu × (1 ± f×TEA) (if TEA is given as a percentage)
where f is a constant defining a fixed fraction of TEA. The QC rule rejects if any QC measurement exceeds the limits. Values of f (e.g. f = 0.6) were identified that met both of the following criteria:
○ Pfr ≤ 0.01 (probability of false rejection)
○ Ped(SEc) ≥ 0.90 (probability of detecting the critical systematic error)
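The TEA-based rejection limits above can be sketched directly; the percentage branch assumes TEA is quoted as a percent figure (e.g. 10 for 10%):

```python
def tea_limits(mu_u, tea, f=0.6, tea_is_percent=False):
    """Rejection limits mu_u +/- f*TEA (TEA in result units), or
    mu_u*(1 +/- f*TEA/100) when TEA is expressed as a percentage."""
    delta = mu_u * f * tea / 100.0 if tea_is_percent else f * tea
    return mu_u - delta, mu_u + delta

def tea_rule_rejects(qc_values, mu_u, tea, f=0.6, tea_is_percent=False):
    """Reject if any QC measurement exceeds the limits."""
    lo, hi = tea_limits(mu_u, tea, f, tea_is_percent)
    return any(v < lo or v > hi for v in qc_values)
```

Note that the limits are anchored at the unbiased concentration µu, not at the instrument's own mean, which is what distinguishes this rule from the µi ± k×SD form.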
5. Frequency of Running QC
The question of when to run QC samples on different analysers is complex and situation-dependent; however, there are a few general comments that apply in all situations:
Make the time between QC evaluations shorter than the time needed to correct results, i.e. ensure problems can be fixed before adverse clinical action is taken on erroneous results.
Know the number of results between QC evaluations.
S5.1. QC Samples should be run anytime something may have compromised the integrity of the clinical diagnostic instrumentation, including:
Reagent lot changes
Routine instrument maintenance
Calibration
New reagent packs
Anytime the instrument is opened and something is adjusted or changed
If trends in patient results have shifted e.g. if patient running means or medians have drifted.
C5.1 Ideally, a QC evaluation should also be conducted before an event that may change the system state (calibration or maintenance). It is acknowledged, however, that while such diligence is desirable, logistically it may not be possible. Workshop participants agreed that events that change the system state should be followed by a QC evaluation prior to restarting patient specimen evaluation.
C5.2 Ending patient specimen testing with a measurement and evaluation of QC material provides the opportunity to identify an out-of-control condition that may have started since the last good QC evaluation, and to correct any patient specimen results that were compromised by it. In the absence of ending specimen testing with a QC evaluation, the condition may not be detected until QC next starts up, which could prevent timely corrections from being made.
C5.3 For low volume testing (<200 patient specimens per day), testing before each work shift (nominally 8 hour) is deemed sufficient. Recall that it is important at the end of the shift to bracket the end of the patient samples with QC samples. For higher volume testing it is suggested that the consensus frequency is set at evaluating QC results every 250 patient specimens.
S5.2. QC frequency should be adjusted for process capability if possible i.e. with more frequent QC conducted for low sigma tests
C5.4 QC frequency may need to be adjusted for critical tests, or tests whose results are acted upon quickly.
C5.5 Running QC in smaller batches more often is more useful than running fewer, larger batches.
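The trade-off in this section reduces to simple exposure arithmetic; the throughput figures below are assumed for illustration:

```python
def results_at_risk(throughput_per_hour, hours_between_qc):
    """Upper bound on patient results reported between two QC events:
    if the later QC fails, these are the results that may need review."""
    return int(throughput_per_hour * hours_between_qc)

def qc_interval_hours(throughput_per_hour, max_exposed=250):
    """Longest QC interval keeping exposure below max_exposed, using the
    consensus figure of one QC evaluation per 250 patient specimens."""
    return max_exposed / throughput_per_hour

# e.g. 50 samples/hour with QC every 5 hours -> up to 250 results exposed
exposed = results_at_risk(50, 5)
```

For low-sigma tests a smaller `max_exposed` is appropriate, which is the quantitative form of running QC more frequently for low capability processes.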
6. Response to Out-of-Range Results
Once an appropriate QC rule has been selected, i.e. the rule meets a laboratory's specified quality performance requirements, its acceptance/rejection decisions should be followed. The magnitude of the out-of-control condition should be estimated before correcting it; it is suggested that a few QC samples are run to establish the magnitude of the error. This should give an idea of the bias and imprecision profiles of the instrument in failure, although the number of QC samples required to establish this is unclear. Furthermore, the exact statistical procedure to estimate the magnitude of the error (and therefore the number and magnitude of patient results likely affected) is also not clearly established. The latter may be addressed by calculating the change in sigma value of the instrument in failure, using the QC results to estimate the likely number of defects (errors) per million.
Key Points
Do not just re-run QC unless it is part of a 12s automatic re-run rule. If the latter is true, check that there have not been any previous re-runs.
Avoid recalibration unless there is a good reason to do so.
S6.1. There must be a documented procedure for all staff to follow if there has been a QC rule failure
C6.1 Suggested Checklist
Are there any instrument flags?
Review the control chart for this analyte. Is this a new problem or is there a trend of some sort? Is a systematic or random error likely?
Review the QC and charts for other analytes. Is this isolated to a single chemistry or does it affect multiple chemistries?
Is maintenance up to date: daily, weekly, and monthly?
Check recent history. Has anything changed?
○ Are the reagents satisfactory (correct reagent, sufficient volume, shelf expiry date, on-board expiry date)? Open/reconstitute fresh reagent if a fault is identified with the current reagent.
○ Are the calibrators satisfactory? Have the correct values been used (lot number and assigned values, preparation, storage, expiry date)?
○ Check troubleshooting logs for recurrent instrument problems.
Is the QC material satisfactory? Too old? Made up incorrectly?
Any other instrument issues (probes, lamps, cuvettes, water bath)?
If all the above are satisfactory, then re-run the QC.
○ If it passes, continue testing.
○ If it fails, return to the start of the checklist.
Record details of the fault and corrective action undertaken.
Troubleshooting for a systematic error on QC
Check the accuracy base.
If another instrument of the same type is available, compare QC between the two instruments.
If patient means are readily available, check these for drift or shift.
Run assayed / assay manufacturer control material for troubleshooting purposes.
Run any available quality assurance samples which have been appropriately stored.
Compare against another laboratory.
Check pipettes are calibrated correctly.
Other
Water (laboratory purified water is within specifications for resistivity or conductivity).
Power supply (stable and not causing spikes that can lead to test-imprecision).
Temperature (including temperature in reaction chamber/cuvette and also ambient laboratory temperature).
Patient Re-run Protocol
S6.2. There must be a documented procedure for re-running any patient samples that were measured between the last successful QC sample and the first failed QC sample
C6.2 Retrospectively re-run samples between the recent failed QC evaluation and the last successful QC evaluation. There is a need to assess the magnitude of the differences, so re-run the last 10 samples. If all differences between pairs are less than the total error specification, there is no need to continue repeating samples; otherwise check the next 10, and keep going back until there is no significant difference between paired values. The total number of samples that need to be re-run is therefore up to the point where the differences between repeats were insignificant, i.e. less than the total error specification.
There is very little published information to suggest the batch size to retest. Should we therefore just repeat all the samples between the failed QC and the last known in-control QC? In a laboratory with low volume testing this may be the preferred option; however, in larger, high test volume laboratories this is impractical. The batches-of-10 approach therefore presents a compromise in terms of staff time costs, but is pragmatic, probably economical in terms of reagent cost, and presents a good opportunity to intercept potential errors in a timely manner.
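The batches-of-10 procedure described above can be sketched as follows (sample values in the test are illustrative; `tea` is the total error specification in result units):

```python
def samples_to_review(originals, reruns, tea, batch=10):
    """Work backwards from the failed QC in batches of `batch` samples,
    comparing each original result with its re-run. Stop at the first
    batch whose paired differences are all within the total error
    specification (TEA); everything more recent needs review.
    `originals`/`reruns` are ordered oldest-first, ending at the failed QC."""
    n = len(originals)
    reviewed, i = 0, n
    while i > 0:
        start = max(0, i - batch)
        pairs = zip(originals[start:i], reruns[start:i])
        if all(abs(o - r) < tea for o, r in pairs):
            break                      # this batch is clean: stop here
        reviewed = n - start           # batch had a significant difference
        i = start
    return reviewed
```

The return value is the count of the most recent samples whose reports should be checked against S6.3 below: those in batches containing at least one significant difference.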
S6.3. Retract and re-issue reports for all results that differ from the previous value by more than the total error specification
Measures of acceptable differences between the original results and the repeat results may include the following, and where CVa equates to the usual analytical CV for the analyte:
<2.33 CVa for a unidirectional QC problem i.e. assay shift up or assay shift down.
<2.77 CVa for a bidirectional QC problem i.e. an increase in assay imprecision.
<intra-individual biological variation.
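The CVa-based limits listed above can be sketched directly; the multipliers are those given in the text (2.33 ≈ 1.65×√2 for a one-directional shift, 2.77 ≈ 1.96×√2 for increased imprecision):

```python
def acceptable_difference(cva, problem="unidirectional"):
    """Acceptable original-vs-rerun difference as a multiple of the
    analytical CV (CVa): 2.33*CVa for a one-directional assay shift,
    2.77*CVa for a bidirectional (increased imprecision) problem."""
    k = 2.33 if problem == "unidirectional" else 2.77
    return k * cva

# Assumed example: CVa = 3% -> ~7.0% (shift) or ~8.3% (imprecision);
# larger differences suggest the original report needs correction
limit_shift = acceptable_difference(3.0)
limit_impr = acceptable_difference(3.0, "bidirectional")
```

Whichever criterion is chosen, it should be fixed in the documented re-run procedure rather than decided per incident.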
7. Other Supporting QC Activities
There may be other QC processes in use e.g. Average of Normals (AoN).14 This technique is useful for detecting systematic error for certain analytes and can provide additional QC information in real time. However, AoN is only applicable for some analytes and does require an algorithm that accesses patient results as they are generated. If it is used, it is important to define the following:
The principle of the algorithm.
Number of samples in each block used in the averaging and smoothing process.
Exclusion criteria to be applied for each analyte.
The power of error detection and false rejection for each analyte.
A documented procedure to follow if the AoN algorithm detects a shift. This may be to run a number of conventional QC samples and then follow a procedure to repeat patient samples such as given in Section 6 above.
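A minimal AoN sketch illustrating the elements listed above (block size, exclusion criteria, flagging threshold); all parameters are illustrative and must be tuned and validated per analyte, as the list requires:

```python
from collections import deque

class AverageOfNormals:
    """Minimal Average-of-Normals monitor: patient results within
    exclusion limits enter a moving block; a shift is flagged when the
    block mean drifts more than k standard errors from the expected
    population mean. All parameters are illustrative defaults."""

    def __init__(self, expected_mean, expected_sd, block=20, k=3.0):
        self.mu, self.sd, self.block, self.k = expected_mean, expected_sd, block, k
        # exclusion criteria: discard results likely to be pathological
        self.lo = expected_mean - 3 * expected_sd
        self.hi = expected_mean + 3 * expected_sd
        self.buf = deque(maxlen=block)

    def add(self, result):
        """Add one patient result; return True if a shift is flagged."""
        if not (self.lo <= result <= self.hi):
            return False               # excluded from the average
        self.buf.append(result)
        if len(self.buf) < self.block:
            return False               # block not yet full
        mean = sum(self.buf) / self.block
        return abs(mean - self.mu) > self.k * self.sd / self.block ** 0.5
```

The error detection power and false rejection rate of such an algorithm depend on the block size, exclusion limits and k, and should be characterised per analyte before the AoN flag is acted upon.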
Footnotes
Competing Interests: None declared (JC, DC, MM, TB). GJ has received honoraria from Bio-Rad, Roche and Abbott. CP and JY-P are employees and hold stocks of Bio-Rad.
References
- 1.Bais R. Use of Capability Index to improve laboratory analytical performance. Clin Biochem Rev. 2008;29(Suppl 1):S27–31. [PMC free article] [PubMed] [Google Scholar]
- 2.Coskun A. Six Sigma and calculated laboratory tests. Clin Chem. 2006;52:770–1. doi: 10.1373/clinchem.2005.064311. [DOI] [PubMed] [Google Scholar]
- 3.RCPAQAP Allowable Limits of Performance (ALP) https://www.rcpaqap.com.au/wp-content/uploads/2014/02/chempath/docs/ALP_Information_2013.pdf (Accessed 17 July 2015)
- 4.Badrick T. The quality control system. Clin Biochem Rev. 2008;29(Suppl 1):S67–70. [PMC free article] [PubMed] [Google Scholar]
- 5.Westgard Rules. https://www.westgard.com/mltirule.htm (Accessed 17 July 2015)
- 6.Sikaris K. Analytical quality—what should we be aiming for? Clin Biochem Rev. 2008;29(Suppl 1):S5–10. [PMC free article] [PubMed] [Google Scholar]
- 7.Kenny D, Fraser CG, Hyltoft Petersen P, Kallner A. Strategies to set global analytical quality specifications in laboratory medicine – Consensus agreement. Scand J Clin Lab Invest. 1999;59:585. [PubMed] [Google Scholar]
- 8.EFLM Strategic Conference “Defining analytical performance goals 15 years after the Stockholm Conference”. http://www.eflm.eu/files/efcc/EFLM%20Strategic%20Conference_Report.pdf (Accessed 16 July 2015)
- 9.Clinical and Laboratory Standards Institute . User verification of precision and estimation of bias; Approved Guideline. Third Edition. Wayne, PA, USA: CLSI; 2014. CLSI EP15-A3. [Google Scholar]
- 10.Clinical and Laboratory Standards Institute . Statistical quality control for quantitative measurement procedures: principles and definitions; Approved Guideline. Third Edition. Wayne, PA, USA: CLSI; 2006. CLSI C24-A3. [Google Scholar]
- 11.Burnett RW. Accurate estimation of standard deviations for quantitative methods used in clinical chemistry. Clin Chem. 1975;21:1935–8. [PubMed] [Google Scholar]
- 12.Clinical and Laboratory Standards Institute . User evaluation of between-reagent lot variation; Approved Guideline. CLSI EP26-A. Wayne, PA, USA: CLSI; 2013. [Google Scholar]
- 13.Parvin CA. QC rules for high sigma-metric processes. Clin Chem. 2014;60:S180. [Google Scholar]
- 14.Westgard JO, Smith FA, Mountain PJ, Boss S. Design and assessment of average of normals (AON) patient data algorithms to maximize run lengths for automatic process control. Clin Chem. 1996;42:1683–8. [PubMed] [Google Scholar]
