Journal of Clinical Laboratory Analysis. 2021 Jan 17;35(3):e23699. doi: 10.1002/jcla.23699

Three years’ experience of quality monitoring program on pre‐analytical errors in China

Fengfeng Kang 1, Weixing Li 1, Xiaohua Xia 1, Zhiming Shan 1
PMCID: PMC7958002  PMID: 33458892

Abstract

Background

Various errors in the procedure of specimen collection have been reported as the primary causes of pre‐analytical errors. The aim of this study was to monitor and assess the reasons and frequencies of rejected samples in China.

Methods

A pre‐analytical external quality assessment (EQA) scheme involving six quality indicators (QIs) was conducted from 2017 to 2019. Rejection rates were calculated for each QI. Differences in rejection rates over time were evaluated with the Chi‐square test. Furthermore, the 25th, 50th, and 75th percentiles of the results from all laboratories each year were taken as the optimum, desirable, and minimum levels of the performance specifications.

Results

In total, 423 laboratories submitted data continuously for six EQA rounds. The overall rejection rates were 0.2042%, 0.1709%, 0.1942%, 0.1689%, 0.1593%, and 0.1491%, respectively. The most common error was sample hemolysed (0.0514%–0.0635%), and the least common was sample not received (0.0008%–0.0014%). A significant reduction in percentages was observed for all QIs. For biochemistry and immunology, hemolysis accounted for more than half of the rejection causes, while for hematology, the primary cause shifted from incorrect fill level to sample clotted. The quality specifications improved over time, except for the optimum level.

Conclusion

The significant reduction in sample rejection error rates we observed suggests that laboratories should pay more attention to standardized specimen collection. We also provide a benchmark for QI performance specifications to help laboratories increase awareness of the critical aspects in need of improvement actions.

Keywords: external quality assessment, patient safety, pre‐analytical error, quality indicator, specimen rejection



1. INTRODUCTION

It is now clear that laboratory errors are mostly attributable to the lack of standardization or harmonization of manually intensive activities in the pre‐analytical phase. 1 , 2 Unlike in other phases, the identification of pre‐analytical errors remains challenging because most of these activities are not performed under the direct control of clinical laboratories, and existing guidelines and recommendations are insufficiently disseminated and applied. 3 , 4 Various errors in the specimen collection procedure have been reported as the primary causes of pre‐analytical errors. 5 When a specimen is rejected, re‐collection may prolong turn‐around time (TAT) and affect patient care, which can negatively influence clinician decision making and the initiation of timely treatment. Thus, improving specimen quality is a key factor in assuring the desired patient outcomes.

Quality indicators (QIs) have proven to be a suitable tool for monitoring laboratory performance throughout the total testing process (TTP), especially in the pre‐analytical and post‐analytical phases. 6 According to ISO 15189:2012, clinical laboratories should identify critical TTP activities and implement QIs in order to highlight and monitor errors when they occur. In recent decades, many efforts have been made toward QI harmonization. The working group “Laboratory Errors and Patient Safety” of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) launched a project in 2008 aimed at defining a common Model of Quality Indicators (MQI). 7 Other programs on QIs have been organized and implemented in several countries. 8 , 9 , 10 , 11

Currently, the usual external quality assessment (EQA) schemes focus mainly on the analytical phase of laboratory work. In this study, we designed a pre‐analytical EQA scheme to evaluate pre‐analytical errors in specimen collection, which included six QIs: incorrect sample type, incorrect sample container, incorrect fill level, sample clotted, sample hemolysed, and sample not received. To assess the effect and efficiency of the pre‐analytical EQA scheme, the quality monitoring program was conducted with a retrospective analysis of the reasons and frequencies of rejected samples, providing a basis for quality improvement.

2. MATERIALS AND METHODS

2.1. Study design

The pre‐analytical EQA scheme was conducted twice a year from 2017 to 2019 (six EQA rounds in total: 201701, 201702, 201801, 201802, 201901, and 201902), according to Model 3, the interpretive proficiency testing scheme described in ISO/IEC 17043:2010. 12 In this type of EQA scheme, the “EQA sample” may be a questionnaire or case study circulated by the EQA provider to each participant for the return of specific answers, which differs significantly from the samples used in traditional EQA schemes. Clinical laboratories that had already participated in the biochemistry, immunology, and hematology EQA programs organized by the Zhejiang Centre for Clinical Laboratories during this period were included in this study.

An electronic questionnaire comprising two parts was sent to each laboratory. The first part covered general laboratory information, including the type of laboratory, the laboratory information system (LIS), and the laboratory personnel. The second part collected data on samples rejected for the errors listed in Table 1, together with the total number of samples for clinical biochemistry, immunology, and hematology tests in the assigned month (June for the first round and November for the second round of each year). Laboratories were asked to submit the questionnaire within 1 month of receipt. After each round of the pre‐analytical EQA scheme, a summary report was sent to each participant.

TABLE 1. List of QIs on specimen rejection

Code | Quality indicator | Definition | Calculation formula
QI 1 | Incorrect sample type | Sample with a wrong or inappropriate sample matrix (e.g., whole blood instead of plasma). Note: drawing blood into the wrong vacuum collection tube also counts as incorrect sample type, because the inappropriate sample type is the root cause that testing cannot continue. | Number of samples of wrong or inappropriate type / total number of samples over the same period
QI 2 | Incorrect sample container | Sample collected in the wrong container with no change in sample type (e.g., non‐sterile, unsealed, or damaged containers). Note: drawing blood into the wrong vacuum collection tube is not included, because there the root cause is the inappropriate sample type. | Number of samples collected in the wrong container / total number of samples over the same period
QI 3 | Incorrect fill level | Sample with insufficient or excessive sample volume | Number of samples with unsatisfactory sample volume / total number of samples over the same period
QI 4 | Sample clotted | Sample clotted | Number of samples clotted / total number of samples over the same period
QI 5 | Sample hemolysed | Sample with free Hb > 0.5 g/L or visible hemolysis | Number of samples with free Hb > 0.5 g/L or visible hemolysis / total number of samples over the same period
QI 6 | Sample not received | Sample collected but not received by the laboratory, or lost | Number of samples not received / total number of samples over the same period
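
All six calculation formulas in Table 1 share the same structure: the number of samples rejected for a given cause divided by the total number of samples over the same period. The following Python sketch (illustrative only; it is not part of the study's software, and the counts shown are made up) computes the six QI rates as percentages from a laboratory's monthly tallies.

```python
# Illustrative sketch of the Table 1 formulas; the counts below are hypothetical.

REJECTION_CAUSES = [
    "incorrect_sample_type",   # QI 1
    "incorrect_container",     # QI 2
    "incorrect_fill_level",    # QI 3
    "sample_clotted",          # QI 4
    "sample_hemolysed",        # QI 5
    "sample_not_received",     # QI 6
]

def qi_rates(rejected_counts: dict, total_samples: int) -> dict:
    """Each QI rate = rejected samples for that cause / total samples, as a percentage."""
    return {
        cause: 100.0 * rejected_counts.get(cause, 0) / total_samples
        for cause in REJECTION_CAUSES
    }

# Example: one month of hematology tests in a single (hypothetical) laboratory.
rates = qi_rates(
    {"sample_clotted": 6, "incorrect_fill_level": 8, "sample_hemolysed": 3},
    total_samples=20_000,
)
for cause, rate in rates.items():
    print(f"{cause}: {rate:.4f}%")
```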

2.2. Statistical analysis

A retrospective analysis of the data obtained from laboratories that participated consecutively in all six EQA rounds is presented in this study. All statistical analyses were performed with IBM SPSS Statistics for Windows (version 20.0; IBM Corporation, Armonk, NY, USA) and Microsoft Excel 2010 (Microsoft Corporation). After removal of obviously incorrect data, the “overall rejection rate,” the “overall rejection rate due to each error type,” and the “rejection rate of each laboratory due to each error type” were calculated for each EQA round.

The Kolmogorov‐Smirnov test was performed to check the normality of the distribution of the error rates and to select appropriate parametric or non‐parametric statistics. The differences in error rates between the first (201701) and last (201902) rounds were calculated and evaluated by the Chi‐square test. A p‐value of less than 0.05 was considered statistically significant.
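
As a rough illustration of this comparison (the authors used SPSS, and their exact procedure is not described beyond “Chi‐square test”; the 2 × 2 contingency layout below, rejected versus accepted samples, is an assumption), the overall counts from Table 2 can be tested as follows.

```python
# Sketch of a Chi-square comparison of overall rejection rates between the
# first (201701) and last (201902) rounds, using the counts reported in Table 2.
# The 2x2 layout (rejected vs. accepted) is an assumption for illustration.
from scipy.stats import chi2_contingency

rejected_201701, total_201701 = 35_243, 17_261_124
rejected_201902, total_201902 = 37_587, 25_133_829

table = [
    [rejected_201701, total_201701 - rejected_201701],  # round 201701
    [rejected_201902, total_201902 - rejected_201902],  # round 201902
]

chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")  # p < 0.001, consistent with Table 2
```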

The preliminary performance specifications were established on the basis of the data collected in the second EQA round of each year. As the Kolmogorov‐Smirnov test showed that the distribution of the error rates was non‐normal, non‐parametric statistics were selected. The 25th, 50th, and 75th percentiles of the results from all laboratories were taken as the optimum, desirable, and minimum performance levels, according to the proposal by Fraser et al. 13 The 95% confidence intervals of the percentiles were calculated with bootstrap statistics in SPSS.
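
A minimal sketch of this percentile approach is shown below (the authors computed the bootstrap confidence intervals in SPSS; the ordinary percentile bootstrap and the simulated per‐laboratory rates used here are assumptions for illustration).

```python
# Sketch: 25th/50th/75th percentiles of per-laboratory error rates as the
# optimum/desirable/minimum performance levels, with percentile-bootstrap 95% CIs.
# The simulated data and the resampling scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def performance_specs(rates: np.ndarray, n_boot: int = 2000) -> dict:
    """Return {level: (point estimate, (ci_low, ci_high))} for each percentile level."""
    levels = {"optimum": 25, "desirable": 50, "minimum": 75}
    specs = {}
    for name, q in levels.items():
        estimate = np.percentile(rates, q)
        boot = [
            np.percentile(rng.choice(rates, size=rates.size, replace=True), q)
            for _ in range(n_boot)
        ]
        specs[name] = (estimate, (np.percentile(boot, 2.5), np.percentile(boot, 97.5)))
    return specs

# Example with simulated hemolysis rejection rates (%) from 423 laboratories.
simulated_rates = rng.lognormal(mean=-3.5, sigma=1.0, size=423) / 10
for level, (est, (ci_low, ci_high)) in performance_specs(simulated_rates).items():
    print(f"{level}: {est:.4f}% (95% CI {ci_low:.4f}-{ci_high:.4f})")
```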

3. RESULTS

3.1. General information on participating laboratories

In total, 532, 595, and 434 laboratories participated in the pre‐analytical EQA scheme in 2017, 2018, and 2019, respectively. Of the responding laboratories, 423 submitted data continuously and completely for all six EQA rounds and were included in the final analysis. Most laboratories (371/423; 87.7%) were public, while the remaining 52 were private. Among the public laboratories, participants from general hospitals accounted for 65.2% (242/371) and specialized hospitals for 34.8% (129/371).

3.2. Overall rejection rate

During the study period, the total number of samples increased significantly, as displayed in Table 2. The largest sample volume was seen in hematology and the smallest in immunology. The overall rejection rates showed a significant reduction over time for all three disciplines. The highest rejection rate was observed in clinical biochemistry, which also showed the largest reduction.

TABLE 2. Total sample tests for 423 laboratories and corresponding rejection rates: overall, biochemistry, immunology, and hematology

Discipline | Measure | 201701 | 201702 | 201801 | 201802 | 201901 | 201902 | Difference (%) a | p-value **
Overall | Total samples, n | 17,261,124 | 18,961,868 | 18,488,068 | 20,921,397 | 25,018,326 | 25,133,829 | |
Overall | Rejected samples, n | 35,243 | 32,399 | 35,908 | 35,344 | 39,363 | 37,587 | |
Overall | Rejections, % | 0.2042 | 0.1709 | 0.1942 | 0.1689 | 0.1573 | 0.1495 | −26.8 | <0.001
Biochemistry | Total samples, n | 4,579,788 | 5,353,166 | 5,194,832 | 5,879,552 | 7,372,057 | 7,256,575 | |
Biochemistry | Rejected samples, n | 13,100 | 11,064 | 12,162 | 12,654 | 13,542 | 12,853 | |
Biochemistry | Rejections, % | 0.286 | 0.2067 | 0.2341 | 0.2152 | 0.1837 | 0.1771 | −38.1 | <0.001
Immunology | Total samples, n | 4,212,350 | 4,677,645 | 4,267,843 | 4,731,471 | 6,721,045 | 6,625,359 | |
Immunology | Rejected samples, n | 5,478 | 5,338 | 6,919 | 6,491 | 6,704 | 6,809 | |
Immunology | Rejections, % | 0.13 | 0.1141 | 0.1621 | 0.1372 | 0.0997 | 0.1028 | −20.9 | <0.001
Hematology | Total samples, n | 8,468,986 | 8,931,057 | 9,025,393 | 10,310,374 | 10,925,224 | 11,251,895 | |
Hematology | Rejected samples, n | 16,665 | 15,997 | 16,827 | 16,199 | 19,117 | 17,925 | |
Hematology | Rejections, % | 0.1968 | 0.1791 | 0.1864 | 0.1571 | 0.1750 | 0.1593 | −19.1 | <0.001
a Difference (%) between the last round (201902) and the first round (201701), calculated as (rate in 201902 − rate in 201701) / rate in 201701 × 100; for the overall rate, (0.1495 − 0.2042) / 0.2042 × 100 ≈ −26.8%.

** p-value of the Chi‐square test evaluating the difference in error rates between the first (201701) and last (201902) rounds.

3.3. Overall rejection rate due to six pre‐analytical errors

The categories and frequencies of sample rejection causes are shown in Table 3. The highest error rates were observed for sample hemolysed, followed by sample clotted and incorrect fill level. Sample not received consistently showed the lowest prevalence, at approximately 0.001%. The rejection rates for all causes decreased significantly over time. The greatest decrease was observed for incorrect sample type, from 0.0427% to 0.0197%, while the smallest was seen for sample clotted.

TABLE 3. Overall rejection rates due to six pre‐analytical errors

Quality indicator | Measure | 201701 | 201702 | 201801 | 201802 | 201901 | 201902 | Difference (%) a | p-value **
Incorrect sample type | Rejected samples, n | 7377 | 6447 | 6538 | 5772 | 5330 | 4955 | |
Incorrect sample type | Rejections, % | 0.0427 | 0.034 | 0.0354 | 0.0276 | 0.0213 | 0.0197 | −53.9 | <0.001
Incorrect container | Rejected samples, n | 2992 | 3095 | 3109 | 3012 | 2959 | 2728 | |
Incorrect container | Rejections, % | 0.0173 | 0.0163 | 0.0168 | 0.0144 | 0.0118 | 0.0108 | −37.6 | <0.001
Incorrect fill level | Rejected samples, n | 9124 | 7002 | 8415 | 7924 | 9315 | 9349 | |
Incorrect fill level | Rejections, % | 0.0529 | 0.0369 | 0.0455 | 0.0379 | 0.0372 | 0.0371 | −29.9 | <0.001
Sample clotted | Rejected samples, n | 5349 | 5597 | 5839 | 5607 | 7715 | 7392 | |
Sample clotted | Rejections, % | 0.0310 | 0.0295 | 0.0316 | 0.0268 | 0.0308 | 0.0294 | −5.2 | 0.004
Sample hemolysed | Rejected samples, n | 10,166 | 10,021 | 11,740 | 12,797 | 13,759 | 12,972 | |
Sample hemolysed | Rejections, % | 0.0589 | 0.0528 | 0.0635 | 0.0612 | 0.055 | 0.0514 | −12.7 | <0.001
Sample not received | Rejected samples, n | 235 | 237 | 267 | 232 | 285 | 191 | |
Sample not received | Rejections, % | 0.0014 | 0.0012 | 0.0014 | 0.0011 | 0.0011 | 0.0008 | −42.9 | <0.001
a Difference (%) between the last round (201902) and the first round (201701), calculated as (rate in 201902 − rate in 201701) / rate in 201701 × 100.

** p-value of the Chi‐square test evaluating the difference in error rates between the first (201701) and last (201902) rounds.

Among the reasons for rejected samples, hemolysis accounted for 46.0% and 43.4% of rejections in the first round for biochemistry and immunology, respectively, with slightly increased proportions in the last round, as shown in Figure 1. For hematology, the main rejection cause was incorrect fill level in the first round, whereas sample clotted and incorrect fill level were the leading causes in the last round.

FIGURE 1. Distribution of sample rejection causes in rounds 201701 and 201902 for biochemistry, immunology, and hematology.

3.4. Preliminary performance specification

Optimum, desirable, and minimum levels of the performance specifications for the six QIs, based on the state of the art, are presented in Table 4. The ranges between the optimum and minimum levels were quite wide for most QIs. The minimum and desirable performance specifications for most QIs improved over time, whereas the optimum level became less stringent (i.e., the 25th percentile increased) for some QIs, such as incorrect sample container and sample hemolysed.

TABLE 4. Preliminary performance specifications based on the 25th, 50th, and 75th percentiles of results, with 95% confidence intervals, for the pre‐analytical QIs. Values are rejection rates (%).

Quality indicator | Year | 25th percentile (95% CI) | 50th percentile (95% CI) | 75th percentile (95% CI)
Incorrect sample type | 2017 | 0.0037 (0.0012, 0.006) | 0.0208 (0.0163, 0.0263) | 0.0619 (0.0546, 0.079)
Incorrect sample type | 2018 | 0.0025 (0, 0.0044) | 0.0164 (0.0123, 0.0204) | 0.0553 (0.0416, 0.0744)
Incorrect sample type | 2019 | 0.0018 (0, 0.0042) | 0.0126 (0.0105, 0.0161) | 0.0359 (0.0283, 0.044)
Incorrect container | 2017 | 0 (0, 0) | 0.008 (0.0061, 0.0107) | 0.0311 (0.0241, 0.0375)
Incorrect container | 2018 | 0 (0, 0) | 0.0077 (0.0054, 0.0111) | 0.0267 (0.0217, 0.034)
Incorrect container | 2019 | 0.0016 (0, 0.0029) | 0.0089 (0.0074, 0.0106) | 0.0235 (0.019, 0.0275)
Incorrect fill level | 2017 | 0.0065 (0.0027, 0.0111) | 0.0276 (0.0215, 0.0326) | 0.0733 (0.0599, 0.0919)
Incorrect fill level | 2018 | 0.0057 (0, 0.0084) | 0.0251 (0.0191, 0.0295) | 0.0672 (0.0558, 0.0858)
Incorrect fill level | 2019 | 0.0074 (0.0049, 0.0087) | 0.0229 (0.02, 0.0271) | 0.0521 (0.0458, 0.0667)
Sample clotted | 2017 | 0.0213 (0.0149, 0.0275) | 0.0622 (0.0511, 0.0718) | 0.1379 (0.1136, 0.1653)
Sample clotted | 2018 | 0.0216 (0.0164, 0.026) | 0.0601 (0.05, 0.0721) | 0.1261 (0.1118, 0.1441)
Sample clotted | 2019 | 0.0192 (0.0143, 0.0229) | 0.051 (0.0453, 0.0601) | 0.1213 (0.1046, 0.1439)
Sample hemolysed | 2017 | 0.0048 (0.0018, 0.008) | 0.0281 (0.024, 0.0357) | 0.0857 (0.0709, 0.1037)
Sample hemolysed | 2018 | 0.005 (0.0033, 0.0083) | 0.0305 (0.0235, 0.0375) | 0.087 (0.075, 0.106)
Sample hemolysed | 2019 | 0.008 (0.0066, 0.0099) | 0.0281 (0.0225, 0.0328) | 0.0757 (0.0666, 0.0885)
Sample not received | 2017 | 0 (0, 0) | 0 (0, 0) | 0 (0, 0)
Sample not received | 2018 | 0 (0, 0) | 0 (0, 0) | 0 (0, 0)
Sample not received | 2019 | 0 (0, 0) | 0 (0, 0) | 0 (0, 0)

4. DISCUSSION

The results of our study show the performance of six pre‐analytical QIs on specimen rejection in laboratories in Zhejiang Province, China. The overall rejection rate was 0.2042% in the first round and decreased to 0.1491% in the last round, a statistically significant reduction. These rates were somewhat lower than the data reported from the College of American Pathologists Q‐Probes and Q‐Tracks studies, which ranged from 0.2% to 0.83%. 14 , 15 , 16 , 17 , 18 , 19 This may partly be because the overall rejection rate in this study was calculated from the six QIs only; other errors, such as barcoding and sample labeling errors, may also lead to specimen rejection.

Sample hemolysis is often the most frequent source of pre‐analytical error. 16 , 20 It was also the primary rejection cause for biochemistry and immunology in this study, varying from 0.05% to 0.06%. Ricós et al reported similar results. 21 Hemolysis may occur at any stage of sample collection. Lack of knowledge of blood collection procedures among phlebotomy personnel is noted as the biggest problem, including inappropriate choice of phlebotomy equipment, tube additives, and collection site. 22 Other factors, including improper transport temperature, excessive pre‐analytical TAT, or unsuitable centrifugation conditions, may also lead to higher hemolysis rates.

For hematology, sample clotted and incorrect fill level were the main reasons for rejection. The overall rejection rate for sample clotted stayed around 0.03% over time and showed the smallest change. Results from other studies vary greatly: Ricós et al reported sample clotted rates for hematology of 0.09% in Spain and 0.30% in the USA, 21 while Llopis et al reported a clotting‐related rejection rate of 0.054% in the Spanish pre‐analytical quality monitoring program. 23 The main cause of clotting is directly related to the blood collection process and is attributed to human factors, such as the absence of a standardized collection procedure, insufficient mixing after blood withdrawal, and prolonged storage. 24 , 25

The greatest decrease was observed in the rate of incorrect sample type. A similar trend was seen for incorrect sample container, which accounted for a relatively small proportion of rejection causes in all three disciplines. These rates were similar to those in Llopis's study, which reported incorrect container rates of 0.013% and 0.009% in two time periods. 23 Sample not received was the least frequent error and also showed a significant decrease, whereas the studies in Spain and in the USA reported much higher rates of 0.23% and 0.01%, respectively. 21

To identify and reduce pre‐analytical errors in specimen collection, the following practices might be suggested: (1) specific time intervals or wards with higher error rates should be identified through intra‐laboratory QI monitoring; (2) tools such as failure mode and effects analysis (FMEA) or root cause analysis (RCA) can be used to check and review the errors 26 ; (3) nurses, and especially medical interns, should be formally trained in standardized specimen collection and transportation 27 ; and (4) suitable risk management strategies should be put into operation to prevent risks in patient care. 28

The 25th, 50th, and 75th percentiles of the results obtained from all laboratories reflect the state of the art. Proposing three performance levels should encourage laboratories to improve gradually and to recognize a possible negative trend when their performance shifts from the optimum to the desirable or minimum level. If the three levels of a performance specification are identical, only one specification is required; therefore, the specification for sample not received should be 0%. Certainly, the specifications are not static and should be adjusted according to the latest monitoring results. For most QIs, as error rates declined, the corresponding quality specifications improved over time, especially at the minimum and desirable levels. If the new state of the art does not improve, the previous performance specifications should remain in effect.
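
As an illustration of how a laboratory might position its own QI result against the three levels (a hypothetical helper, not part of the EQA scheme; the thresholds are taken from the 2019 “sample hemolysed” row of Table 4, and lower rejection rates are treated as better performance):

```python
# Hypothetical helper: classify a laboratory's QI rejection rate (%) against the
# optimum/desirable/minimum performance levels, lower rates being better.

def classify(rate: float, optimum: float, desirable: float, minimum: float) -> str:
    if rate <= optimum:
        return "optimum"
    if rate <= desirable:
        return "desirable"
    if rate <= minimum:
        return "minimum"
    return "below minimum: improvement action needed"

# 2019 state of the art for "sample hemolysed" (Table 4): 25th/50th/75th percentiles.
print(classify(0.0200, optimum=0.0080, desirable=0.0281, minimum=0.0757))  # -> "desirable"
```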

As mentioned above, there was a significant downward trend in the overall rejection rate. Periodic participation in pre‐analytical quality monitoring programs is encouraged. 7 With continuous inter‐laboratory monitoring, a laboratory can compare its results with others and work to reduce the risk to an acceptable level by decreasing the frequency of the errors captured by each QI or by increasing their detectability. 29 We believe that the pre‐analytical EQA scheme is a good way to assess and manage pre‐analytical errors.

A limitation of our study is that we were unable to collect the data directly from the laboratories' LIS. Laboratories with good practice are often more willing to report their data; hence, the results of this program may over‐represent laboratories with relatively better performance. Strengthening the informatics infrastructure for external laboratory monitoring is the next step.

5. CONCLUSION

The significant reduction in sample rejection rates we observed suggests that laboratories should pay more attention to standardized specimen collection. We also provide a benchmark of performance specifications for pre‐analytical QIs to help laboratories increase awareness of the critical aspects in need of improvement actions.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

AUTHOR CONTRIBUTIONS

Fengfeng Kang designed the study and drafted the manuscript. Zhiming Shan and Xiaohua Xia collected and analyzed the data. Weixing Li revised the article. All authors reviewed the manuscript and approved the final manuscript.

ACKNOWLEDGMENTS

We thank the clinical laboratories participating in this quality monitoring program. This work was supported by Zhejiang Provincial Project for Medical and Health Science and Technology (Grant No. 2018KY009); Project for Science Technology Department of Zhejiang Province (Grant No. 2020C35057).

DATA AVAILABILITY STATEMENT

All the data related to this work are available from the corresponding author upon request.

REFERENCES

1. Lippi G, Baird GS, Banfi G, et al. Improving quality in the preanalytical phase through innovation, on behalf of the European Federation for Clinical Chemistry and Laboratory Medicine (EFLM) Working Group for Preanalytical Phase (WG‐PRE). Clin Chem Lab Med. 2017;55:489‐500.
2. Lima‐Oliveira G, Volanski W, Lippi G, Picheth G, Guidi GC. Pre‐analytical phase management: a review of the procedures from patient preparation to laboratory analysis. Scand J Clin Lab Invest. 2017;77:153‐163.
3. Lippi G, Simundic AM; European Federation for Clinical Chemistry and Laboratory Medicine (EFLM) Working Group for Preanalytical Phase (WG‐PRE). The EFLM strategy for harmonization of the preanalytical phase. Clin Chem Lab Med. 2018;56:1660‐1666.
4. Giavarina D, Lippi G. Blood venous sample collection: recommendations overview and a checklist to improve quality. Clin Biochem. 2017;50:568‐573.
5. Lippi G. Governance of preanalytical variability: travelling the right path to the bright side of the moon? Clin Chim Acta. 2009;404:32‐36.
6. Novis DA. Detecting and preventing the occurrence of errors in the practices of laboratory medicine and anatomic pathology: 15 years’ experience with the College of American Pathologists’ Q‐PROBES and Q‐TRACKS programs. Clin Lab Med. 2004;24:965‐978.
7. Sciacovelli L, Panteghini M, Lippi G, et al. Defining a roadmap for harmonizing quality indicators in Laboratory Medicine: a consensus statement on behalf of the IFCC Working Group "Laboratory Error and Patient Safety" and EFLM Task and Finish Group "Performance specifications for the extra‐analytical phases". Clin Chem Lab Med. 2017;55:1478‐1488.
8. Shcolnik W, de Oliveira CA, de Sao Josè AS, de Oliveira Galoro CA, Plebani M, Burnett D. Brazilian laboratory indicators program. Clin Chem Lab Med. 2012;50:1923‐1934.
9. Kirchner MJ, Funes VA, Adzet CB, et al. Quality indicators and specifications for key processes in clinical laboratories: a preliminary experience. Clin Chem Lab Med. 2007;45:672‐677.
10. Barth JH. Clinical quality indicators in laboratory medicine. Ann Clin Biochem. 2012;49:9‐16.
11. Fei Y, Kang F, Wang W, et al. Preliminary probe of quality indicators and quality specification in total testing process in 5753 laboratories in China. Clin Chem Lab Med. 2016;54:1337‐1345.
12. ISO/IEC 17043:2010. Conformity Assessment - General Requirements for Proficiency Testing. Geneva, Switzerland: International Organization for Standardization; 2010.
13. Fraser CG, Hylton Petersen P, Libeer J‐C, Ricos C. Proposal for setting generally applicable quality goals solely based on biology. Ann Clin Biochem. 1997;34:1‐8.
14. Dale JC, Novis DA. Outpatient phlebotomy success and reasons for specimen rejection: a Q‐Probes study. Arch Pathol Lab Med. 2002;126:416‐419.
15. Zarbo RJ, Jones BA, Friedberg RC, et al. Q‐Tracks: a College of American Pathologists program of continuous laboratory monitoring and longitudinal performance tracking. Arch Pathol Lab Med. 2002;126:1036‐1044.
16. Jones BA, Calam RR, Howanitz PJ. Chemistry specimen acceptability, a College of American Pathologists Q‐Probes study of 453 laboratories. Arch Pathol Lab Med. 1997;121:19‐26.
17. Nakhleh RE, Souers RJ, Bashleben CP, et al. Fifteen years’ experience of a College of American Pathologists program for continuous monitoring and improvement. Arch Pathol Lab Med. 2014;138:1150‐1155.
18. Karcher DS, Lehman CM. Clinical consequences of specimen rejection: a College of American Pathologists Q‐Probes analysis of 78 clinical laboratories. Arch Pathol Lab Med. 2014;138:1003‐1008.
19. Meier FA, Souers RJ, Howanitz PJ, et al. Seven Q‐Tracks monitors of laboratory quality drive general performance improvement: experience from the College of American Pathologists Q‐Tracks program 1999–2011. Arch Pathol Lab Med. 2015;139:762‐775.
20. Goswami B, Singh B, Chawla R, Mallika V. Evaluation of errors in a clinical laboratory: a one‐year experience. Clin Chem Lab Med. 2010;48:63‐66.
21. Ricós C, García‐Victoria M, Fuente BDL. Quality indicators and specifications for the extra‐analytical phases in clinical laboratory management. Clin Chem Lab Med. 2004;42:578‐582.
22. Simundic AM, Topic E, Nikolac N, Lippi G. Hemolysis detection and management of hemolysed specimens. Biochem Med. 2010;20:154‐159.
23. Llopis MA, Bauca JM, Barba N, Alvarez V, Ventura M, Ibarz M. Spanish Preanalytical Quality Monitoring Program (SEQC), an overview of 12 years’ experience. Clin Chem Lab Med. 2017;55:530‐538.
24. Li HY, Huang XN, Yang YC, et al. Reduction of preanalytical errors in clinical laboratory through multiple aspects and whole course intervention measures. J Evid Based Med. 2014;7:172‐177.
25. Linskens EA, Devreese KM. Pre‐analytical stability of coagulation parameters in plasma stored at room temperature. Int J Lab Hematol. 2018;40:292‐303.
26. Green SF. The cost of poor blood specimen quality and errors in preanalytical processes. Clin Biochem. 2013;46:1175‐1179.
27. Romero A, Cobos A, Gómez J, Muñoz M. Role of training activities for the reduction of pre‐analytical errors in laboratory samples from primary care. Clin Chim Acta. 2012;413:166‐169.
28. Romero A, Gómez‐Salgado J, Domínguez‐Gómez JA, Ruiz‐Frutos C. Integrating research techniques to improve quality and safety in the preanalytical phase. Lab Med. 2018;49:179‐189.
29. Karadağ C, Demirel NN. Continual improvement of the pre‐analytical process in a public health laboratory with quality indicators‐based risk management. Clin Chem Lab Med. 2019;57:1530‐1538.
