RESEARCH ARTICLE

J Pathol Inform 2014;5:13
Autoverification in a core clinical chemistry laboratory at an academic medical center
Matthew D Krasowski1, Scott R Davis1, Denny Drees1, Cory Morris1, Jeff Kulhavy1, Cheri Crone1, Tami Bebber1, Iwa Clark1, David L Nelson1, Sharon Teul1, Dena Voss1, Dean Aman2, Julie Fahnle2, John L Blau3
1 Department of Pathology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
2 Department of Pathology, Hospital Computing Information Systems, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
3 Department of Pathology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA; Department of Pathology, University of Michigan, Ann Arbor, MI, USA
Date of Submission: 17-Dec-2013
Date of Acceptance: 13-Feb-2014
Date of Web Publication: 28-Mar-2014
Correspondence Address: Matthew D Krasowski, Department of Pathology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
Source of Support: None. Conflict of Interest: None.
DOI: 10.4103/2153-3539.129450
Abstract
Background: Autoverification is a process of using computer-based rules to verify clinical laboratory test results without manual intervention. To date, there is little published data on the use of autoverification over the course of years in a clinical laboratory. We describe the evolution and application of autoverification in an academic medical center clinical chemistry core laboratory.

Subjects and Methods: At the institution of the study, autoverification developed from rudimentary rules in the laboratory information system (LIS) to extensive and sophisticated rules mostly in middleware software. Rules incorporated decisions based on instrument error flags, interference indices, analytical measurement ranges (AMRs), delta checks, dilution protocols, results suggestive of compromised or contaminated specimens, and 'absurd' (physiologically improbable) values.

Results: The autoverification rate for tests performed in the core clinical chemistry laboratory has increased over the course of 13 years from 40% to the current overall rate of 99.5%. A high percentage of critical values now autoverify. The highest rates of autoverification occurred with the most frequently ordered tests such as the basic metabolic panel (sodium, potassium, chloride, carbon dioxide, creatinine, blood urea nitrogen, calcium, glucose; 99.6%), albumin (99.8%), and alanine aminotransferase (99.7%). The lowest rates of autoverification occurred with some therapeutic drug levels (gentamicin, lithium, and methotrexate) and with serum free light chains (kappa/lambda), mostly due to the need for offline dilution and manual filing of results. Rules also caught very rare occurrences such as plasma albumin exceeding total protein (usually indicative of an error such as a short sample or bubble that evaded detection) and marked discrepancy between total bilirubin and the spectrophotometric icteric index (usually due to interference with the bilirubin assay by an immunoglobulin (Ig) M monoclonal gammopathy).

Conclusions: Our results suggest that a high rate of autoverification is possible with modern clinical chemistry analyzers. The ability to autoverify a high percentage of results increases productivity and allows clinical laboratory staff to focus attention on the small number of specimens and results that require manual review and investigation.

Keywords: Algorithms, clinical chemistry, clinical laboratory information system, Epstein-Barr virus, informatics
How to cite this article: Krasowski MD, Davis SR, Drees D, Morris C, Kulhavy J, Crone C, Bebber T, Clark I, Nelson DL, Teul S, Voss D, Aman D, Fahnle J, Blau JL. Autoverification in a core clinical chemistry laboratory at an academic medical center. J Pathol Inform 2014;5:13
Introduction
Autoverification is a process whereby clinical laboratory results are released without manual human intervention. [1],[2],[3],[4],[5],[6] Autoverification uses predefined computer rules to govern release of results. [1] Autoverification rules may include decisions based on instrument error flags (e.g. short sample, possible bubbles, or clot), interference indices (e.g. hemolysis, icterus, and lipemia), [7] reference ranges, analytical measurement range (AMR), critical values, and delta checks (comparison of current value to previous values, if available, from the same patient). [8],[9],[10],[11],[12],[13] Rules may also define potentially absurd (physiologically improbable) values for some analytes and additionally may control automated dilutions and conditions for repeat analysis of specimens. More sophisticated application of autoverification rules can generate customized interpretive text based on patterns of laboratory values. [3],[14] Autoverification is commonly performed using the laboratory information system (LIS) and/or middleware software that resides between the laboratory instruments and the LIS. [1],[2],[3],[4],[5],[6]
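To make the flow of such rules concrete, the following is a minimal sketch (in Python) of how the rule types listed above might be combined into a single pass/fail decision for one result. The analyte limits shown are hypothetical placeholders for illustration only, not the limits used in any particular laboratory.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Result:
    analyte: str
    value: float
    flags: List[str] = field(default_factory=list)  # instrument error flags (e.g. short sample, clot)
    previous_value: Optional[float] = None           # most recent prior result, if any (for delta check)

# Hypothetical per-analyte parameters for illustration only.
RULES = {
    "potassium": {
        "amr": (1.0, 10.0),           # analytical measurement range (mmol/L)
        "absurd": (1.0, 10.0),        # physiologically improbable outside this range
        "manual_review": (2.0, 7.0),  # outside this range, hold for manual review
        "delta": 2.0,                 # maximum allowed change from the previous result
    },
}

def autoverify(result: Result) -> bool:
    """Return True if the result can be released without manual review."""
    p = RULES[result.analyte]
    if result.flags:                                          # any instrument error flag blocks release
        return False
    if not (p["amr"][0] <= result.value <= p["amr"][1]):      # outside AMR: dilute or repeat instead
        return False
    if not (p["absurd"][0] <= result.value <= p["absurd"][1]):
        return False
    if not (p["manual_review"][0] <= result.value <= p["manual_review"][1]):
        return False
    if result.previous_value is not None:                     # delta check against prior result
        if abs(result.value - result.previous_value) > p["delta"]:
            return False
    return True  # note: a critical value alone does not block autoverification here

print(autoverify(Result("potassium", 4.2, previous_value=4.0)))     # True
print(autoverify(Result("potassium", 4.2, flags=["short sample"])))  # False
```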
Autoverification can greatly reduce manual review time and effort by laboratory staff, limiting staff screen fatigue caused by reviewing and verifying hundreds to thousands of results per shift. Ideally, autoverification allows laboratory staff to focus manual review on a small portion of potentially problematic specimens and test results. [6] However, improperly designed autoverification can lead to release of results that should have been held, potentially negatively impacting patient management.
There is relatively little published literature on practical use of autoverification. There is a guideline document produced by the Clinical and Laboratory Standards Institute (CLSI) on autoverification of clinical laboratory test results which focuses on the process for validating and implementing autoverification protocols. [15] In this study, we present data on autoverification used in a clinical chemistry core laboratory in an academic medical center. The rules in this laboratory evolved over more than a decade and now result in a high autoverification rate of clinical chemistry tests.
Subjects and Methods
The institution of this study is a 734-bed tertiary care academic medical center that includes an emergency room with level one trauma capability, adult and pediatric inpatient floors, and multiple intensive care units (neonatal, pediatric, cardiovascular, medical, and surgical/neurologic). Primary care and specialty outpatient services are provided at the main medical center campus as well as a multispecialty outpatient facility located 3 miles away. Pneumatic tube transportation of specimens is available throughout the medical center. Smaller primary care clinics affiliated with the academic health system are dispersed throughout the local region. A core laboratory within the Department of Pathology provides clinical chemistry and hematopathology testing for both outpatient and inpatient services. This study focuses on the clinical chemistry division from 1/1/2000 to 9/21/2013. This study was approved by the University of Iowa Institutional Review Board as a retrospective study.
Throughout the time period of retrospective analysis, the main chemistry instrumentation in the core laboratory was from Roche Diagnostics (Indianapolis, IN, USA). By 2010, the chemistry automation line included five Modular P and four Modular E170 analyzers, with front-end automation provided by a Modular Pre-Analytic (MPA)-7 unit. In 2013, the chemistry instrumentation was replaced with a Cobas 8000 system with two c702, three c502, and five e602 analyzers, still using the MPA-7 as the front-end automation. This chemistry automation system currently supports 131 Roche assays and 14 non-Roche assays run as either open or partner channels. Other instrumentation in the core chemistry laboratory includes: Abbott Diagnostics (Abbott Park, IL, USA) Architect i1000 (running cyclosporine, sirolimus, and tacrolimus drug levels, along with human immunodeficiency virus (HIV) testing); Bio-Rad (Hercules, CA, USA) Bioplex 2200 (variety of serologic assays); and Advanced Instruments (Norwood, MA, USA) A2O automated osmometer for serum/plasma and urine osmolality measurements.
The LIS throughout the period of retrospective analysis has been Cerner (Kansas City, MO, USA) "Classic", currently version 015. The LIS is managed by University of Iowa Hospital Computing and Information Services. Roche instruments are interfaced to the LIS via Data Innovations (South Burlington, VT, USA) Instrument Manager ("Middleware") version 8.10. Instruments other than Roche are interfaced to the LIS via Data Innovations Instrument Manager version 8.12. The majority of autoverification rules are in Middleware. A small number of rules are in the LIS. The Roche and non-Roche Middleware production servers each have "shadow" backups in a separate location that can be used if the corresponding main production server fails. There are also separate "test" servers for each system that allow for initial testing and validation of rules without affecting the production servers. Service agreements for instrumentation and middleware are at a level that provides rapid response to problems.
The core laboratory has a supervisor who primarily focuses on middleware and three clinical laboratory scientists who are tasked with implementing, validating, and maintaining autoverification rules. The core laboratory employs a full-time medical technologist for quality control and improvement. A process improvement team within the chemistry division meets regularly with the medical director to review and discuss laboratory issues. Many of the core laboratory staff are cross-trained on both chemistry and hematology automated instrumentation. This provides important cross-coverage, particularly during periods of instrument, LIS, or middleware downtime.
All autoverification rules require approval by the medical director. Core laboratory staff can suspend autoverification if necessary. The goal of the core laboratory is to have continuous flow of specimens onto the automated line as much as possible, regardless of the inpatient or outpatient unit origin of the specimens. Specimens that are not suitable for the automated line (e.g. small pediatric tubes or tubes requiring manual aliquoting) are handled at a manual exception bench. There is no designation of routine versus stat for ordering of laboratory tests. By 2010, the option to order any chemistry assay on a stat basis had been removed from the electronic medical record provider ordering system and from paper requisitions. This change was based on consistent turnaround times and had approval from the hospital subcommittee overseeing laboratory testing.
The autoverification rules currently used were developed mainly over the course of 8 years, although rudimentary autoverification rules within the LIS were first used starting in 2000. Manual review limits and absurd value limits were developed based on analysis of patient data and consultation with clinical services. More involved use of autoverification followed the introduction of middleware.
Validation of autoverification rules followed CLSI guideline AUTO10-A. [15] Key elements of the validation protocol are pretesting, simulated patient testing, testing using clinical specimens, approval of documentation, and, finally, implementation and maintenance of rules. Where possible, clinical specimens with unusual properties (e.g. extremely low or high concentrations of a specific analyte, very icteric or lipemic specimens) were saved to allow for testing of rules using real specimens. Validation included thorough testing of the upper and lower limits of ranges (e.g. reference ranges, absurd ranges), including the boundaries of these ranges. Rules were tested individually and in combination. Finally, the integrity of results transmitted to the LIS and then to the hospital electronic medical record was tested. Common pitfalls encountered in the evolution of autoverification rules were unexpected instrument errors and difficulty finding samples with the values needed for testing. In cases where specimens with certain properties could not be found, simulated testing had to suffice. [Table 1] contains parameters for all the chemistry assays including manual review limits, critical values, delta checks, interference indices (hemolysis, icterus, lipemia), AMR, and auto-extended range (for automated dilution protocols).
Table 1: Parameters for all the chemistry assays including manual review limits, critical values, delta checks, interference indices (hemolysis, icterus, lipemia), AMR, and auto-extended range (for automated dilution protocols)
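As an illustration of the boundary testing described above, the following minimal sketch exercises an inclusive manual review range at its limits and just beyond them. The potassium limits shown are hypothetical, not values from Table 1; the actual validation used simulated and real clinical specimens rather than unit tests.

```python
# Hypothetical boundary tests for an inclusive manual review range.
def within_manual_review(value: float, low: float, high: float) -> bool:
    """Inclusive range check, mirroring how manual review limits are assumed to be applied."""
    return low <= value <= high

def test_manual_review_boundaries() -> None:
    low, high = 2.0, 7.0   # hypothetical potassium manual review limits (mmol/L)
    step = 0.1             # one reporting increment
    assert within_manual_review(low, low, high)              # exactly at lower limit: passes
    assert within_manual_review(high, low, high)             # exactly at upper limit: passes
    assert not within_manual_review(low - step, low, high)   # just below lower limit: fails
    assert not within_manual_review(high + step, low, high)  # just above upper limit: fails

test_manual_review_boundaries()
print("boundary checks passed")
```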
The protocols permit autoverification of critical values provided all other rules are met. Whether or not a critical value autoverifies, a printout is generated that directs laboratory staff to notify the ordering provider by telephone and to document this notification (including read-back and verification of the result by the recipient of the call). A call center within the core laboratory handles the majority of critical value reporting during business hours. When the call center is not open, technologists within the laboratory handle critical value reporting. Critical value reporting and appropriate documentation of the provider call are monitored as a quality metric. Late reports detect failures to document provider notification of critical values.
Results
[Figure 1] shows a schematic diagram of the autoverification rules used for the chemistry assays. [Table 2] specifies in greater detail the consequences of each step of this process. Some steps, such as instrument error flags or violation of manual review limits or delta checks, prevent autoverification outright. Note that a test result generating a critical value does not preclude autoverification.
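The sketch below illustrates this distinction between blocking steps and critical values, using hypothetical glucose thresholds; it is a simplification of the flow in [Figure 1] and [Table 2], not the production rules.

```python
from typing import List, Tuple

def disposition(value: float, flags: List[str],
                manual_review: Tuple[float, float],
                critical: Tuple[float, float]) -> Tuple[str, bool]:
    """Return (action, provider_call_needed) for a single result."""
    if flags or not (manual_review[0] <= value <= manual_review[1]):
        return "hold for manual review", False   # blocking steps prevent autoverification
    if value <= critical[0] or value >= critical[1]:
        return "autoverify", True                # critical value: release result, then phone provider
    return "autoverify", False

# Hypothetical glucose thresholds (mg/dL): manual review 10-1000, critical <=45 or >=500.
print(disposition(620.0, [], (10.0, 1000.0), (45.0, 500.0)))                  # ('autoverify', True)
print(disposition(620.0, ["clot detected"], (10.0, 1000.0), (45.0, 500.0)))   # ('hold for manual review', False)
```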
[Figure 2] shows the increasing rate of autoverification over the course of years in the core chemistry laboratory. Starting from an autoverification rate of 40% in 2000 using rudimentary rules within the LIS, the rate increased to 95% by 2007 with the initial implementation of middleware rules and then to 99.0% by 2010. The time period from 2005 to 2010 also saw an increase in staff productivity, with an increase in billable tests per hour from 14.9 to 19.7 [Figure 2]. During this time period, the annual testing volume increased from 2.7 million tests/year (2005) to 3.2 million tests/year (2010). Annual test volumes have remained relatively steady since 2010.
Figure 2: Increases in autoverification rate (black circles) and staff productivity (red squares)
The further increase in the autoverification rate from 99.0% in 2010 to the current rate of 99.5% in 2013 was achieved by allowing autoverification of critical values (provided manual review limits or absurd ranges are not violated) and by auto-release of results with interference by hemolysis, lipemia, or icterus. Exceeding interference limits results in a text result of "hemolyzed", "lipemic", or "icteric" (i.e. no numeric result is provided), coupled with crediting of charges for that assay. Tests cancelled due to interference can only be overridden at clinician request, with a disclaimer then appended. For critical values that autoverify, notification of the clinical service occurs after autoverification (i.e. as long as autoverification conditions are met, filing of critical value results does not wait for clinician notification).
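A minimal sketch of this interference handling follows; the index thresholds are assumed for illustration (the actual per-assay limits appear in Table 1), and charge crediting is represented only as a returned flag.

```python
# Replace the numeric result with an interference comment when a serum index
# exceeds the assay's limit; hypothetical limits for illustration only.
INDEX_LIMITS = {"hemolysis": 90, "icterus": 10, "lipemia": 150}
INDEX_COMMENT = {"hemolysis": "hemolyzed", "icterus": "icteric", "lipemia": "lipemic"}

def apply_interference(numeric_result: float, indices: dict):
    """Return (reported_result, credit_charge) for one assay on one specimen."""
    for name, measured in indices.items():
        if measured > INDEX_LIMITS[name]:
            return INDEX_COMMENT[name], True   # text result only; charge for the assay is credited
    return numeric_result, False

print(apply_interference(3.4, {"hemolysis": 250, "icterus": 2, "lipemia": 20}))  # ('hemolyzed', True)
print(apply_interference(3.4, {"hemolysis": 10, "icterus": 2, "lipemia": 20}))   # (3.4, False)
```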
[Table 3] shows current rates of autoverification for a variety of panels and individual assays. The overall rate of autoverification for all chemistry tests was 99.5%, measured over a 4-month period in 2013. The highest rates of autoverification occurred with the most frequently ordered tests such as the basic metabolic panel (sodium, potassium, chloride, carbon dioxide, creatinine, blood urea nitrogen (BUN), calcium, glucose; 99.6%), albumin (99.8%), and alanine aminotransferase (99.7%). The lowest rates of autoverification occurred with some therapeutic drug levels (e.g. gentamicin, lithium, and methotrexate) and with serum free light chains (kappa/lambda), mostly due to the need for offline dilution and manual filing of results. [Table 3] includes the most common reason certain tests or test panels failed to autoverify. For the highest volume tests such as albumin or alanine aminotransferase, problems with the specimen (e.g. short sample, possible bubble, or clot) were the most common reason for failure to autoverify.
Several rules were put in place to address specific issues with assays or protocols. The total bilirubin assay in use shows rare interference from monoclonal gammopathies (most commonly those with IgM as the heavy chain). [16] This interference can be inferred when the total bilirubin (in mg/dL) and the numeric icteric index (which usually tracks closely with total bilirubin) differ by more than 4. This interference has occurred approximately three to four times per year over the course of 3 years. An additional rare problem is the hook effect in the myoglobin assay at very high myoglobin concentrations. Dilution protocols were modified following identification of this problem. [17] A protocol for evaluation of possible toxic alcohol or glycol exposure was developed based on the osmolal gap. [18],[19]
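Two of these checks can be expressed compactly, as sketched below. The bilirubin/icteric index discrepancy threshold (>4) is taken from the text; the calculated osmolality shown uses one commonly cited formula (2 x Na + glucose/18 + BUN/2.8) and may differ from the exact protocol referenced above.

```python
def bilirubin_icterus_discrepant(total_bilirubin_mg_dl: float, icteric_index: float) -> bool:
    """Flag possible monoclonal gammopathy interference with the total bilirubin assay."""
    return abs(total_bilirubin_mg_dl - icteric_index) > 4

def osmolal_gap(measured_osm: float, na_mmol_l: float,
                glucose_mg_dl: float, bun_mg_dl: float) -> float:
    """Osmolal gap = measured osmolality minus calculated osmolality (one common formula)."""
    calculated = 2 * na_mmol_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8
    return measured_osm - calculated

print(bilirubin_icterus_discrepant(12.5, 2.0))   # True -> hold result and investigate
print(round(osmolal_gap(320, 140, 90, 14), 1))   # 30.0: an elevated gap prompts further workup
```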
Another rule was put in place to detect weakly positive (gray zone) hepatitis B surface antigen assay results. This assay was originally reported as reactive or nonreactive, but a retrospective analysis determined that a high fraction of reactive specimens with quantitative results barely above the cutoff index were either due to recent hepatitis B vaccination (the vaccine contains surface antigen) or were false positives that did not confirm with the hepatitis B surface antigen neutralization assay. [20] Middleware rules print a notification directing staff to send the specimen out for confirmation by neutralization assay and to notify the pathology resident or attending pathologist. Following clinician notification, instances where the gray zone result is attributable to recent vaccination can lead to cancellation of the test to avoid falsely labeling the patient as hepatitis B positive. This avoids downstream problems such as notification of public health authorities.
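A hypothetical sketch of this gray-zone rule is shown below; the cutoff index thresholds are assumptions for illustration, since the actual values are not given in the text.

```python
# Assumed cutoff-index thresholds: a reactive cutoff of 1.0 is conventional for many
# signal-to-cutoff assays, and the upper gray-zone bound here is invented.
REACTIVE_CUTOFF = 1.0
GRAY_ZONE_UPPER = 5.0

def hbsag_disposition(cutoff_index: float) -> str:
    if cutoff_index < REACTIVE_CUTOFF:
        return "report nonreactive"
    if cutoff_index < GRAY_ZONE_UPPER:
        # Weakly reactive: send out for neutralization confirmation and notify
        # the pathology resident or attending pathologist before reporting.
        return "gray zone: confirm by neutralization; notify pathologist"
    return "report reactive"

print(hbsag_disposition(1.8))   # 'gray zone: confirm by neutralization; notify pathologist'
```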
Discussion
Autoverification of laboratory test results is an essential component of increasing efficiency within clinical laboratories. [1],[2],[3],[4],[5],[6] Despite the importance of autoverification, relatively little literature has been published on its practical application within a clinical laboratory over the course of years.
In this report, we present our experience with autoverification in a busy automated core chemistry laboratory in an academic medical center. The autoverification rules evolved over more than a decade, with a steady increase in autoverification rate to the current rate of 99.5%. The high rate of autoverification is driven in large part by the highest volume tests or test panels (e.g. basic metabolic panel, albumin, alanine aminotransferase, and troponin T), which all have autoverification rates exceeding 99.0%. This frees up staff time to deal with assays such as certain drug levels or endocrinology tests that require offline steps such as manual dilutions, or to investigate questionable test results. Some tests in our study currently have autoverification rates under 90%; however, these tests comprise a small fraction of the total test volume.
To our knowledge, there is very little published data on autoverification of critical values. Our workflows allow for critical value autoverification, provided no other rules are violated. Critical values still require provider notification and subsequent documentation; however, communication of these results by laboratory staff is facilitated by the provider often seeing the autoverified value prior to the call. For example, the emergency treatment center and intensive care units in our medical center use electronic displays or dashboards that continuously display patient data in restricted staff areas. Phone calls to document the critical value proceed more quickly when the laboratory test result has already been seen.
Despite the advantages of autoverification, there are potential negatives that can arise. The validation of autoverification is time-consuming, and attention to detail is paramount. Even the most thorough validation plan can miss unexpected instrument error flags or other rare events. It is also not possible to test every conceivable combination of rules.
Informatics support is critical to successful implementation and maintenance of autoverification. The most common problems interfering with autoverification are interruptions of the network, LIS, middleware, and/or the interfaces between these systems. In our institution, we have maintained both production and shadow middleware servers in different geographic locations, in addition to using separate test servers for initial testing of middleware rules without compromising the production system. This has reduced the periods of time the systems are down. The other risk with autoverification, and indeed with increased automation in general, is reduction of staff (both in number of staff and in the mix of levels of training and experience) to such a degree that the staff cannot handle downtimes or other challenges without severe compromise of turnaround time.
One benefit of computer rules is to catch rare events that could elude manual verification. Two examples of such rules in our laboratory are plasma albumin exceeding total protein concentration (suggesting an instrument error on one test) and marked discrepancy between total bilirubin and icteric index (possibly caused by monoclonal gammopathy interference). [16] Each of these situations occurs less than 10 times per year in our laboratory, meaning that any particular laboratory staff member may only see such an event once a year or less. Even experienced personnel can miss such combinations of results, especially for patients who have multiple tests ordered on a specimen.
Our experience suggests that successful and continued use of autoverification requires investment in personnel and training over the course of years. Validation of autoverification rules requires high attention to detail. Rules should be based on published evidence and analysis of assays. The use of autoverification also does not obviate the need for careful quality control. Lastly, close collaboration between the clinical laboratory and computing services is key to ongoing success.
Acknowledgements
The autoverification rules presented in this manuscript, along with the instrument interfaces to middleware and the LIS on which these rules depend, were developed over the course of a number of years and would not have been possible without the hard work of many pathology and hospital computing staff. The authors gratefully acknowledge support from the Department of Pathology Informatics team (especially Sue Dane) and Hospital Computing Information Systems (especially Nick Dreyer, Karmen Dillon, Kathy Eyres, Rick Dyson, Kurt Wendel, and Jason Smith). The authors also express gratitude to Dr Ronald Feld (emeritus faculty member of the University of Iowa Department of Pathology and previous medical director of the clinical chemistry laboratory) and Sue Zaleski (former manager of the core laboratory) for supporting and encouraging early efforts to develop autoverification within the core laboratory.
References
1. Crolla LJ, Westgard JO. Evaluation of rule-based autoverification protocols. Clin Leadersh Manag Rev 2003;17:268-72.
2. Duca DJ. Autoverification in a laboratory information system. Lab Med 2002;33:21-5.
3. Guidi GC, Poli G, Bassi A, Giobelli L, Benetollo PP, Lippi G. Development and implementation of an automatic system for verification, validation and delivery of laboratory test results. Clin Chem Lab Med 2009;47:1355-60.
4. Lehman CM, Burgener R, Munoz O. Autoverification and laboratory quality. Crit Values 2009;2:24-7.
5. Pearlman ES, Bilello L, Stauffer J, Kamarinos A, Miele R, Wolfert MS. Implications of autoverification for the clinical laboratory. Clin Leadersh Manag Rev 2002;16:237-9.
6. Torke N, Boral L, Nguyen T, Perri A, Chakrin A. Process improvement and operational efficiency through test result autoverification. Clin Chem 2005;51:2406-8.
7. Vermeer HJ, Thomassen E, de Jonge N. Automated processing of serum indices used for interference detection by the laboratory information system. Clin Chem 2005;51:244-7.
8. Kim JW, Kim JQ, Kim SI. Differential application of rate and delta check on selected clinical chemistry tests. J Korean Med Sci 1990;5:189-95.
9. Lacher DA, Connelly DP. Rate and delta checks compared for selected chemistry tests. Clin Chem 1988;34:1966-70.
10. Ovens K, Naugler C. How useful are delta checks in the 21st century? A stochastic-dynamic model of specimen mix-up and detection. J Pathol Inform 2012;3:5.
11. Park SH, Kim SY, Lee W, Chun S, Min WK. New decision criteria for selecting delta check methods based on the ratio of the delta difference to the width of the reference range can be generally applicable for each clinical chemistry test item. Ann Lab Med 2012;32:345-54.
12. Sheiner LB, Wheeler LA, Moore JK. The performance of delta check methods. Clin Chem 1979;25:2034-7.
13. Wheeler LA, Sheiner LB. A clinical evaluation of various delta check methods. Clin Chem 1981;27:5-9.
14. Jones JB. A strategic informatics approach to autoverification. Clin Lab Med 2013;33:161-81.
15. CLSI. Autoverification of Clinical Laboratory Test Results; Approved Guideline (AUTO10-A). Wayne, PA, USA; 2006.
16. Bakker AJ, Mucke M. Gammopathy interference in clinical chemistry assays: Mechanisms, detection and prevention. Clin Chem Lab Med 2007;45:1240-3.
17. Kurt-Mangold M, Drees D, Krasowski MD. Extremely high myoglobin plasma concentrations producing hook effect in a critically ill patient. Clin Chim Acta 2012;414:179-81.
18. Ehlers A, Morris C, Krasowski MD. A rapid analysis of plasma/serum ethylene and propylene glycol by headspace gas chromatography. Springerplus 2013;2:203.
19. Krasowski MD, Wilcoxon RM, Miron J. A retrospective analysis of glycol and toxic alcohol ingestion: Utility of anion and osmolal gaps. BMC Clin Pathol 2012;12:1.
20. Rysgaard CD, Morris CS, Drees D, Bebber T, Davis SR, Kulhavy J, et al. Positive hepatitis B surface antigen tests due to recent vaccination: A persistent problem. BMC Clin Pathol 2013;12:15.