Journal of Pathology Informatics

ORIGINAL ARTICLE
Year: 2022  |  Volume: 13  |  Issue: 1  |  Page: 8

Measuring digital pathology throughput and tissue dropouts


George L Mutter1, David S Milstone1, David H Hwang1, Stephanie Siegmund1, Alexander Bruce2 
1 Department of Pathology, Brigham and Women's Hospital; Department of Pathology, Harvard Medical School, Boston, MA, USA
2 Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA

Correspondence Address:
Prof. George L Mutter
Department of Pathology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115
USA

Background: Digital pathology operations that precede viewing by a pathologist have a substantial impact on costs and on the fidelity of the digital image. Scan time and file size determine throughput and storage costs, whereas tissue omitted during digital capture ("dropouts") compromises downstream interpretation. We compared how these variables differ across scanners.

Methods: A 212-slide set randomly selected from a gynecologic-gestational pathology practice was used to benchmark scan time, file size, and image completeness. Workflows included the Hamamatsu S210 scanner (operated under default and optimized profiles) and the Leica GT450. Digital tissue dropouts were detected by aligned overlay of macroscopic glass slide camera images (reference) with the whole slide images created by the scanners.

Results: File size and scan time were highly correlated within each platform. The GT450, default S210, and optimized S210 differed in average file size (1.4 vs. 2.5 vs. 3.4 GB) and scan time (93 vs. 376 vs. 721 s). Dropouts were seen in 29.5% (186/631) of successful scans overall, ranging from a low of 13.7% (29/212) with the optimized S210 profile to 34.6% (73/211) with the GT450 and 40.4% (84/208) with the default S210 profile. Small dislodged fragments, "shards," were dropped in 22.2% (140/631) of slides, followed by tissue marginalized at the glass slide edges, 6.2% (39/631). "Unique dropouts," those for which no equivalent tissue appeared elsewhere in the scan, occurred in only three slides; of these, 67% (2/3) were "floaters," contaminants from other cases.

Conclusions: Scanning speed and resultant file size vary greatly by scanner type, scanner operating settings, and clinical specimen mix (tissue type, tissue area). Digital image fidelity, as measured by tissue dropout frequency and dropout type, also varies with tissue type and scanner. Dropped tissues very rarely (1/631) represent actual specimen tissue that is not represented elsewhere in the scan, so in most cases dropouts cannot alter the diagnosis. Digital pathology platforms vary in their output efficiency and in image fidelity to the glass original, and should be matched to the intended application.
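The dropout comparison described in the Methods can be thought of as differencing two registered binary tissue masks: one segmented from the macroscopic camera image of the glass slide (reference) and one from the whole slide image. The sketch below illustrates that idea only; the function names, list-of-lists mask representation, and the 1% flagging threshold are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: measure tissue "dropout" as the fraction of
# reference tissue pixels absent from the whole slide image (WSI),
# given two equal-sized binary masks that have already been aligned.
# Representation and threshold are hypothetical.

def dropout_fraction(reference_mask, wsi_mask):
    """Fraction of reference tissue pixels missing from the WSI.

    Both masks are equal-sized 2D lists of 0/1 after registration:
    1 = tissue present, 0 = background.
    """
    tissue = missing = 0
    for ref_row, wsi_row in zip(reference_mask, wsi_mask):
        for ref_px, wsi_px in zip(ref_row, wsi_row):
            if ref_px:
                tissue += 1
                if not wsi_px:  # tissue on glass, absent from scan
                    missing += 1
    return missing / tissue if tissue else 0.0

def has_dropout(reference_mask, wsi_mask, threshold=0.01):
    """Flag a slide when more than `threshold` of its reference
    tissue area is absent from the scan (threshold is illustrative)."""
    return dropout_fraction(reference_mask, wsi_mask) > threshold
```

A dislodged shard or edge-marginalized tissue would appear as a cluster of 1s in the reference mask with 0s at the same aligned positions in the WSI mask, raising the dropout fraction for that slide.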


How to cite this article:
Mutter GL, Milstone DS, Hwang DH, Siegmund S, Bruce A. Measuring digital pathology throughput and tissue dropouts.J Pathol Inform 2022;13:8-8


How to cite this URL:
Mutter GL, Milstone DS, Hwang DH, Siegmund S, Bruce A. Measuring digital pathology throughput and tissue dropouts. J Pathol Inform [serial online] 2022 [cited 2022 Jan 19 ];13:8-8
Available from: https://www.jpathinformatics.org/article.asp?issn=2153-3539;year=2022;volume=13;issue=1;spage=8;epage=8;aulast=Mutter;type=0