J Pathol Inform 2021
Pathology Visions 2020: Through the Prism of Innovation
Date of Web Publication: 24-Sep-2021
Source of Support: None, Conflict of Interest: None
How to cite this article: Pathology Visions 2020: Through the Prism of Innovation. J Pathol Inform 2021;12:37
Oral and Poster Abstracts
The virtual Pathology Visions 2020 (PV20) meeting of the Digital Pathology Association (DPA), held on October 26–29, set new records in almost every way. In response to the COVID-19 pandemic, we strategically changed the original face-to-face meeting in Orlando, FL, USA, to a virtual meeting. PV20 proved that the power of digital technology, coupled with the dedication and creativity of the DPA, can create an engaging educational milestone in promoting the advancement of digital pathology (DP) and artificial intelligence (AI). The theme of this year's meeting was “Through the Prism of Innovation,” and innovation was the word of the week!
The virtual format allowed us to include more speakers, educational content, and opportunities for networking. It also allowed attendees to access some of the prerecorded lectures in advance of the meeting to better prepare for the live question and answer (Q&A) sessions with speakers.
The record-breaking 503 attendees engaged in 3299 visits to the meeting's virtual website. They included pathologists, scientists, technologists, administrators, and industry partners who generated 119,920 desktop pageviews and 16,018 mobile pageviews to listen to lectures, visit online abstracts, and share 597 public messages and 851 private messages. We also saw international participation increase to 26.87%, and 45 residents, fellows, PhD candidates, and medical students attended. These numbers attest to the success of online communication in advancing the application of DP to healthcare and life sciences.
Michael Rivers, DPA President, kicked off the meeting with a thoughtful and stimulating opening session. In the keynote address, “Going beyond Sight, Revealing Multiplex Biology,” we heard the exciting advances pioneered by Dr. Jared Burks of MD Anderson Cancer Center. A plenary presentation from Dr. Joel Saltz of Stony Brook University introduced us to “Artificial Intelligence-Driven Pathomics: The Next Diagnostic Frontier.” The virtual format allowed attendees to preview, on their own time, the many presentations by distinguished speakers in both the Clinical Track and the Education and Research Track. The Q&A sessions, which were held in real time, provided the opportunity for attendees to interact and engage with speakers as well as each other. Vendor workshops and showcase Q&A sessions enhanced the dissemination of knowledge about the latest offerings on display.
Attendees from all over the world, including the United States, Canada, Europe, Australia, South America, and Asia, came together to hear the Chairs of four major US academic institutions discuss “Implementing and Adopting Digital Pathology for Clinical Diagnostics,” as well as an international expert panel comment on their “Perspective of DP/AI/ML and a Snapshot of the Response to the COVID-19 Pandemic.”
The regulatory and standards update provided the latest information on changes in the DP field as well as the DPA's continued endeavors with the FDA. The Connectathon panel, in collaboration with Digital Imaging and Communications in Medicine, gave us hope for a more open and interactive future.
The 45 scientific posters introduced the latest advances in the field, and attendees had the opportunity to virtually meet budding talent in DP investigation. We congratulate this year's poster award winners:
BEST CLINICAL - P12
Artificial intelligence estimation of gestational age based on microscopic appearance of placental villi
Presented by: Jeffery Goldstein, Northwestern University
BEST EDUCATION - P11
COVID challenges, digital solutions
Presented by: Christopher Girardo, Louisiana State University
BEST RESEARCH - P8
Real-time, point-of-care pathology diagnosis via embedded deep learning
Presented by: Bowen Chen, Brigham and Women's Hospital
BEST IMAGE ANALYSIS - P2
Computer-assisted approach to improve the detection of tall cell variant of papillary thyroid carcinoma
Presented by: Asmaa Aljuhani, The Ohio State University
BEST POSTER BY A TRAINEE - P44
Deep learning of attention-guided multimodal histopathology search on social media
Presented by: Andrew Schaumberg, Harvard Medical School
Trainee award recipients were also recognized at PV20. Please join us in congratulating them.
- Lin He, MD, PhD | Resident, UT Southwestern Medical Center
- Peter Louis, MD, JD, MT (ASCP) | Fellow, Vanderbilt University Medical Center
- Yan Xiang, MD, MBA | Resident, Hospital of the University of Pennsylvania.
Thirteen CME and CE credits were also offered through the College of American Pathologists. Pathology Visions features the most up-to-date information on DP and AI. The recorded presentations will be available on the DPA website, and abstracts of the oral and poster presentations are published in this edition of the Journal of Pathology Informatics.
Thank you to the members of this year's Program Committee who stopped at nothing to ensure the success of this meeting, and a very special thanks to Ms. Abbey Norris, who went above and beyond the call of duty to transition this meeting to the virtual platform midway through the planning process.
- Co-Chair Sylvia Asa, MD, PhD, University Hospitals Cleveland (USA)
- Co-Chair Marilyn Bui, MD, PhD, Moffitt Cancer Center (USA)
- Junya Fukuoka, MD, PhD, Nagasaki University School of Medicine (Japan)
- Eric Glassy, MD, Affiliated Pathologists Medical Group (USA)
- Mike Isaacs, Washington University School of Medicine (USA)
- Lisa Manning, MLT, BSc. (Hon), Shared Health Manitoba (Canada)
- Anil Parwani, MD, PhD, MBA, Ohio State University (USA)
- Hari Trivedi, MD, Emory University (USA)
- Christopher Ung, MSc, MBA, HistoGenex (USA)
- Mark Zarella, PhD, Johns Hopkins University (USA)
It is very energizing to see the resilience of the DP/AI community thriving in the face of such a challenge. The time of unleashing the power of DP/AI to benefit precision medicine has arrived. We hope you will find the information in the abstracts useful. We strive to be your best partner in DP/AI education.
Oral Abstracts
Technical Performance of FDA-Cleared Medical-Grade Displays versus Consumer-Off-The-Shelf and Professional-Grade Displays
Jacob T. Abel1, David S. McClintock1
1Department of Pathology, University of Michigan, Ann Arbor, MI, USA.
E-mail: [email protected]
Background: As currently defined, the display (monitor) is an integral part of the digital pathology pixel pathway. During FDA validation studies, vendors have selected medical-grade (MG) over consumer-off-the-shelf (COTS) or professional-grade (PG) displays. MG and PG displays are reputed to achieve greater luminance (light emitted per square area), better luminance uniformity (control), and more accurate color representation as compared to COTS displays, albeit at a much higher cost. We present data comparing the characteristics of the (to-date) two FDA-cleared MG displays (Dell MR2416, Philips PP27QHD) with three COTS and four PG displays, along with discussion regarding how this could impact display selection for primary diagnosis. Methods: Absolute luminance, luminance uniformity, and color measurements were taken with an X-Rite PANTONE i1Basic Pro 2 spectrophotometer using DisplayCAL software. The displays were calibrated to 250 cd/m2, and 12 sets of luminance/color measurements were taken over a period of 4 weeks. Delta E-2000 values measuring the difference in luminance between the outer edges and the center of the display were calculated to assess uniformity. Calibrated color accuracy measurements were assessed using a 490-color test panel. Statistical analysis was performed in R (R Core Team, 2021; https://www.R-project.org/). Results: Boxplots of median luminance, uniformity, and calibrated color accuracy, along with their 95% confidence intervals, are shown in Figure 1a, b, and c, respectively. For all characteristics, an ideal measurement is close to 0.
Neither one display nor one class of display (MG, PG, or COTS) performed the “best.” PG displays frequently matched or exceeded MG displays in the different measurement categories, while COTS displays fared generally worse than MG displays in luminance, equivalent in color accuracy, and not significantly different in uniformity. Conclusions: MG displays' performance, while competitive, was at times matched or exceeded by non-MG displays'. Further evaluation of these technical criteria in multiple displays, in addition to how these factors affect pathologists' performance in a clinical setting, is warranted.
Figure 1: Technical performance of FDA-cleared medical-grade displays versus consumer-off-the-shelf and professional-grade displays
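To make the “ideal measurement is close to 0” idea concrete, here is a minimal sketch of per-patch color-error scoring. It uses the simpler CIE76 ΔE (plain Euclidean distance in CIELAB) as a stand-in for the CIEDE2000 formula used in the abstract, and the Lab values are hypothetical, not the study's measurements:

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colors (CIE76 Delta E).

    The abstract uses the more elaborate CIEDE2000 formula; CIE76 is a
    simplified stand-in illustrating the same idea: 0 is a perfect match,
    and larger values mean a more visible color difference.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

# Hypothetical measurements: reference Lab values from a color test panel
# vs. the values actually measured on a display with a spectrophotometer.
reference = [(50.0, 10.0, -10.0), (70.0, -5.0, 20.0), (30.0, 0.0, 0.0)]
measured  = [(51.0, 11.0, -9.0),  (69.0, -4.0, 22.0), (30.0, 0.0, 1.0)]

errors = [delta_e_cie76(r, m) for r, m in zip(reference, measured)]
print(round(median(errors), 3))  # → 1.732
```

Repeating this over a full test panel and many measurement sessions yields the per-display error distributions summarized by the boxplots in Figure 1.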
How a Crisis Creates Regulatory Opportunities
Joachim H. Schmid1, Esther Abels2
1Digital Pathology R&D, Roche Tissue Diagnostics, Santa Clara, CA, USA, 2Visiopharm, Hoersholm, Denmark.
E-mail: [email protected]
Background: The mission of the Regulatory and Standards Taskforce (R&S TF) is to advance digital pathology by bringing clarity to its regulatory pathway (including its evolution) and creating awareness thereof, working toward the development and adoption of standards, and promoting interoperability in digital pathology for clinical use. Collaboration between Digital Pathology Association and Alliance for Digital Pathology: Both aim to move digital pathology forward and ultimately improve patient care. One might wonder what the differences between the two are; here, we explain those differences and how the two are complementary. The R&S TF has its role in the competitive space and focuses mainly on (1) developing general principles for regulatory clearance, (2) creating clarity around regulations for digital pathology devices, and (3) aligning around standards such as Digital Imaging and Communications in Medicine (DICOM) and color, to facilitate regulatory pathways for medical devices. The Alliance has its role in the precompetitive space and focuses mainly on regulatory science, with the aim to (1) account for the patient perspective by including patient advocacy, (2) investigate and develop methods and tools for the evaluation of effectiveness, safety, and quality to specify risks and benefits in the precompetitive phase, (3) help strategize the sequence of clinically meaningful deliverables, (4) encourage and streamline the development of ground-truth data sets for machine learning model development and validation, and (5) clarify regulatory pathways by investigating relevant regulatory science questions. Opportunities for Digital Pathology Association and Alliance to Collaborate on Medical Device Development Tools: A medical device development tool (MDDT) can be used in the development and evaluation of medical devices.
The MDDT program provides benefits such as (1) increased predictability in device evaluation and regulatory decision-making; (2) efficiency, because multiple sponsors can use the tool to show the safety and effectiveness of their devices; (3) bridging the gap between R&D and regulatory science; and (4) collaboration in a noncompetitive setting, where parties can work together to expedite the development, validation, and use of an MDDT. Accomplishments and Roadmap: Standardization, such as of image standards and displays, plays a critical role in the development of artificial intelligence (AI) algorithms. We are focusing on supporting ongoing standardization efforts and participate in Integrating the Healthcare Enterprise and DICOM Working Group 26 (WG26) as well as the International Electrotechnical Commission WG 51 (medical image display systems, evaluation methods, and on-site testing). In addition, we actively contributed to the first Virtual Connectathon at Pathology Visions 2020, which led to an improved understanding of standards. General Principles for Verification and Validation in Modification to a Software as a Medical Device Based on Artificial Intelligence: In the USA, a modification to a Software as a Medical Device that introduces a new risk that could cause harm, or that requires a new control measure to prevent harm, requires a new premarket submission. Standards such as DICOM, as well as how the specifications of the originally cleared/authorized/approved device are described, could improve the regulatory process. Remote Sign-Out Using Whole-Slide Imaging Devices: With the current COVID-19 pandemic, CMS has issued a waiver that allows for remote sign-out of cases, and the FDA has provided guidance that enables remote use of digital pathology systems not cleared for this type of use.
The DPA's ultimate goal is to use Real-World Data (1) to support the practice of remote sign-out beyond the pandemic and (2) to change FDA's guidance “Technical Performance Assessments” to allow an open system approach. The DPA has executed a survey to describe the current landscape of WSI remote use. As a next step, a clinical study is being set up to generate valid Real-World Evidence on the performance of remote case sign-out in the clinical setting. The data will be analyzed for concordance between the remote sign-out diagnosis and a subsequent quality control second read.
References
- Marble HD, Huang R, Dudgeon SN, Lowe A, Herrmann MD, Blakely S, et al. A regulatory science initiative to harmonize and standardize digital pathology and machine learning processes to speed up clinical innovation to patients. J Pathol Inform 2020;11:22.
- Qualification of Medical Device Development Tools: Guidance for Industry, Tool Developers, and Food and Drug Administration Staff. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Devices and Radiological Health; August 10, 2017.
- Center for Clinical Standards and Quality/Survey & Certification Group Memorandum; 26 March, 2020.
- Enforcement Policy for Remote Digital Pathology Devices during the Coronavirus Disease 2019 (COVID-19) Public Health Emergency: Guidance for Industry, Clinical Laboratories, Healthcare Facilities, Pathologists, and Food and Drug Administration; April 2020.
Decoding Heterogeneous Tumor Magnetic Resonance Signals: Magnetic Resonance Histology and Cytometric Feature Mapping Connect Two-Dimensional Pathology and in vivo Magnetic Resonance Imaging of Murine Sarcomas
Stephanie J. Blocker1, James Cook1, Yvonne M. Mowery2, Jeffrey I. Everitt3, Yi Qi1, Kathryn Hornburg1, Gary P. Cofer1, Fernando Zapata1, Alex M. Bassil2, Cristian T. Badea1, David G. Kirsch2, G. Allan Johnson1
1Department of Radiology, Duke University Medical Center, Durham, North Carolina, USA, 2Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina, USA, 3Department of Pathology, Duke University Medical Center, Durham, North Carolina, USA.
E-mail: [email protected]
Background: A critical barrier that faces tumor magnetic resonance (MR) imaging is the disconnect between in vivo imaging and histological “gold standards” of assessment. This study details the construction of a pipeline for registration of in vivo MR, ex vivo MR, and pathology slides, as well as novel methods for correlative studies of histological features and MR signal. Methods: In a pilot preclinical study of genetically engineered murine soft tissue sarcomas (n = 10), multicontrast in vivo MR images of the tumor-bearing hind limbs were acquired, followed by ex vivo MR histology (MRH) of the fixed tissue. Paraffin-embedded limb cross-sections were stained with hematoxylin and eosin, digitized, and registered to MR images. Quantitative maps of cytometric features from histology slides were derived using a multistep nuclear segmentation protocol and directly compared to registered MR images. Results: The superior spatial resolution of MRH images (50 μm, isotropic) provided fine structural detail, which was crucial for registering in vivo MR images (100 μm2 in-plane, nonisotropic) and histological images (0.25 μm2, single-plane). Automated nuclear segmentation (>600,000 nuclei/section) resulted in maps of 48 nuclear and cytometric features. Automated correlative studies of feature maps and registered MR images delineated relationships between histological features and MR signal, including apparent diffusion coefficient and T2* maps. Conclusions: We have constructed infrastructure for registering and quantitatively comparing in vivo tumor MR with traditional histopathology. In doing so, we have created a platform for studies at scale to elucidate the tissue properties that define heterogeneous tumor MR signal.
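A minimal sketch of how per-nucleus measurements can be aggregated into a coarse cytometric feature map of the kind the abstract describes. The tiny labeled mask, the tile size, and the single feature (nuclear area) are illustrative stand-ins for the authors' 48-feature, >600,000-nucleus pipeline:

```python
import numpy as np

def nuclear_area_map(labels, tile=4):
    """Aggregate per-nucleus areas from a labeled mask into a coarse
    feature map, one value per tile.

    `labels` is an integer array where 0 is background and each nucleus
    has a unique positive id -- a simplified stand-in for the multistep
    nuclear segmentation protocol in the abstract.
    """
    h, w = labels.shape
    out = np.zeros((h // tile, w // tile))
    # Per-nucleus pixel counts (areas); index 0 is background.
    areas = np.bincount(labels.ravel())
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = labels[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            ids = np.unique(patch)
            ids = ids[ids > 0]
            # Mean area of the nuclei touching this tile (0 if none).
            out[i, j] = areas[ids].mean() if ids.size else 0.0
    return out

# Toy 8x8 mask with two nuclei.
mask = np.zeros((8, 8), dtype=int)
mask[1:3, 1:3] = 1          # nucleus 1: area 4
mask[5:8, 5:8] = 2          # nucleus 2: area 9
fmap = nuclear_area_map(mask, tile=4)
print(fmap)                 # 2x2 map with the two mean areas on the diagonal
```

A map like this, computed per feature at the MRH resolution, is what can then be compared voxel-by-voxel against the registered MR images.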
Going Beyond Sight, Revealing Multiplex Biology
Jared K. Burks1
1Department of Leukemia, The University of Texas MD Anderson Cancer Center, Houston, TX, USA.
E-mail: [email protected]
Background: Multiplexed tissue imaging that supports a comprehensive understanding of cellular organization is the new normal: it reveals the biology of the tissue ecosystem, exposes opportunistic disease progression, and helps explain successful or failed drug treatments in creating and supporting personalized medicine. Methods: Digital pathology is a required component for these applications and opens the door to standardized quantitative methods for these tissues. This is a massive expansion of the field of pathology, the science of the causes and effects of diseases. This expansion is modernizing pathology and incorporating immunology and spatial and higher-order mathematics, all while remaining rooted in the world of histotechnology. Conclusions: Why is any of this important, and is it worth the promise? What are the multiplexed options, and how do they compare to the nonimaging alternatives? What can really be accomplished with these technologies, and how complicated are they? Ultimately, what does it all mean, why is it worth the effort, and where are the pitfalls? By utilizing digital pathology data, one can choose the technology or assay that is most suited to the research question and then, utilizing classical histotechnology methods, bring these data together to answer robust and unique medical questions. The limitations of the human eye prohibit it from accomplishing what is possible digitally, once one is aware of what is possible and understands the unique nature of cellular organization.
Pixel-Wise Comparison of Whole-Slide Imaging Viewers
Wei-Chung Cheng1, Samuel Lam1, Jocelyn Liu1, Paul Lemaillet1
1Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, USA. E-mail: [email protected]
Background: The whole-slide imaging (WSI) viewer is the digital component of a WSI system that first interprets the WSI file generated by the scanner component and then reproduces the image on the display component. Since WSI viewers operate exclusively in the digital domain, showing equivalence of a third-party WSI viewer to the reference is the easiest pathway if identical images can be reproduced at the pixel level. Comparing images at the pixel level requires accurate image registration within the viewer application. When the registration is done by a human controlling the graphical user interface, the tedious task can be time-consuming and therefore limit the sample size required for regulatory purposes. Methods: In this study, a software tool was developed for determining the pixel-level equivalence between two viewers. Given two viewers opening the same digital slide, the tool automatically registers the image pair by a series of zooming and panning operations conducted by sending keyboard and mouse events to the graphical user interface. The differences between the two registered images are then computed for each pixel. Experiments were conducted to test three freely available third-party WSI viewers with 100 regions of interest. Results: The results show that some WSI viewers exhibit greater discrepancies, which originate from not only the color processing and image compression pipeline but also the tile stitching algorithm. Conclusions: The tool was demonstrated to be useful for measuring pixel-level equivalence between different viewer products and also detecting programming errors in the development phase.
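The core of the comparison step, once two viewer renderings are registered, is a per-pixel difference. Here is a minimal sketch (not the FDA tool itself; the toy images and the two summary statistics are hypothetical choices):

```python
import numpy as np

def pixel_equivalence(img_a, img_b):
    """Compare two registered viewer renderings pixel by pixel.

    Returns the fraction of pixels that differ and the maximum absolute
    per-channel difference -- a simplified stand-in for the
    registration-plus-diff pipeline described in the abstract
    (registration is assumed already done).
    """
    diff = np.abs(img_a.astype(int) - img_b.astype(int))
    differing = diff.sum(axis=-1) > 0
    return differing.mean(), int(diff.max())

# Hypothetical 2x2 RGB renderings of the same region of interest.
ref = np.array([[[120, 80, 60], [200, 10, 0]],
                [[0, 0, 0],     [255, 255, 255]]], dtype=np.uint8)
other = ref.copy()
other[0, 1] = [198, 10, 0]   # one pixel shifted, e.g. by recompression

frac, max_diff = pixel_equivalence(ref, other)
print(frac, max_diff)        # 0.25 of pixels differ, by at most 2 levels
```

Zero on both statistics over all regions of interest would indicate pixel-level equivalence; nonzero values localize where color processing, compression, or tile stitching diverge.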
Artificial Intelligence 102: Drowning in Data
Stanley Cohen1, 2, 3
1Department of Pathology and Center for Computational Pathology (Emeritus), Rutgers, Newark, NJ, USA, 2Department of Pathology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA, 3Department of Pathology, Kimmel School of Medicine, Jefferson University, Philadelphia, PA, USA.
E-mail: [email protected]
Background: It has become increasingly clear that the need for huge datasets represents a bottleneck for applications of artificial intelligence, such as distinguishing between classes. Another problem is that, in most cases, there are substantially fewer annotated target lesions than normal tissues for comparison. While deep-learning algorithms have successfully tackled problems previously thought to be solely the domain of humans (thus, the term “artificial intelligence”), this dependence on massive amounts of data is one of the major ways in which computer models of the brain are deficient. Methods: The various strategies by which humans learn from small datasets are analyzed. Humans utilize large numbers of specialized neural nets arranged in both linear and parallel fashion, each solving a restricted classification problem. They rely on local error corrections such as Hebbian learning (“neurons that fire together wire together”) as compared to the nonlocal back-propagation used in most implementations of artificial neural nets. They incorporate reinforcement into their learning strategy. For these reasons, toddlers are easily capable of classifying objects after one or only a few examples. Show them a banana once and they will recognize the next example of a banana. Results: Based upon the above considerations, the best currently available strategies are found to be transfer learning, zero-shot learning and Siamese networks, one-class models, and generative networks. In addition, these strategies provide methods for dealing with weakly annotated images as well as class imbalance. It is shown that reinforcement learning can be applied to image classification by replacing loss functions with reward functions, providing additional approaches.
Conclusion: It is found that neither an extensive mathematical background nor advanced programming skills are needed to make this subject accessible to pathologists with limited background in the nuts and bolts of machine learning. However, some familiarity with the basic principles of deep learning will be helpful and will be briefly reviewed. The purpose of this presentation is to enable better communication and interaction between pathologists and computer scientists, as well as to understand the limitations of machine learning, in general.
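The Siamese-network strategy mentioned above can be sketched as nearest-neighbor classification in a learned embedding space: the network maps same-class images close together, and a query is labeled by its nearest labeled example. In this toy sketch, the embeddings are hypothetical precomputed vectors standing in for the network's output:

```python
import numpy as np

def one_shot_classify(query, support, labels):
    """Nearest-neighbor classification in an embedding space.

    A Siamese network learns an embedding where same-class images lie
    close together; at inference time, a query is given the label of its
    nearest support example -- which is why a single example per class
    (the one-shot setting) can suffice.
    """
    dists = [np.linalg.norm(query - s) for s in support]
    return labels[int(np.argmin(dists))]

# One labeled embedding per class (hypothetical network outputs).
support = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]
labels = ["tumor", "normal"]

print(one_shot_classify(np.array([0.8, 0.2]), support, labels))  # → tumor
```

This is the mechanism that lets a small annotated set of target lesions be used directly, without the massive per-class datasets conventional classifiers require.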
Machine Learning Classification of CD8-Topology on 4000 Tumor Biopsies and Resections via Artificial Intelligence-Powered Image Analysis of CD8 Immunohistochemistry Slides
George Lee1, Daniel Cohen2, John Wojcik2, Benjamin Chen2, Jimena Trillo-Tinoco2, Scott Ely2, Akshita Gupta1, John Loffredo2, Kezi Unsal-Kacmaz2, Shaun O'Brien2, Mustimbo Roberts2, Megan Wind-Rotolo2, Robin Edwards2, Vipul Baxi1
1Department of Informatics and Predictive Sciences, Bristol Myers Squibb, Princeton, NJ, USA, 2Department of Translational Medicine, Bristol Myers Squibb, Princeton, NJ, USA.
E-mail: [email protected]
Background: CD8-topology is an increasingly relevant area of research shown to stratify patient outcomes in solid tumors, based on spatial CD8+ cell patterns. Understanding the role of CD8-topology in different clinical settings may present more personalized treatment options for patients. Conducting such studies in a reproducible way is challenging because manual interpretation of these complex patterns is subject to significant inter-reviewer variability. However, machine learning (artificial intelligence [AI]) can quantify CD8-topology in a biologically meaningful, reproducible, and scalable way. We have demonstrated the use of such AI methodologies to assess CD8-topology in 4162 clinical and commercial CD8 immunohistochemistry slides for melanoma (MEL), head-and-neck squamous cell carcinoma (HNSCC), and urothelial carcinoma (UC). Methods: We trained random forest AI classifiers to predict pathologist-assigned inflamed, excluded, and cold patterns on CD8 immunohistochemistry slides using parenchymal and stromal CD8 measurements from PathAI's deep-learning platform. For validation, multiple pathologists scored CD8-topology in an independent set of 140 images, and we compared pathologist–pathologist concordance with pathologist–AI concordance. Results: Data from the validation set showed a range of interpathologist concordances measured by Cohen's kappa to be k = 0.65 for MEL, k = 0.86 for HNSCC, and k = 0.57 for UC. The AI model performed similarly to pathologists, showing k = 0.79 for MEL, k = 0.66 for HNSCC, and k = 0.49 for UC. Conclusions: These results suggest that AI can accurately assess CD8-topology on multiple tumor types while avoiding interpathologist variation from manual scoring. Future work aims to leverage this capability to more efficiently study CD8-topology and its role in treatment outcomes and mechanisms of action.
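Cohen's kappa, the concordance statistic used above, corrects raw agreement for the agreement expected by chance. A self-contained sketch with hypothetical topology calls (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected if both raters labeled at random
    according to their own category frequencies.
    """
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical CD8-topology calls on 10 slides by two readers
# (e.g., pathologist vs. AI model), using the abstract's three patterns.
a = ["inflamed", "inflamed", "excluded", "cold", "cold",
     "inflamed", "excluded", "excluded", "cold", "inflamed"]
b = ["inflamed", "excluded", "excluded", "cold", "cold",
     "inflamed", "excluded", "cold", "cold", "inflamed"]
print(round(cohens_kappa(a, b), 3))  # → 0.701
```

Comparing pathologist–pathologist kappa with pathologist–AI kappa, as in the validation set above, shows whether the model's disagreements are within the range of human inter-reader variability.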
Experience of Deploying Digital Pathology in the Department of Pathology of Taipei Veterans General Hospital
1Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, 2Department of Pathology, Yang Ming University, Taipei, Taiwan.
E-mail: [email protected]
After the FDA cleared the Philips IntelliSite Pathology Solution for primary diagnosis, more and more innovative pathology laboratories are handling 100% of their current pathology workload with digital pathology. However, large-scale and worldwide adoption may await a few remaining solutions, including next-generation scanning systems, improved viewing software, solid infrastructure, and an open versus a closed system approach. Full acceptance of the power of artificial intelligence (AI) could well be the biggest push of all. In this section, we discuss the pros and cons of transforming primary diagnosis from traditional microscopic diagnosis to digital whole-slide images and share our experience and considerations in planning the implementation of a digital whole-slide system and the AI applications being developed in our department. Finally, we hope that all attendees can find the best way and process for their future digital pathology practice.
“Success Breeds Success”: An Artificial Intelligence Approach to Reviewing Artificial Intelligence Articles in Pathology
George Cernile1, Taralyn Schwering1, Yulia Borecki1, Shana Kazemlou1, Trevor Heritage2, Mark C. Lloyd3
1Department of Clinical Informatics, Inspirata Canada, Inc., Toronto, Ontario, Canada, 2Department of Clinical Informatics, Inspirata, Inc., Tampa, FL, USA, 3Department of Digital Pathology, Inspirata, Inc., Tampa, FL, USA. E-mail: [email protected]
Background: The utility of artificial intelligence (AI) in pathology is expanding at staggering rates. Natural language processing (NLP) is an ideal tool to reliably and methodically review vast volumes of data. We reviewed hundreds of thousands of relevant articles and established connections and insights between image analysis (IA) and pathology using NLP/AI technologies. Methods: Using the E-Fetch PubMed article metadata tool, we identified relevant articles with 205 pathology- and AI-related key terms. Our NLP engine was tuned to recognize terms in context with inferences, to automatically abstract key clinical and IA concepts from the unstructured text in over 200,000 scholarly articles from 2000 to present, and to filter to the 74,151 reviewed articles with over 50 connections between the pathology and AI terms. Results: Over 200,000 published articles revealed thousands of connections between IA and pathology concepts. Visualization of these concept connections facilitated observations of at least 10 core research focuses (e.g., immunohistochemical: breast pathology; mitoses: disease progression). Conclusions: Rapid increases in published articles make it challenging to stay “up to date” in our ever-expanding body of medical research. Using NLP, it was possible to structure the text of hundreds of thousands of articles into minable data elements in seconds. The relationships between the structured data concepts allowed the authors to establish trends, for example, how often specific clinical concepts were combined in a publication, such as (1) HER2, algorithm, and breast or (2) CAD, pulmonary, and CT. Learning when clinical concepts are combined across so many publications would not be feasible with manual review. Furthermore, the analysis exposed existing opportunities to further apply AI to underevaluated pathological topics.
We share our findings to demonstrate the progress of AI in pathology and the understudied areas, which could be the focus for on-going work.
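The connection-counting step can be illustrated with a toy sketch. The term lists, the corpus, and the plain word-matching below are hypothetical simplifications of the authors' context-aware NLP engine:

```python
from collections import Counter

# Hypothetical key terms from the two vocabularies in the abstract.
PATHOLOGY_TERMS = {"her2", "breast", "mitoses", "pulmonary"}
AI_TERMS = {"algorithm", "cad", "deep-learning", "ct"}

def concept_connections(articles):
    """Count how often a pathology term and an AI term co-occur in the
    same article -- a toy version of the NLP pipeline's connection graph.
    """
    pairs = Counter()
    for text in articles:
        words = set(text.lower().split())
        for p in PATHOLOGY_TERMS & words:
            for a in AI_TERMS & words:
                pairs[(p, a)] += 1
    return pairs

# Hypothetical three-article corpus.
corpus = [
    "a her2 scoring algorithm for breast biopsies",
    "cad for pulmonary nodules on ct",
    "her2 algorithm validation in breast cohorts",
]
counts = concept_connections(corpus)
print(counts[("her2", "algorithm")], counts[("pulmonary", "cad")])  # → 2 1
```

Scaled to hundreds of thousands of articles, the resulting pair counts are the edges whose visualization surfaces both the core research focuses and the underevaluated topics noted above.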
Augmented Reality Microscopy: Clinical Applications and Future Predictions
Liron X. Pantanowitz1, Gabriel Siegel2
1Department of Anatomical Pathology, University of Michigan, Ann Arbor, MI, USA, 2Augmentiqs, Teradion, Israel.
E-mail: [email protected]
Background: Augmented reality microscopy (ARM) is a new technology that converts the conventional light microscope into a platform for digital pathology applications while avoiding the need to first photograph or digitize slides. ARM allows a pathologist to view a glass slide through the eyepiece of the microscope and simultaneously view computer-generated output, such as annotations, morphometrics, image analysis, and artificial intelligence algorithms, projected as an overlay directly on the optical view of the glass slide. ARM further enables the functionality of pathology informatics by potentially collecting previously unattainable metadata from the microscope stage. Such metadata include objective magnification, correlating the slide label with the histopathology image, and tracking the XY movement of the glass slide on the stage. Methods: ARM was the subject of several recent studies, including its use as a tool for real-time telepathology, Ki-67 scoring of neuroendocrine tumors with QuPath algorithms, quantification of nonalcoholic fatty liver disease utilizing deep-learning AI algorithms, and tracking movement of the slide on the stage during routine pathology workflow to determine best pathology practices. Results: As opposed to conventional digital pathology technologies such as whole-slide imaging, ARM is quicker to use, provides for the real-time and seamless integration of algorithms without the need to first acquire digital images, may be cheaper than high-capacity whole-slide scanners, and does not require special technical skills to operate. Conclusion: The microscope is still extensively used by pathologists and will likely maintain certain optical and workflow advantages over screen-based diagnosis. ARM can therefore fill the role of a bridging technology, which both maintains and digitizes microscope-based workflow.
Looking to the future, ARM is a technology that can allow pathology laboratories to utilize both microscopy and WSI in the training and deployment of AI algorithms, synchronize diagnostic data with the laboratory information system, and automate additional mundane tasks within existing pathology workflow.
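The overlay step that ARM performs optically corresponds to simple alpha compositing of the computer-generated output over the field of view. A minimal digital sketch (the field image, annotation, and opacity values are hypothetical):

```python
import numpy as np

def blend_overlay(field, overlay, alpha_mask):
    """Alpha-blend a computer-generated overlay onto a field-of-view image.

    In ARM hardware, the overlay is injected into the optical path, but
    the compositing arithmetic is the same:
    output = (1 - a) * field + a * overlay, with `alpha_mask` in [0, 1]
    marking where annotations are drawn.
    """
    a = alpha_mask[..., None]           # broadcast alpha over RGB channels
    out = (1.0 - a) * field + a * overlay
    return out.astype(np.uint8)

# Hypothetical 2x2 gray field with a half-opaque green annotation pixel.
field = np.full((2, 2, 3), 200, dtype=float)
overlay = np.zeros((2, 2, 3))
overlay[0, 0] = [0, 255, 0]
mask = np.zeros((2, 2))
mask[0, 0] = 0.5

print(blend_overlay(field, overlay, mask)[0, 0])  # half field, half green
```

Where the mask is zero, the pathologist sees the unmodified optical view; where it is nonzero, the algorithm's output (a segmentation boundary, a morphometric label) appears superimposed in register with the tissue.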
| Virtual Staining Can Catalyze the Adoption of Digital Pathology|| |
Yair Rivenson1,2, W. Dean Wallace3, Aydogan Ozcan1,4,5,6
1Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA, 2Pictor Labs, Inc., Los Angeles, CA, USA, 3Department of Pathology and Laboratory Medicine, Keck School of Medicine of USC, Los Angeles, CA, USA, 4Bioengineering Department, University of California, Los Angeles, CA, USA, 5California NanoSystems Institute, University of California, Los Angeles, CA, USA, 6Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, USA.
E-mail: [email protected]
Background: The emergence of digital pathology has created new possibilities for computer-aided diagnosis. However, digitizing pathology still carries high, but not directly reimbursable, infrastructure costs. We wish to discuss the role that novel technological advancements in the field of virtual staining can play in catalyzing the adoption of digital pathology by disrupting the histological staining process, a mostly overlooked, yet critical, part of the histopathology workflow. Methods: Virtual staining uses deep learning to digitally stain a label-free image of a tissue section. The label-free tissue section is imaged with a contrast-generating microscopic imaging technique, such as autofluorescence, quantitative phase imaging, or multispectral or bright-field imaging. The framework relies on the availability of accurately registered label-free images (inputs) and histologically stained images (labels) and uses a generative adversarial network to facilitate the training of a deep neural network. Results: Virtual staining from label-free tissue was demonstrated in blind studies to be equivalent in quality (or sometimes subjectively superior) to standard histological staining techniques. In a second study, the same conclusion was reached for diagnostic purposes by “blinded” board-certified pathologists. Conclusions: Virtual staining has many advantages, such as tissue preservation, enabling multiple (virtual, or physical and virtual) stains on the same tissue section, and stain color standardization, thus eliminating intra- and inter-laboratory staining variability. Beyond the direct benefits to patients, the transition away from chemical staining will substantially reduce histology laboratory operating costs, staff exposure to chemicals, and supply chain dependence.
| Artificial Intelligence-Driven Pathomics: The Next Diagnostic Frontier|| |
Joel H. Saltz1, Tahsin Kurc1, Dimitris Samaras2, John S. van Arnam1, Rajarsi Gupta1
1Department of Biomedical Informatics, Stony Brook Medicine, Stony Brook, NY, USA, 2Department of Computer Science, Stony Brook University, Stony Brook, NY, USA.
E-mail: [email protected]
Background: Deep-learning methods are having a profound impact on the performance, efficiency, and reliability of digital pathology analytic methods. The generation and use of whole-slide images (WSIs) are becoming routine and inexpensive, and we can feasibly expect that almost all biopsy and excision specimens from cancer patients will be digitized and available as WSIs in the near future. Many clinical and epidemiologic studies are extensively using WSIs alongside the greater acceptance and adoption of digital pathology in clinical settings. By leveraging advances in computational power and speed and the development of open-source convolutional network architectures, it is now possible to reliably analyze complex WSIs of cancer to characterize areas containing tumor-infiltrating lymphocytes, identify regions of different types and subtypes of tumors, and segment and classify many kinds of cells. These analytic methods are increasingly being adopted to provide nuanced information about cancer phenotype through quantitative pathology, leading to the emergence of Pathomics for clinical research and surveillance studies. Methods: We have developed a variety of open-source Pathomics tissue analytics models for WSIs that perform (1) quantification of tumor-infiltrating lymphocytes for multiple tumor types, (2) nuclear segmentation for multiple tumor types, (3) segmentation of breast, prostate, non-small cell lung, and pancreatic cancer, and (4) segmentation and subclassification of histologic subtypes of prostate and non-small cell lung adenocarcinoma. We have also generated a variety of related data products. Results: Programs, models, and generated data products for our published and formally validated models (e.g., tumor-infiltrating lymphocyte analysis, breast cancer tumor segmentation, and nuclear segmentation for multiple tumor types) can be found on GitHub at https://github.com/SBU-BMI/histopathology_analysis.
Additional programs, models, and datasets will be added to this site after being published and validated. Conclusions: Pathomics analyses will play increasingly pivotal roles in clinical research, epidemiologic, and surveillance studies. Publicly accessible open-source code and models capable of carrying out core Pathomics tasks have the potential to dramatically advance our understanding of human pathobiology on a large scale alongside the increasing availability of WSIs. As methods continue to emerge, validated data products can be used to refine and develop more advanced methods and ensemble models. In this abstract, we describe available open-source code and deep-learning computer vision models to perform fundamentally important tasks with Pathomics.
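As a minimal illustration of the patch-level quantification such models perform, the sketch below (our own simplification, not the published SBU-BMI code) turns per-patch classifier probabilities into a slide-level tumor-infiltrating lymphocyte fraction:

```python
def til_fraction(patch_probs, threshold=0.5):
    """Fraction of tissue patches classified as lymphocyte-rich.

    patch_probs: iterable of per-patch TIL probabilities from a
    patch-level classifier; None marks background (no-tissue)
    patches, which are excluded from the denominator.
    The 0.5 threshold is an illustrative default, not a
    published operating point.
    """
    tissue = [p for p in patch_probs if p is not None]
    if not tissue:
        return 0.0
    positive = sum(1 for p in tissue if p >= threshold)
    return positive / len(tissue)
```

Aggregates like this one are what make patch classifiers usable as quantitative, slide-level Pathomics features.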
| References|| |
- Abousamra S, Hou L, Gupta R, Chen C, Samaras D, Kurc T, et al. Learning from thresholds: Fully automated classification of tumor infiltrating lymphocytes for multiple cancer types. ArXiv e-prints 2019. Available from: https://arxiv.org/abs/1907.03960 [arXiv:1907.03960 [eess.IV]].
- Hou L, Gupta R, Van Arnam JS, Zhang Y, Sivalenka K, Samaras D, et al. Dataset of segmented nuclei in hematoxylin and eosin stained histopathology images of ten cancer types. Sci Data 2020;7:185.
- Le H, Gupta R, Hou L, Abousamra S, Fassler D, Torre-Healy L, et al. Utilizing automated breast cancer detection to identify spatial distributions of tumor-infiltrating lymphocytes in invasive breast cancer. Am J Pathol 2020;190:1491-504.
- Le H, Samaras D, Kurc T, Gupta R, Shroyer K, Saltz J, editors. Pancreatic Cancer Detection in Whole Slide Images Using Noisy Label Annotations. Medical Image Computing and Computer Assisted Intervention – MICCAI. Cham: Springer International Publishing; 2019.
- Saltz J, Gupta R, Hou L, Kurc T, Singh P, Nguyen V, et al. Spatial organization and molecular correlation of tumor-infiltrating lymphocytes using deep learning on pathology images. Cell Rep 2018;23:181-93.e7.
| KimiaNet: Training a Histopathology Deep Network from Scratch|| |
Abtin Riasatian1, Morteza Babaie1, Danial Maleki1, Shivam Kalra1,2, Mojtaba Valipour3, Sobhan Hemati1, Manit M. Zaveri1, Amir Safarpoor1, Sobhan Shafiei1, Mehdi Afshari1, Maral Rasoolijaberi1, Milad Sikaroudi1, Mohd Adnan1, Sultaan Shah2, Charles Choi2, Savvas Damaskinos2, Clinton Campbell4, Phedias Diamandis5, Liron Pantanowitz6, Ali Ghodsi3, Hany Kashani1, H. R. Tizhoosh1,7
1Kimia Lab, University of Waterloo, Waterloo, ON, Canada, 2Huron Digital Pathology, St. Jacobs, ON, Canada, 3School of Computer Science, University of Waterloo, Waterloo, ON, Canada, 4Stem Cell and Cancer Research Institute, McMaster University, Hamilton, ON, Canada, 5Laboratory Medicine and Pathobiology, University of Toronto, ON, Canada, 6Department of Pathology, University of Pittsburgh Medical Center, PA, USA, 7Vector Institute, Toronto, ON, Canada. E-mail: [email protected]
Deep embeddings, or feature vectors, provided by pretrained deep artificial neural networks have become a dominant source of image representation in digital pathology. Their contribution to image analysis performance can be improved through fine-tuning. One might even train a deep network from scratch on histopathological images, a highly desirable option that is generally impeded in pathology by the lack of labeled images and the computational expense. We propose KimiaNet, which employs the DenseNet topology with four dense blocks, fine-tuned and trained with histopathological images in different configurations. We used more than 240,000 image patches of 1000 × 1000 pixels, acquired at ×20 through our proposed “high-cellularity mosaic” approach, to enable the use of the weak labels of 7126 whole-slide images of formalin-fixed paraffin-embedded human pathology samples publicly available through The Cancer Genome Atlas (TCGA) repository. We tested KimiaNet using three public datasets, namely TCGA, endometrial cancer images, and colorectal cancer images, by evaluating search and classification performance when the corresponding features of different networks are used for image representation.
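The idea behind a high-cellularity mosaic can be sketched as follows. This is our own simplified illustration, not KimiaNet's actual implementation: the function names, the keep-fraction heuristic, and the assumption that a cellularity score per patch is already available (e.g., from nuclear pixel counting) are all ours.

```python
def high_cellularity_mosaic(patches, keep_fraction=0.1):
    """Select the most cellular patches from one slide so the slide's
    weak (slide-level) label can plausibly be transferred to them.

    patches: list of (patch_id, cellularity_score) pairs; how the
    score is computed is outside this sketch.
    """
    ranked = sorted(patches, key=lambda p: p[1], reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return [pid for pid, _ in ranked[:k]]

def weakly_labeled_dataset(slides):
    """slides: dict mapping slide_label -> list of (patch_id, score).
    Returns (patch_id, label) training pairs for a patch classifier."""
    dataset = []
    for label, patches in slides.items():
        for pid in high_cellularity_mosaic(patches):
            dataset.append((pid, label))
    return dataset
```

The design intuition is that highly cellular patches are the ones most likely to actually contain the tissue class named by the slide-level label, so restricting training to them reduces label noise.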
| Skip the Slide: CHiMP for Practical Digital Histology without Physical Slides|| |
1Department of Laboratory Medicine, Yale School of Medicine, New Haven, CT, USA.
E-mail: [email protected]
Background: Whole-slide imaging (WSI) provides remote viewing capabilities and enables the use of image analysis tools, but it adds cost and work, delays access to diagnostic information, introduces image artifacts, and does not address the shortcomings of traditional processing. Interest in “slide-free” histology techniques stems from the limitations of standard histology: slow, labor-intensive processes associated with artifacts and process errors, requiring technical skill that is increasingly in short supply. However, the general perception is that “slide-free” technologies are prone to being slow or costly and, critically, produce inferior image quality, necessitating traditional processing for definitive diagnosis. Methods: Clearing Histology with MultiPhoton Microscopy (CHiMP) is an emerging technique for digital slide preparation that uses fluorescent imaging via multiphoton laser microscopy of uncut biopsies, producing hematoxylin and eosin (H and E)-like images while avoiding the labor-intensive steps of standard processing. Recent advances in speed and quality have motivated clinical evaluation in a variety of tissues, including clinical studies with kidney and prostate biopsies. Results: CHiMP has produced histological images virtually indistinguishable from standard H and E slides, with fully recognizable histologic features. Relative to standard histology, significant improvements have been observed in processing times, labor and labor complexity, reviewable image volume, three-dimensional perspective, and artifact reduction, resulting in diagnostic interpretations that are at least as accurate as physical slide review. Conclusions: CHiMP shows viability as a replacement for physical slides, providing primary diagnostic quality images with significant time and labor efficiency benefits, and potentiating a much more efficient, intrinsically digital workflow. Clinical data to date illustrate how to achieve practical implementation of this new approach.
| COVID-19: The UK Response|| |
Bethany Jill Williams1,2
1Northern Pathology Imaging Co-Operative, Leeds Teaching Hospitals NHS Trust, Leeds, UK, 2University of Leeds, Leeds, UK.
E-mail: [email protected]
The COVID-19 pandemic has forced pathologists and laboratory professionals to consider working in new ways to meet workforce challenges. In this session, I outline and discuss the national guidance on emergency remote digital reporting published by the Royal College of Pathologists in response to the pandemic. This includes an approach to risk assessment of the home reporting of digital slides and a number of practical resources, including a point-of-use quality assurance test for display screens.
| Digital Pathology in Italy - Pre, During, and After COVID|| |
1Department of Pathology, Cannizzaro Hospital, Catania, Italy.
E-mail: [email protected]
In this presentation, the benefits of the digital pathology (DP) workflow will be highlighted: lean approach, barcoding, and tracking are the words that pathologists should learn to implement the digital workflow. The possibility of remote working over a secure VPN connection allowed us to manage cases and render diagnoses even during the stressful COVID period. The post-COVID DP workflow is characterized by the ability to access cases from anywhere, at any time, with the possibility of using artificial intelligence tools. Suggestions on how to implement a DP workflow, as well as guidelines and regulatory issues, will be addressed.
| Current Condition of Digital and Artificial Intelligence Pathology in Japan|| |
Junya Fukuoka1,2, Yasunari Tsuchihashi3, Tetsuya Tsukamoto4, Ichiro Mori5
1Department of Pathology, Nagasaki University Graduate School of Biomedical Sciences, Nagasaki, Japan, 2Department of Pathology, Kameda Medical Center, Kamogawa, Japan, 3Department of Clinical Pathology Research, Louis Pasteur Center for Medical Research, Kyoto, Japan, 4Department of Pathology, Fujita Health University, Toyoake, Japan, 5Department of Pathology, International University of Health and Welfare, Narita, Japan.
E-mail: [email protected]
The following topics were discussed at the meeting. First, the Japanese Society of Digital Pathology (JSDP) has a 19-year history. The organization is composed of pathologists, data scientists, IT engineers, vendors, pathology technicians, and other academic members interested in digital pathology. JSDP serves as a companion organization to the Japanese Society of Pathology (JSP), and all of its pathology members belong to JSP. Over its history, JSDP has published several white papers and a draft guideline in the areas of telepathology, digital pathology, and artificial intelligence (AI). The second topic was the status of digital pathology in Japan under the COVID-19 pandemic. The effect of COVID-19 in Japan has been moderate compared to the US and EU. In large cities such as the capital, Tokyo, the effect on case numbers for both histology and cytology was large, with a significant decrease in cases from early March to late June; rural areas, however, were not much affected, and remote-diagnosis case volumes there were unchanged from the previous year. The third topic was the current status of AI in pathology in Japan. Four major groups are currently working to establish pathology AI: the Cabinet Office of the Japanese Government, JSP, RIKEN, and NEDO. They aim to create AI-driven hospitals, databases and platforms, explainable models, and models for collaboration between humans and AI. Finally, as a use case in Japan, a digital pathology network initiated by Nagasaki University was introduced. The successful creation of diagnostic and educational networks among 15 institutes throughout Japan was described, and it was emphasized that education has been the key to success. AI has been implemented into clinical practice in that network for scoring tumor cellularity and detecting mycobacteria.
| Poster Abstracts|| |
| Whole-Slide Imaging Remote Access: Single-Center Residents' Experience|| |
Rand Abou Shaar1, Ifeoma N. Onwubiko1, Ashish Mishra1, J. Mark Tuthill1
1Department of Pathology and Laboratory Medicine, Henry Ford Health System, Detroit, MI, USA.
E-mail: [email protected]
Background: The advent of whole-slide imaging (WSI) scanners in recent years has propelled digital pathology to a new level and helped pathologists store, view, and share slides more easily and efficiently. Henry Ford Health System's Pathology and Laboratory Medicine Department has been a pioneer in this realm and is continually integrating more digital pathology tools as part of its lean culture. The Department has successfully utilized digital pathology and WSI in multidisciplinary tumor boards, off-site intraoperative and intradepartmental consultations, performance improvement program presentations, research, and didactics such as digital gross and unknown conferences. This study investigates how WSI remote access impacts pathology residents' onsite work hours, time management, and well-being. Methods: We surveyed all residents (n = 14) before granting them VPN-enabled remote access to WSI, and we followed up with another survey 1 year later. Results: The initial survey showed that 100% of residents used onsite WSI. All residents predicted that remote access would be beneficial, as many stayed after hours to review slides for unknown conferences (79%) and tumor board presentations (64%). The 1-year follow-up survey showed that 57% of the residents (n = 8) knew how to scan slides and 64% found that knowledge helpful. Over the past year, 71% (n = 10) accessed WSI remotely and 69% found it helpful. Remote access was used for reviewing performance improvement program, unknown-conference, and tumor board slides, and for research. When asked about the process, 21% felt that it was complicated [main issues in Figure 1] and 57% felt that additional training and incorporation of WSI into daily activities would be helpful. Conclusion: Granting remote access to WSI improves residents' time management and well-being by reducing onsite time spent reviewing and assigning scanned slides. Ongoing training is needed to harness those benefits.
Based on our resident group's experience, we propose a written protocol with pictures to provide residents with step-by-step training on how to scan and assign scanned slides, access them remotely, and troubleshoot issues; improvements to image quality, storage space, and storage duration to encourage residents to use this technology to build a virtual library; and additional opportunities for WSI integration.
|Figure 1: Whole-slide imaging remote access: Single-center residents' experience|
| Computer-Assisted Approach to Improve the Detection of Tall Cell Variant of Papillary Thyroid Carcinoma|| |
Asmaa M. Aljuhani1, James P. Cronin2, Bradley Zehr3, Raghu Machiraju1, Juan C. Hernandez-Prera4, Anil Parwani3
1Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA, 2Department of Veterinary Biosciences, College of Veterinary Medicine, The Ohio State University, Columbus, OH, USA, 3Department of Pathology, College of Medicine, The Ohio State University, Columbus, OH, USA, 4Moffitt Cancer Center, Anatomic Pathology, University of South Florida, Tampa, FL, USA.
E-mail: [email protected]
Background: Tall cell variant (TCV) is an aggressive subtype of papillary thyroid carcinoma (PTC). In 1976, TCV was defined as a PTC predominantly composed of cells whose height is at least twice their width, called tall cells. The inherent subjectivity of TCV diagnosis and the associated inter-observer variability make it a natural target for objective classification by deep-learning methods. In this work, we develop a high-level TCV classifier as an initial step toward providing an objective TCV diagnosis and enabling robust measurement of cell morphology. Methods: We utilized eight TCV and eight classical PTC hematoxylin and eosin-stained whole-slide images (WSIs) to build a classifier model using deep neural networks (DNNs) inspired by ResNet50. The model focuses on tiles that are relevant to TCV morphologic features by automatically evaluating individual tall cells. We can then associate the percentage of tall cells with tumor regions to predict the case label for a given slide. Results: The model successfully predicts tumor versus normal at ×10 with 92% accuracy and tall cell versus non-tall cell features at ×40 with 75% accuracy. Further assessment was performed on high-magnification tiles to extract morphological features of individual tall cells. These features discriminate tiles with tall cells from other tiles [Figure 1]. Conclusions: We propose a framework that combines a DNN classifier with automatic morphological assessment to create a potential morphological signature for TCV. The success of this model potentially provides a more objective method for an interpretable classification of TCV.
|Figure 1: A combination of two deep neural networks and an automated morphological assessment to classify tall cell variant whole-slide images. The first deep neural network is to classify tumors and normal tiles at ×10. Then, tumor tiles are passed to the second deep neural network to detect tall cell feature tiles at ×40. Morphology features for individual cells on tall cell variant tiles are extracted|
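The morphological rule at the heart of this abstract (a tall cell's height is at least twice its width) is simple enough to state in code. The sketch below is our own illustration: the 30% case-level cutoff is a hypothetical placeholder, not the authors' criterion, and real pipelines would measure heights and widths from segmented nuclei rather than take them as inputs.

```python
def is_tall_cell(height_um, width_um):
    """Tall cell per the classic 1976 definition: height at least
    twice the width."""
    return height_um >= 2.0 * width_um

def predict_tcv(cells, tall_fraction_cutoff=0.3):
    """cells: list of (height_um, width_um) measurements taken from
    tumor tiles. Returns (case_label, tall_fraction). The cutoff is
    an assumed placeholder; published criteria for the required
    proportion of tall cells vary."""
    if not cells:
        return False, 0.0
    frac = sum(1 for h, w in cells if is_tall_cell(h, w)) / len(cells)
    return frac >= tall_fraction_cutoff, frac
```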
| References|| |
- Hawk WA, Hazard JB. The many appearances of papillary carcinoma of the thyroid. Cleve Clin Q 1976;43:207-15.
- He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016. p. 770-8. doi: 10.1109/CVPR.2016.90.
| National Pathology Imaging Co-operative: Scaling Up Digital Pathology and Transforming the UK's Diagnostic Industry through Artificial Intelligence|| |
Daljeet Bansal1, Basharat Hussain1, David Brettle1, Bethany Williams1, David Hogg2, Darren Treanor1, 2, 3, 4
1Leeds Teaching Hospitals NHS Trust, Leeds, UK, 2University of Leeds, Leeds, UK, 3Department of Clinical Pathology, Linköping University, Linköping, Sweden, 4Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden.
E-mail: [email protected]
Background: The National Pathology Imaging Co-operative (NPIC) is an NHS program for the development and evaluation of digital pathology and the use of artificial intelligence (AI) to speed up the diagnosis of diseases such as cancer. The program is led by Leeds Teaching Hospitals NHS Trust and includes a network of nine NHS hospitals, seven universities, and ten industry partners including Leica, Roche, and Sectra. NPIC is part of a UK-wide network of Digital Pathology and Imaging AI Centres of Excellence, supported by £37 m of government funding and £11 m of industry contributions. Methods: The program will consider the end-to-end process of AI development, including deployment of digital pathology, data ethics, data sharing practices, patient and public engagement activities, interoperability and standards (Digital Imaging and Communications in Medicine [DICOM]), quality assurance, clinical validation, and clinical exemplars for the real-world evaluation of digital pathology. Results: NPIC has established an infrastructure in the North of England, created over 25 new jobs, and is working with a number of new partners. The program is deploying digital pathology across NHS laboratories in the North of England; at Leeds Teaching Hospitals, 100% of glass slides are scanned and available for primary diagnosis, amounting to 1400 glass slides and 1 TB of image data per day. The new generation of Leica GT450 scanners has been installed. Finally, progress has been made with region-wide capture of COSD synoptic cancer reports (mTuitive), the National Pathology Exchange system, and migration to a regional Sectra PACS solution. A further five sites will go live by 2021, and the remaining sites will be fully digitized by the end of 2023.
A number of publications highlighting the benefits of digital pathology are available, and further work on the evaluation of quality assurance (QA) tools (scanner calibration tools, QA display tools, stain quantification, and DICOM conformance specification and testing) is in progress. Recruitment of an expert PPI panel, a literature review of clinical adoption of AI, and project initiation for clinical exemplars in lung, breast, and skin cancer have commenced. A new international training center will open by 2021. NPIC at scale will see a rapid deployment of scanners across over 40 hospitals, including all 29 pathology networks in the country, fully digitizing cases for a population of 6 million people and scanning over 2.4 million images (3 petabytes of data) per year into a single vendor-neutral archive. Conclusions: NPIC builds on the existing fully digital laboratory at Leeds and will create a globally leading infrastructure for digital pathology and AI. The AI technologies, developed through the training of algorithms on diagnostic data, will provide further patient benefits, such as streamlining the diagnostic process, and will provide a platform for further research and innovation.
| Deep Learning Identified an Imaging Biomarker of Granuloma Necrosis That Is under Distinct Genetic Control in Experimental Infection with Mycobacterium tuberculosis|| |
Daniel M. Gatti1, Thomas E. Tavolara2, M. Khalid Khan Niazi2, Melanie Ginese3, Cesar Piedra-Mora3, Metin N. Gurcan2, Gillian Beamer4
1The College of the Atlantic, Bar Harbor, ME, USA, 2Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC, USA, 3Department of Biomedical Sciences, Cummings School of Veterinary Medicine, Tufts University, North Grafton, MA, USA, 4Department of Infectious Disease and Global Health, Cummings School of Veterinary Medicine, Tufts University, North Grafton, MA, USA.
E-mail: [email protected]
Identifying which individuals will develop tuberculosis (TB) remains a problem due to limited animal models and computational approaches. To meet these needs, we show that diversity outbred (DO) mice reflect human-like genetic diversity and develop human-like lung granulomas when infected with Mycobacterium tuberculosis (M.tb). This led us to apply deep learning to identify supersusceptibility from hematoxylin and eosin-stained lung sections using only clinical outcomes (supersusceptible or not supersusceptible) as class labels. We developed, validated, and tested a multiple instance learning model that diagnosed supersusceptibility with high accuracy (91.50% ± 4.68%), similar to the results of two expert pathologists (94.95% and 94.58%). Board-certified veterinary pathologists examined the imaging biomarker and determined that the model was making decisions using karyorrhectic and pyknotic nuclear debris. Finally, the imaging biomarker was quantified in lung sections from >400 DO mice, and the data were input for genetic mapping. This identified unique locations on the genome significantly associated with the imaging biomarker, i.e., granuloma necrosis. Preliminary inspection of the gene regions suggests that granuloma necrosis may be controlled by genes in the region of the mouse major histocompatibility locus. Overall, these findings provide a novel means of converting visual patterns into data suitable for rigorous statistics. This is a major step forward for TB research, as granuloma necrosis cannot be accurately quantified by pathologists, and there are no validated or consensus surrogate biomarkers of granuloma necrosis that can be substituted for this important morphological tissue change.
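In multiple instance learning, only the bag (here, the whole slide) carries a label, so per-tile evidence must be aggregated into one bag-level score. The sketch below shows a generic softmax-attention pooling of that kind; it is an illustration of the technique, not the authors' trained model, and the temperature parameter is our own addition.

```python
import math

def attention_pool(instance_scores, temperature=1.0):
    """Aggregate per-tile evidence into one slide-level score.
    Softmax attention gives tiles with stronger evidence more weight
    in the aggregate, so a few highly abnormal tiles can dominate a
    mostly normal slide."""
    weights = [math.exp(s / temperature) for s in instance_scores]
    z = sum(weights)
    return sum(w * s for w, s in zip(weights, instance_scores)) / z
```

A plain mean would dilute a few informative tiles across thousands of uninformative ones; attention pooling is one standard way around that, and it is also what makes the model's decisions inspectable, since the weights indicate which tiles drove the prediction.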
| Fractal Compression: An Old Technique for the New Challenge of Whole-Slide Imaging Storage|| |
Anthony B. Cardillo1
1Department of Pathology and Laboratory Medicine, University of Rochester Medical Center, Rochester, New York, USA.
E-mail: [email protected]
Background: Fractal image compression is a lossy image compression algorithm based on the idea that images often contain self-similarity. Fractal image compression can achieve compression ratios far beyond its main competitor format, JPEG, at the cost of high computational complexity during the compression stage. Whole-slide images (WSIs) present a massive storage challenge: a single slide requires on the order of gigabytes. It would be favorable to trade this large storage requirement for a one-time, computationally complex compression step. Fractal compression of full WSIs is still out of reach for most desktop computers in 2020; as a surrogate, tiled histology from WSIs can act as a proof of concept to determine the self-similarity and compression ratios achievable in histology. Methods: A single 1024 × 1024 pixel tile from an uncompressed WSI of lung alveoli was exported to a lossless PNG file. Compression was performed through three pipelines: JPEG at 90% quality, JPEG at 70% quality, and fractal code produced by a custom fractal compressor. The final size of each file was recorded. Results: The lossless PNG file (1.05 megapixels) took 2708 kilobytes. The lossy JPEG file at 90% quality took 569 kilobytes; at 70% quality, the JPEG file took 303 kilobytes. The compressed image saved as optimized fractal code (*.ff) was 164 kilobytes, ×3.5 smaller than the JPEG file at 90% quality with no subjective loss of histologic detail, and ×1.8 smaller than the JPEG file at 70% quality. Conclusion: Fractal compression may find a strong use case in the storage of WSIs, where a one-off trade for computational complexity results in a massive reduction of storage size.
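The reported ratios can be checked directly from the file sizes in the abstract:

```python
# Reported sizes (kilobytes) for the same 1024 x 1024 histology tile
sizes_kb = {
    "png_lossless": 2708,
    "jpeg_q90": 569,
    "jpeg_q70": 303,
    "fractal_ff": 164,
}

def ratio(larger, smaller):
    """How many times smaller the second file is than the first."""
    return sizes_kb[larger] / sizes_kb[smaller]
```

For example, `ratio("jpeg_q90", "fractal_ff")` is 569/164 ≈ 3.5 and `ratio("jpeg_q70", "fractal_ff")` is 303/164 ≈ 1.8, matching the abstract; against the lossless PNG, the fractal code is roughly ×16.5 smaller.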
| Real-Time, Point-of-Care Pathology Diagnosis via Embedded Deep Learning|| |
Bowen Chen1, Max Nadeau1, Ming Y. Lu2, Jana Lipkova2, Faisal Mahmood2
1Department of Computer Science, Harvard University, Cambridge, MA, USA, 2Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA.
E-mail: [email protected]
There is an urgent need for widespread cancer diagnosis in low-resource settings, which stand in stark contrast to areas with developed healthcare systems. According to a study in The Lancet, in the U.S., there is one pathologist for every 20,000 individuals, while in Sub-Saharan Africa, there is only one for every million. In addition, current telepathology systems for cancer diagnosis mostly rely on pathologists reviewing cases remotely, which is low-throughput and requires more time and resources. With the growth of telepathology, remote diagnosis has become a viable solution to the lack of skilled pathologists in developing regions. Here, we present a cost-efficient device that incorporates embedded deep learning to achieve real-time, point-of-care diagnosis of whole pathology slides. We achieve this with a low-cost, three-dimensional-printable microscope that uses the Raspberry Pi camera module to capture high-resolution images of slides at up to ×200. Then, using a weakly supervised deep-learning model run on the inexpensive NVIDIA Jetson Nano, the device is able to accurately classify the whole slide without any pixel-level annotations. Furthermore, the model's attention-based approach to diagnosis allows us to generate human-interpretable heatmaps displaying the regions most influential to the model's diagnosis. Our device also incorporates a touch screen and batteries, increasing its accessibility as an easy-to-use, low-maintenance device while maintaining an efficient runtime. Overall, we demonstrate that the device is capable of achieving accurate, high-throughput, and interpretable cancer diagnoses in low-resource settings.
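Rendering attention scores as a heatmap usually requires mapping them onto a fixed display range first. The sketch below is our own simplification of that final step (the device's actual rendering pipeline is not described in the abstract):

```python
def attention_heatmap(scores):
    """Min-max normalize per-patch attention scores to [0, 1] so they
    can be mapped through a color scale and overlaid on the slide
    image. A constant score list yields an all-zero (blank) map."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]
```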
| Predicting p16 Immunohistochemical Staining in Cervical Biopsies on Hematoxylin and Eosin Images using a Deep-Learning Algorithm|| |
Robin Dietz1, Jianyu Rao1
1Department of Pathology and Laboratory Medicine, University of California, Los Angeles, CA, USA.
E-mail: [email protected]
Background: Immunohistochemical staining for p16 protein is frequently used in cervical biopsies. Block-positive p16 staining suggests prior infection with human papillomavirus (HPV) and precancerous potential. We sought to train a deep-learning algorithm to predict p16 staining in cervical biopsies using hematoxylin and eosin (H and E) tiles extracted from digital whole-slide images. Methods: Forty-four slides were scanned at ×40 magnification using an Aperio AT2 scanner. H and E and p16 whole-slide images were aligned using QuPath software. Epithelial regions on the H and E slides were annotated as positive or negative using the corresponding p16 immunostain as ground truth. Thirty-eight slides (73,817 image tiles) were used for the training and validation sets of a 10-layer neural network run for 50 epochs using GPUs in Google Colab. Six slides (11,217 tiles) were held out as a test set on original images [Figure 1]. Results: The model achieved 95.7% accuracy on the training set and 93.7% on the validation set. The test set of six original images achieved an area under the receiver operating characteristic curve of 0.75 [Figure 2]. The optimal threshold value was very low, at 0.01 (out of 1). The test slide with the lowest accuracy (0%) was from a case of koilocytic atypia, and the slide with the highest accuracy (91%) was from a benign transformation zone. Conclusions: The model performed reasonably well on the test slide set. The optimal performance threshold was surprisingly low at 0.01; however, the testing set was limited by too few positive cases. Training the model on more images and using different model architectures would likely improve accuracy. This study explored the feasibility of using a deep-learning model to predict p16 immunohistochemical staining on digital H and E images.
|Figure 1: (a) (Left) Squamous epithelium correctly predicted as negative; (Right) Corresponding negative p16 stain. (b) (Left) Hematoxylin and eosin image correctly predicted as positive; (Right) Corresponding positive p16 stain|
Click here to view
|Figure 2: Area under receiver operating characteristic from the test set of 6 images was 0.75|
Click here to view
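The reported AUC of 0.75 and the 0.01 operating point come from a standard ROC sweep over per-tile scores. A minimal pure-numpy sketch of that analysis, not the authors' code (`sklearn.metrics.roc_curve` would normally be used instead):

```python
import numpy as np

def roc_auc_and_youden(y_true, scores):
    """ROC AUC and the Youden-optimal operating threshold for tile scores.

    y_true: binary labels (1 = p16-block-positive tile); scores: model
    outputs in [0, 1]. Illustrative sketch of the analysis behind Figure 2.
    """
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    thresholds = np.unique(scores)[::-1]          # sweep high to low
    P, N = y_true.sum(), (~y_true).sum()
    tpr, fpr = [0.0], [0.0]
    for t in thresholds:
        pred = scores >= t
        tpr.append((pred & y_true).sum() / P)
        fpr.append((pred & ~y_true).sum() / N)
    tpr.append(1.0)
    fpr.append(1.0)
    tpr, fpr = np.array(tpr), np.array(fpr)
    # Trapezoid rule over the ROC curve.
    auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))
    # Youden's J statistic (TPR - FPR) picks the operating threshold.
    best = int(np.argmax(tpr[1:-1] - fpr[1:-1]))
    return auc, float(thresholds[best])
```

A very low Youden threshold such as the 0.01 reported above typically signals class imbalance in the test tiles, consistent with the authors' note about too few positive cases.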
| References|| |
- Bankhead P, Loughrey MB, Fernández JA, Dombrowski Y, McArt DG, Dunne PD, et al. QuPath: Open source software for digital pathology image analysis. Sci Rep 2017;7:16878.
- Chollet F. Code Examples/Computer Vision/Image Classification from Scratch. Available from: https://keras.io/examples/vision/image_classification_from_scratch/. [Last accessed on 2020 Sep 29].
| COVID Challenges, Digital Solutions|| |
Christopher B. Girardo1, Guang Li2, Richard S. Vander Heide1, Sharon E. Fox1,3
1Department of Pathology, Louisiana State University Health Sciences Center, New Orleans, LA, USA, 2Department of Biomedical Engineering, Tulane University, New Orleans, LA, USA, 3Pathology and Laboratory Medicine Service, Southeast Louisiana Veterans Healthcare System, New Orleans, LA, USA.
E-mail: [email protected]
Background: The COVID-19 pandemic has presented many challenges to pathologists but has also become an impetus for innovation in the use of digital pathology tools. The benefits of digital pathology for distance education are tremendous, and such tools have additionally improved our reporting capabilities on over 30 autopsy cases of deaths due to COVID-19 infection, thus modernizing one of the oldest methods of analyzing the pathologic basis of disease. Methods: Digital pathology was applied to three domains of the anatomic pathology services at the onset of the COVID-19 pandemic shutdown at our institutions in New Orleans: (1) pathology education, (2) surgical pathology sign-out, and (3) COVID-19–related autopsy research. Implementations included the use of whole-slide scanners (Leica) and online repositories, along with PathPresenter for conferences. Live sign-out services adopted Olympus microscope equipment (Olympus Corporation, Shinjuku City, Tokyo, Japan; Olympus Corporation of America, Center Valley, Pennsylvania, USA) together with Zoom conferencing (Zoom Video Communications, San Jose, California, USA). Existing image analysis algorithms, as well as multiscale microscopy using tissue clearing methods, were employed to study the nature of SARS-CoV-2 infection at autopsy. Results: Safe distance-learning objectives were achieved without disruption to resident education due to the implementation and adaptation of digital solutions. Existing image algorithms were tuned to analyze data from COVID-19 tissue samples, and the first three-dimensional images of unsectioned lung from a COVID-19 patient were obtained, providing unique insights into the disease process. Conclusions: Digital pathology tools have been rapidly adopted for both routine and academic use during the COVID-19 pandemic. These methods offer practical solutions to both the altered workflow and the study of SARS-CoV-2 infection by pathologists.
| Artificial Intelligence Estimation of Gestational Age Based on Microscopic Appearance of Placental Villi using a Deep-Learning Attention and Aggregation Approach|| |
Pooya Mobadersany1,2, Lee A. D. Cooper1,3, Jeffery A. Goldstein1
1Department of Pathology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA, 2Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA, 3McCormick School of Engineering, Northwestern University, Evanston, IL, USA.
E-mail: [email protected]
Background: The placenta is the first organ to form and performs the functions of the lung, gut, kidney, and endocrine systems. Abnormalities in the placenta are the cause of or reflect most abnormalities in gestation and can have lifelong consequences for the mother and infant. Placental villi undergo a complex but reproducible sequence of maturation across the first trimester. Abnormalities of villous maturation are a feature of gestational diabetes and preeclampsia, among others, but show significant interobserver variability. Methods: Machine learning has emerged as a powerful tool for research in pathology. To overcome and take advantage of the heterogeneity within the placenta, we developed GestaltNet, which emulates human attention to high-yield areas and aggregation across regions. We used this network to estimate the gestational age (GA) of scanned placental slides and compared it to a similar network lacking the attention and aggregation functions. Results: In the test set, GestaltNet showed a higher r2 (0.9444 vs. 0.9220) than the comparison network. The mean absolute error (MAE) between the estimated and actual GA was also better with GestaltNet (1.0847 vs. 1.4505 weeks). On whole-slide images, we found that the attention sub-network discriminates areas of terminal villi from other placental structures. Using this behavior, we estimated GA for 36 whole slides not previously seen by the model. In this task, similar to that faced by human pathologists, the model showed an r2 of 0.8859 with an MAE of 1.3671 weeks. Conclusions: We show that villous maturation is machine recognizable. Machine-estimated GA is useful when GA is unknown or for studying abnormalities of villous maturation, including those in gestational diabetes or preeclampsia. GestaltNet, by incorporating human-like behaviors of attention and aggregation, points toward a future of truly whole-slide digital pathology.
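The attention-and-aggregation idea, scoring tiles and pooling a weighted average of their features, can be sketched in a few lines. This is a generic softmax attention pool, not GestaltNet's actual architecture; the scoring vector `w` is a hypothetical stand-in for a learned attention sub-network:

```python
import numpy as np

def attention_pool(tile_feats, w):
    """Aggregate per-tile feature vectors into one slide-level vector.

    tile_feats: (n_tiles, d) tile embeddings; w: (d,) attention scoring
    vector. A softmax over per-tile scores weights high-yield tiles
    (e.g., terminal villi) more heavily in the slide-level estimate.
    """
    scores = tile_feats @ w                 # one scalar score per tile
    scores -= scores.max()                  # numerical stability for exp
    alpha = np.exp(scores) / np.exp(scores).sum()  # weights sum to 1
    return alpha @ tile_feats               # attention-weighted average
```

With a zero scoring vector this reduces to plain mean pooling, which is roughly what the comparison network lacking attention would do.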
| Building a Low-Cost Whole-Slide Imaging System in a Basic Research Laboratory: Lessons and Successes|| |
Rama Gullapalli1,2, Aparna Gullapalli1, Aysha Mubeen1, Anil V. Parwani3
1Department of Pathology, University of New Mexico, Albuquerque, NM, USA, 2Department of Chemical and Biological Engineering, University of New Mexico, Albuquerque, NM, USA, 3Department of Pathology, Ohio State University, Columbus, OH, USA.
E-mail: [email protected]
Background: There is increasing interest in merging molecular “omic” data (e.g., DNA mutations) with histopathological findings. Whole-slide imaging (WSI) systems enable such integrative studies. However, commercial WSI solutions are highly expensive. We custom-built a WSI station in our laboratory using low-cost hardware to enable an end-to-end digital pathology (DP) workflow. Methods: The major components of our custom-built WSI workstation are (a) microscope: We used a CH30 Olympus Microscope; (b) camera: We used a Point Grey GS3-PGE-23S6C-C Grasshopper3 GigE camera with a Sony IMX174 1/1.2” chip; (c) hardware: We built a Linux desktop with an Intel Core i7-9400 CPU, 24 GB of RAM, and an NVIDIA GTX 1070 graphics card; (d) software: We used MATLAB to implement the SIFT-RANSAC image mosaic algorithm; (e) WSI storage: An open-source OMERO setup was used to store the WSI images on a 2TB NAS backup. Results: Our WSI imaging system is shown in [Figure 1]. We built the WSI setup at a low price point (~$2500). The SIFT-RANSAC homography-based mosaicking algorithm, implemented in MATLAB, was used to create WSI mosaics in the DP workflow. Our workflow enables manual capture, global alignment, image blending, and local adjustment to create the final WSI mosaic. The OMERO software is a convenient way to manage and view WSI composites on demand. Conclusions: We built a low-cost DP workflow in our research laboratory. Manual WSI workflows are feasible at a low price point with increasingly inexpensive optical components.
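The SIFT-RANSAC mosaicking step hinges on RANSAC's consensus loop over putative keypoint matches. A minimal numpy sketch of that loop, reduced from full homography estimation to a pure-translation model for brevity (the MATLAB workflow described above estimates a homography, so this is an illustrative simplification, not the authors' code):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation between matched keypoint sets with RANSAC.

    src, dst: (N, 2) arrays of putative matches, some of which may be
    outliers. Returns the refit translation and the inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]                   # candidate translation
        resid = np.linalg.norm(src + t - dst, axis=1)
        n_in = int((resid < tol).sum())
        if n_in > best_inliers:               # keep the largest consensus set
            best_inliers, best_t = n_in, t
    # Refit on all inliers of the best candidate for the final estimate.
    mask = np.linalg.norm(src + best_t - dst, axis=1) < tol
    return (dst[mask] - src[mask]).mean(axis=0), mask
```

For a homography, the minimal sample becomes four matches and the candidate model a 3 × 3 matrix, but the sample-score-refit structure is identical.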
| Convolutional Neural Network-Based Mitosis Detection for Cancer Grading|| |
S. Ben Hadj1, M. Bellot1, T. Rey, N. Benzerdjeb2, P. Dartigues3, I. Villa3, J. F. Pomerol1, P. Baldo1
1TRIBVN Healthcare, Rue Louveau, Châtillon, France, 2Department of Pathology, Lyon University Hospital, Pierre Bénite, France, 3Department of Pathology, Gustave Roussy Institute, Villejuif, France.
E-mail: [email protected]
Introduction: The mitotic index is one of the main components in determining the histoprognostic grade of cancer in many organs, notably the breast cancers and peritoneal mesotheliomas considered in this study. The conventional method of manually counting mitoses in histological images is tedious and subject to inter- and intra-observer variability. The automatic detection of mitosis is a promising solution but presents several algorithmic and numerical challenges, mainly because of the colorimetric, morphological, and structural variability of these cells. This variability is intrinsic to the biological development of mitosis: each mitotic phase (prophase, metaphase, anaphase, and telophase) gives rise to a different nuclear shape. For example, in telophase, a mitotic cell has two distinct, completely separated nuclei but should be counted as a single mitosis. To this structural variability is added the colorimetric variability introduced by the image acquisition process, including sample cutting, staining, and digitization. Materials and Methods: We propose a high-precision method using two consecutive neural networks: the first segments all cells in the image, and the second sorts cells into two categories, mitosis/nonmitosis. The analysis is therefore performed in two steps. Cell Segmentation Step: First, we apply a stain normalization method to reduce staining variability between slides. We then use a U-Net architecture for cell segmentation, in a variant inspired by the literature for its ability to differentiate very heterogeneous cells in clusters by combining objective functions that take into account different segmentation components, such as cell centers and horizontal and vertical gradients. We trained the model on an open dataset containing 30 images with over 22,000 annotated cells. 
Cell Classification Step: Each cell detected by the previous step is extracted in a patch of 64 × 64 pixels at a magnification of ×40. A binary classification model (ResNet50 pretrained on ImageNet) was fine-tuned on a base containing more than 15,000 cells, 3492 of them mitotic, extracted from different samples (breast cancer and peritoneal mesothelioma). About 588 mitoses were annotated by the Gustave Roussy Institute, France, in predefined rectangular areas in peritoneal mesothelioma slides. All nonannotated cells in these areas were systematically assigned to the nonmitotic class. This database was augmented with breast cancer images from the Tumor Proliferation Assessment Challenge 2016, containing about 1500 mitotic cells from 73 patients collected from three different laboratories. Finally, 1404 additional mitotic patches were obtained from the MITOS-ATYPIA 2014 challenge, in which about 22 breast cancer slides were annotated by pathologists from the Pitié-Salpêtrière Hospital in Paris. Results: The evaluation of this model on 20% of the collected dataset gives very encouraging results, with an area under the receiver operating characteristic curve close to 0.98. Testing on a larger dataset with images from multiple medical centers is required to verify the model's performance. Discussion/Conclusion: The next step in this study is to evaluate this approach on a larger dataset from different centers to verify its generalization to new data and other organs. Our future work will also be devoted to evaluating the use of such an algorithm in routine clinical practice.
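The hand-off between the two networks, cropping a fixed 64 × 64 patch around each detected cell centroid before classification, might look like the following numpy sketch (a hypothetical helper for illustration, not the authors' implementation):

```python
import numpy as np

def extract_patches(image, centers, size=64):
    """Crop size×size patches centered on detected cell centroids.

    image: (H, W, C) array; centers: iterable of (row, col) coordinates
    from the segmentation stage. The image is zero-padded so border cells
    still yield full-size patches for the classifier.
    """
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)))
    patches = []
    for r, c in centers:
        r, c = int(r) + half, int(c) + half  # shift into padded coordinates
        patches.append(padded[r - half:r + half, c - half:c + half])
    return np.stack(patches)                 # (n_cells, size, size, C)
```

The resulting patch stack is what a classifier such as the fine-tuned ResNet50 described above would consume batch-wise.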
| References|| |
- Magee D, Treanor D, Crellin D, Shires M, Smith K, Mohee K, et al. Color normalization in digital histopathology images. 2009. Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.442.4332&rep=rep1&type=pdf.
- Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention; 2015. p. 234-41.
- Bai M, Urtasun R. Deep watershed transform for instance segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017.
| Pushed across the Digital Divide: COVID-19–Accelerated Pathology Training onto a New Digital Learning Curve|| |
Lewis A. Hassell1, JoElle Peterson1, Liron Pantanowitz2
1Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA, 2Department of Pathology, Michigan Medicine, University of Michigan, Ann Arbor, MI, USA.
E-mail: [email protected]
Background: Incorporation of digital teaching materials into residency training programs has followed a slow adoption curve expected for many new technologies over the past two decades. The COVID-19 pandemic dramatically shifted the paradigm for many resident teaching modalities as teaching institutions instituted social-distancing measures to prevent spread of the novel coronavirus. The impact of this shift on pathology trainee education has not been well studied. Methods: We conducted an online survey of pathology trainees, program directors, and faculty to assess pre- and post-COVID-19 use of, and response to, various digital pathology modalities. Responses were solicited through both social media and directed appeals. Results: A total of 261 respondents (112 faculty, 52 program directors, and 97 trainees) reported a dramatic and significant increase in the use of digital pathology-related education tools. A significant majority of faculty and program directors agreed that this shift had adversely affected the quality (59% and 62%, respectively) and effectiveness (66%) of their teaching. This perception was similar among learners relative to the impact on quality (59%) and effectiveness (64%) of learning. A majority of respondents (70%–92%) anticipate that their use of digital pathology education tools will increase further or remain the same post-COVID. Conclusions: The global COVID-19 pandemic created a unique opportunity and challenge for pathology training programs. Digital pathology resources were accordingly readily adopted to continue supporting educational activities. The learning curve and utilization of this technology were perceived to impair the quality and effectiveness of teaching and learning. Since the use of digital tools appears poised to continue to grow post-COVID-19, challenges due to impaired quality and effectiveness for teaching and learning pathology will need to be addressed.
| Tumor Tissue Identification Technology by Estimating Features of Immunostaining Images from Hematoxylin and Eosin-Stained Images using Convolutional Neural Networks|| |
Hideharu Hattori1, Yasuki Kakishita1, Akiko Sakata2, Atushi Yanagida2
1Hitachi, Ltd., Research and Development Group, Tokyo, Japan, 2Hitachi, Ltd., Hitachi General Hospital, Ibaraki, Japan.
E-mail: [email protected]
Background: Pathologists visually observe hematoxylin and eosin (HE)-stained images under a microscope to perform pathological diagnosis. When diagnosis based on morphology in HE-stained specimens alone is insufficient, another evaluation method such as immunohistochemistry (immunostaining) must be added. Methods: To accurately and rapidly identify a tumor, this study proposes a method of automatically identifying a tumor in a pathological image by estimating features of immunostaining from a HE-stained image. The method consists of three steps: (1) features of tumor presence or absence are extracted from the HE-stained image and immunostaining image using a convolutional neural network (CNN), (2) a classifier is created using the CNN so that the features obtained from the HE-stained image approach the features of tumor presence or absence shown by immunostaining, and (3) the presence or absence of a tumor is judged using the classifier and the HE-stained image only. Results: Experimental results using digital images of pathological tissue specimens of prostate cancer show improved identification accuracy. The proposed method improved sensitivity by 10.6% (to 92.5%) and specificity by 2.5% (to 86.9%) compared with a classifier created using only features extracted from HE-stained images. In addition, as a result of visualizing the classification basis, the classifier of the proposed method could estimate features similar to those of immunostaining images from HE-stained images. Conclusions: With this method, not only the features of the HE-stained image but also immunostaining features estimated from it are used to create a classifier, which improves the accuracy of tumor identification in pathological images. The approach was thus shown to be effective for identifying pathological images.
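Step 2 of the method, pulling HE-derived features toward immunostain-derived targets, amounts to minimizing a feature-matching loss during training. A toy numpy sketch using a linear feature extractor in place of the CNN (an illustrative assumption, not the authors' implementation):

```python
import numpy as np

def align_step(W, x_he, f_ihc, lr=0.1):
    """One gradient-descent step on 0.5 * ||W @ x_he - f_ihc||^2.

    W: (d_out, d_in) linear feature extractor standing in for the CNN;
    x_he: HE tile representation; f_ihc: target feature vector derived
    from the immunostained image. Repeated steps make the HE-derived
    features approach the immunostain-derived ones.
    """
    err = W @ x_he - f_ihc
    W = W - lr * np.outer(err, x_he)   # analytic gradient of the loss
    return W, float((err ** 2).mean())
```

In the real method the same idea is applied through CNN backpropagation, so the extractor is deep rather than linear, but the loss structure is the same.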
| References|| |
- Hattori H, Kakishita Y, Sakata A, Yanagida A. Tumor tissue identification technology by estimating features of immunostaining images using convolutional neural networks. Med Imaging Technol 2019;37:147-54.
- Kakishita Y, Hattori H. Classification reasons visualization of deep neural network using model inverse analysis. Transact Mathemat Model Its Appl 2019;12:20-33.
| Copy Number Analysis and Mutational Signature of Mature B-Cell Neoplasms using an In-House Custom Bioinformatic Pipeline|| |
Lin He1, Erika Villa2, Guillaume Jimenez2, Dwight Oliver1, Jefferey Gagan1, Brandi Cantarel2
1Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA, 2Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA.
E-mail: [email protected]
Background: Mature B-cell neoplasms are common hematopoietic malignancies, but their detailed molecular characteristics have not been fully understood. Next-generation sequencing (NGS) has become more widely used to characterize the molecular genetics of hematopoietic malignancies. Here, we present a bioinformatic platform to integrate NGS assay data to characterize copy number alterations and mutational signatures of mature B-cell neoplasms. Methods: Fifty-five samples from 53 patients were sequenced by an NGS system (Illumina Inc., CA, USA). Thirty of the 55 samples were chronic lymphocytic leukemia/small lymphocytic lymphoma, 8 diffuse large B-cell lymphoma, 5 mantle cell lymphoma, and 5 plasma cell myeloma. The NGS data with 1425-gene assay capacity were processed and integrated by an in-house custom bioinformatic pipeline that combines eight genomic programs and allows visualization and analysis through the cBioPortal platform (MSKCC, NY, USA). For each tumor, the mutation spectrum was compared to copy number alteration if present. Thirty mutational signatures (data available in 45 samples), each of which represents a unique pattern of Watson-Crick base-pair change and is attributed to a specific biological origin, were analyzed with the MuSiCa-R application (GPtoCRC Research Group, Spain). Results: There were 226 significantly mutated genes identified in 32 of 55 samples (range: 1–75 per sample). The most common mutant genes included TP53 (14.6%), CHEK2 (12.7%), MDC1 (10.9%), ATM (9.1%), MSH3 (9.1%), and KMT2D (9.1%). Copy number gain was present in 182 genes, while copy number loss was present in 155 genes. TP53 mutation was frequently associated with copy number alteration (6 of 8 mutant genes), while none of the seven CHEK2 and six MDC1 mutant genes were accompanied by any copy number alteration. The most frequent mutational signatures included #3, #1, and #15, which were attributed to proposed etiologies of BRCA1/2, age, and defective DNA mismatch repair, respectively. 
Conclusions: An in-house custom bioinformatics pipeline has been established to characterize genomic features of mature B-cell neoplasms and can potentially be applied to other tumor types. Interestingly, the mutational signatures of mature B-cell neoplasms were found to be associated with the BRCA pathway, suggesting that B-cells may share a pathogenesis of defective DNA double-strand break repair, as is found more commonly in other cancer types.
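Signature analysis compares a tumor's mutation spectrum against a catalog of reference signatures. A hedged first-pass sketch using cosine similarity (MuSiCa and related tools perform full non-negative refitting across all signatures rather than picking a single best match, so this is only the simplest building block):

```python
import numpy as np

def best_signature(spectrum, catalog):
    """Return the catalog signature most similar to a tumor spectrum.

    spectrum: (k,) mutation counts over substitution channels (96 in the
    standard trinucleotide scheme); catalog: (n_sigs, k) reference
    signature profiles. Returns (index, cosine similarity).
    """
    s = spectrum / np.linalg.norm(spectrum)
    c = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    sims = c @ s                        # cosine similarity per signature
    return int(np.argmax(sims)), float(sims.max())
```

Full refitting would instead solve a non-negative least-squares problem for the mixture weights of all thirty signatures, which is how a sample can be attributed to #3, #1, and #15 simultaneously.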
| Cross-Cancer Deep Interactive Learning with Reduced Manual Training Annotation for Pancreatic Tumor Segmentation|| |
David Joon Ho1, Akimasa Hayashi1, Shigeaki Umeda1, Chad M. Vanderbilt1, Christine A. Iacobuzio-Donahue1, Thomas J. Fuchs1
1Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
E-mail: [email protected]
Background: Pancreatic cancer is known to have low survival rates. Deeper investigations such as subtyping and outcome prediction are necessary to understand pancreatic cancer, and digital and computational pathology enable the use of large datasets for these investigations. Automated tumor segmentation is a prerequisite step, but training a machine learning model requires extensive manual tumor and nontumor annotation, which can be time-consuming. We recently developed deep interactive learning (DIaL) to minimize pathologists' annotation time by iteratively annotating mislabeled regions to improve a model. In this work, we use DIaL with a model pretrained on a different cancer type to reduce manual training annotation on pancreatic pathology images. Methods: Our cohort contains 759 cases of pancreatic ductal adenocarcinoma whose primary site is the pancreas. We fine-tuned a pretrained breast model using DIaL to segment pancreatic carcinomas. During the first iteration, a pathologist annotated false positives on nontumor subtypes that are not present in breast training images. During the second iteration, the pathologist annotated false negatives on pancreatic carcinomas. Results: The pathologist spent a total of 3 h annotating 14 pancreatic pathological images. For numerical evaluation, 23 other images balanced between well-differentiated, moderately differentiated, and poorly differentiated cases were selected and exhaustively annotated by another pathologist. We achieved precision = 0.621, recall = 0.748, and intersection-over-union = 0.513. Conclusions: We introduced a technique to train a model by cross-cancer DIaL with 3 h of annotation for accurate pancreatic tumor segmentation. We plan to use this model to subtype pancreatic cancer for outcome prediction.
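The reported precision, recall, and intersection-over-union are standard pixel-level comparisons between the predicted tumor mask and the exhaustively annotated ground truth. A small numpy sketch of how such numbers are computed:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-level precision, recall, and IoU for binary masks.

    pred, truth: boolean (or 0/1) arrays of the same shape, where True
    marks tumor pixels. Counts are taken over all pixels.
    """
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = (pred & truth).sum()        # correctly predicted tumor pixels
    fp = (pred & ~truth).sum()       # predicted tumor, actually nontumor
    fn = (~pred & truth).sum()       # missed tumor pixels
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    iou = tp / (tp + fp + fn)        # intersection-over-union
    return float(precision), float(recall), float(iou)
```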
| Deep Learning–Based Mitoses Recognition and Concordance Study with Pathologists|| |
Arpit Jadon1, Sripad Joshi1, Harish Prabhala1, Vikas Ramachandra1, Lata Kini1, Aditya Kulkarni2, Swarnalatha Gowrishankar2
1Onward Assist, Hyderabad, Telangana, India, 2Department of Pathology, Apollo Hospitals, Hyderabad, Telangana, India. E-mail: [email protected]
Background: Deep learning–based mitosis detection in hematoxylin and eosin biopsies can aid the early diagnosis and prognosis of cancer. Most of the approaches published so far have utilized publicly available ideal datasets, which usually come from expensive whole-slide image scanners or microscopes. Moreover, very few validation studies have compared the efficacy of these algorithms with respect to inter-pathologist variability. In this study, we compare the performance of the algorithm against multiple pathologists and against inter-observer variation. Materials and Methods: The dataset used in this study consisted of 358 high-power field images for training the deep-learning algorithm and 301 for testing. Two pathologists annotated the mitotic cells in these cases for this study. The images were collected from a low-cost CMOS camera attached to a microscope. We used deep learning–based segmentation models such as U-Net, supported by various pre- and post-processing techniques, to develop our algorithm. Results: Our machine learning (ML) algorithm showed encouraging results, with f-scores of 73.2% with respect to pathologist 1 and 74% with respect to pathologist 2. The f-score of pathologist 1 with respect to pathologist 2 was 76.8%, while that of pathologist 2 with respect to pathologist 1 was 76.5%. Conclusion: The ML model's performance was quite close to that of a pathologist. Pathologists may be prone to interobserver variation in mitosis counts due to subjectivity, and accepting this while making an AI model can help in setting realistic goals for production-ready models. In our future studies, we aim to increase the scale in terms of the numbers of training and testing data.
| Color Calibration for Digital Cytology Scanner|| |
Raymond Jenoski1, Sid Mayer1, Randall Marks2, Richard Salmon3
1Diagnostic Instrument Engineering, Hologic, Marlborough, MA, USA, 2RM Photonics, San Jose, CA, USA, 3FFEI Ltd., Hemel Hempstead, UK.
E-mail: [email protected]
Background: Color calibration is complex. The method by which whole-slide imager (WSI) systems observe color differs from the way the human eye does; thus, images need to be processed to interpret these differences. The standard method of determining the difference between color truth and the WSI image is imaging a calibrated color slide and comparing spectral variations via CIE ΔE standards. Further complexity is added to this process when the WSI manufacturer must determine what colors the scanner requires for adequate calibration. The next issue is to determine how much variation is acceptable for the human eye and computer algorithms. The construction of the calibration slide must also be considered. Methods: Hologic's digital cytology system utilizes the Sierra slide to establish the truth for color calibration. The WSI instrument scans the Sierra slide, and the WSI variation (ΔE) is calculated by comparison to the Sierra spectral data. With the difference quantified, an instrument-specific ICC profile can be generated. The ICC profile adjusts the WSI scanner digital image to ground-truth color. Results: Results indicate that Hologic's digital imager, when using the Sierra slide, reduces color variation to less than 2 ΔE over the intended color range. When the digital image is passed into either a review station or an algorithm platform, the color fidelity is maintained. Conclusions: WSI systems intended for primary diagnosis require color calibration. The Sierra slide provides acceptable color calibration for Hologic's cytology WSI system. The amount of spectral variation (ΔE) acceptable for WSI is subjective and based on many factors, with most literature citing a ΔE of 5 or less. Calibration minimizes system-to-system variability, producing repeatable color output regardless of system age or optical component variation.
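The ΔE figures quoted above are CIELAB color differences. The simplest formula, CIE76, is plain Euclidean distance in L*a*b* space; the abstract does not state which ΔE variant Hologic uses, so this sketch is illustrative only:

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between CIELAB values.

    lab1, lab2: (..., 3) arrays of (L*, a*, b*) coordinates. Later CIE
    formulas (CIE94, CIEDE2000) add perceptual weighting and are not
    implemented here.
    """
    lab1 = np.asarray(lab1, dtype=float)
    lab2 = np.asarray(lab2, dtype=float)
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=-1))
```

Under CIE76, a ΔE of about 2.3 is often cited as a just-noticeable difference, which puts the sub-2 ΔE calibration result above at or below the threshold of human perception.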
| Deep Learning–Based Segmentation and Grading of Pancreatic Intraepithelial Neoplasia|| |
Dev Kumar Das1, Jia Rao2, Rim Sabrina Jahan Sarker2, Katja Steiger2, Alexander Muckenhuber2, Tijo Thomas1, Uttara Joshi1
1AIRA Matrix Private Limited, Mumbai, Maharashtra, India, 2Institute for Pathology, Comparative Experimental Pathology, Faculty of Medicine, Technical University of Munich, Munich, Germany.
E-mail: [email protected]
Background: Pancreatic intraepithelial neoplasia (PanIN) is a possible microscopic epithelial precursor lesion of pancreatic ductal adenocarcinoma (PDAC). Existing imaging modalities cannot accurately identify PanIN lesions preoperatively; they can be identified only on histopathological examination. We propose a deep learning–based method for the detection and grading of PanIN lesions in histopathological sections of the pancreas. Methods: Whole-slide images (WSIs) were acquired by digitizing paraffin-embedded pancreatic tissue sections using an Aperio AT2 slide scanner at ×40. A deep neural network model (DeepLabv3) was developed on 1024 × 1024 tiles from the WSIs and trained to classify lesions as high-grade or low-grade PanIN. The training and testing datasets comprised 10 and 27 WSIs, respectively. Testing outputs were compared to the pathologists' annotations (gold standard). Results: Concordance of the algorithm with the pathologists' annotations in detecting high-/low-grade PanIN showed 91.56% sensitivity and 85.68% specificity. In some cases, acinar cell foci or invasive carcinoma was misclassified as high-grade PanIN by the algorithm. The aim is to further improve the performance for PanIN detection as well as for identification of invasive carcinoma. Conclusions: This algorithm provides quantification-based segmentation and classification of PanIN, facilitating early detection of precursors to PDAC and potentially accelerating pathological workup. The application can be further extended to the differential diagnosis of precursor neoplastic lesions such as intraductal papillary mucinous neoplasm and mucinous cystic neoplasm to develop a comprehensive early detection solution for PDAC precursors.
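Feeding a WSI to a model at fixed 1024 × 1024 input size, as described above, reduces to generating a grid of crop coordinates. A minimal sketch that drops partial edge tiles (real pipelines often pad or overlap instead, an assumption on our part):

```python
def tile_coords(width, height, tile=1024):
    """Top-left (x, y) coordinates of a non-overlapping tile grid.

    width, height: WSI dimensions in pixels; tile: square tile size.
    Only fully contained tiles are returned.
    """
    xs = range(0, width - tile + 1, tile)
    ys = range(0, height - tile + 1, tile)
    return [(x, y) for y in ys for x in xs]
```

Each coordinate pair then indexes a crop that is classified independently, and per-tile predictions are stitched back into a slide-level segmentation map.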
| Characterization of the Three-Dimensional Microanatomy of the Pancreas and Pancreatic Cancer in situ at Single-Cell Resolution|| |
Ashley Kiemen1, Alicia Braxton2, Seung-Mo Hong3, Toby Cornish4, Laura Wood2, Ralph Hruban2, PeiHsun Wu1, Denis Wirtz1,2,5,6
1Department of Chemical and Biomolecular Engineering, Johns Hopkins University, Baltimore, Maryland, USA, 2Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA, 3Department of Pathology, University of Ulsan, Ulsan, South Korea, 4Department of Pathology, University of Colorado School of Medicine, Aurora, Colorado, USA, 5Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA, 6Department of Materials Science and Engineering, The Johns Hopkins University, Baltimore, Maryland, USA.
E-mail: [email protected]
Pancreatic ductal adenocarcinoma is one of the deadliest forms of cancer. Cancer cell metastasis represents a substantial obstacle to achieving long-term remission in patients and is the leading cause of cancer-associated death. Accumulating evidence indicates that the tumor microenvironment is highly associated with cancer invasion through regulation of the cellular physiology, regulatory systems, and gene expression profiles of cancer cells. Yet a general understanding of how the pancreatic tumor microenvironment evolves from normal pancreatic architecture is lacking. Obtaining high-content, high-resolution information from a complex tumor microenvironment across a large volumetric landscape is critical to understanding cancer invasion and represents a key challenge in the field of cancer biology. To address this challenge, we established a novel method to reconstruct three-dimensional, centimeter-scale tissues from serially sectioned histological samples, utilizing deep-learning approaches to recognize nine distinct tissue components from hematoxylin and eosin-stained sections at micrometer and single-cell resolution. Using tissue blocks containing a range of normal, precancerous, and cancerous states, we show the large-scale landscape of cancer invasion and patterns in the evolution of the tumor microenvironment.
| Upconversion Nanoparticles as a Tool for Histopathological Tissue Evaluation with Multiplexing and Machine Learning Potential|| |
Krzysztof M. Krawczyk1, Matthias J. Mickert1, Stefan Andersson-Engels2, Anders Sjögren1
1Lumito AB, Lund, Sweden, 2Department of Biophotonics, Tyndall National Institute, Cork, Ireland.
E-mail: [email protected]
Background: In the field of histopathology, pathologists diagnose patients by assessing imaged tissue sections. Even with the pathologist's trained eye, there is a significant risk of misdiagnosis. For decades, the hematoxylin and eosin (H and E) stain has been a standard way to visualize the morphology of cells. A common way to specifically detect proteins of interest is chromogenic diaminobenzidine (DAB) labeling combined with a counterstain to visualize the cell nuclei. However, this method suffers from a narrow dynamic range, problems with quantification, and difficulties with multiplexing and colocalization. Techniques based on immunofluorescence offer the possibility of a quantitative readout but suffer from photobleaching, background fluorescence, and broad excitation-emission bands, leading to optical crosstalk in multiplexed applications. Here, we present a technology based on upconversion nanoparticles (UCNPs) that allows us to overcome major problems associated with commonly used imaging techniques. UCNPs are excited in the near-infrared and emit photons with higher energy (photon upconversion). This process drastically reduces the measurement background by removing any autofluorescence of the matrix and minimizing light scattering. Together with their remarkable photostability, this makes UCNPs an excellent candidate for applications requiring long exposure times. Methods: Novel luminescent UCNPs combined with a prototype imaging device were used to detect selected markers, e.g., Her2, in human tissue. Formalin-fixed paraffin-embedded breast cancer cell line and human breast cancer tissue were sectioned and labeled using an autostainer. UCNP-based upconversion imaging of the human tissue sections was compared with standard DAB-based labeling. To improve the scanning speed, pulsed laser excitation and time-gated detection were explored. In addition, the combination of UCNP labeling with H and E co-staining and co-imaging was investigated. 
Results: Images obtained with our novel device proved that in-house–synthesized antibody–UCNP conjugates are an excellent label for the detection of cancer markers in human tissue sections. Our data show that UCNPs do not alter the bright-field images of the counterstain and hence do not interfere with standard tissue evaluation by a pathologist. In addition, bright-field and luminescent images can be merged to provide a better understanding of tissue morphology. Conclusions: The emerging field of UCNP-based labeling techniques offers new possibilities for more accurate diagnosis by combining the advantage of H and E staining with specific labeling of a marker of interest. The high-contrast images of the UCNP labeling – generated by our scanning device – set the foundation for generating ground truth for machine learning algorithms.
| A Feasibility Study Utilizing DNA-Barcoded Multiplex Fluorescence IHC for Immune Profiling in the Tumor Microenvironment of Bladder and Lung Samples|| |
Haydee Lara1, Chifei Sun1, David Krull1, Bao Hoang2, Fang Xie2, Leslie Obert3, Daniela Ennulat3, Andrew Gehman4, Paul Noto5, Vilma Decman1
1Department of Cellular Biomarkers, GSK, Collegeville, PA, USA, 2Department of Bioimaging, GSK, Collegeville, PA, USA, 3Department of Pathology, GSK, Collegeville, PA, USA, 4Department of Biostatistics, GSK, Collegeville, PA, USA, 5Experimental Medicine Unit, GSK, Collegeville, PA, USA.
E-mail: [email protected]
Background: The tumor microenvironment is key to the study of cancer progression and prognosis. Numerous studies confirm that the cell populations surrounding tumor nests and the invasive front are drivers of immune suppression or therapy response. Methods: In this study, we applied DNA-barcoded multiplex IHC (MTO, Ultivue) to a subset of bladder and lung tumor resections to study the tumor microenvironment. The multiplex IHC includes markers for T-cells (CD3), cytotoxic lymphocytes (CD8), cancer-killing cells (GZMB), immunosuppressive cells (PD-L1), macrophages (CD68), proliferating cells (Ki67), and a marker for tumor differentiation (PanCK). We used image analysis software (HALO, Indica Labs) for image processing and analysis of different immune populations. Using whole-slide fluorescent images, we trained a random forest algorithm to classify the tumor regions and developed an algorithm for cell quantification in the different tumor compartments. Results: The quantitative data were then compared to a pathologist's hot and cold classification. Similar to previous studies, we found that the quantitative data consistently support the hot and cold classification of tumors. Conclusion: We are currently validating these methods to evaluate tumor biopsies in Phase 1 and Phase 2 clinical trials.
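As an illustrative sketch of the cell-quantification step (not the HALO implementation; the marker and compartment names below are hypothetical), per-compartment counts of phenotyped cells can be converted to densities as follows:

```python
from collections import Counter

def compartment_densities(cells, region_areas_mm2):
    """Count phenotype-positive cells per tumor compartment and convert
    counts to densities (cells/mm^2) using each compartment's area.

    cells: iterable of (phenotype, compartment) tuples, one per detected cell
    region_areas_mm2: dict mapping compartment name -> area in mm^2
    """
    counts = Counter(cells)
    return {
        (phen, comp): n / region_areas_mm2[comp]
        for (phen, comp), n in counts.items()
    }

# Hypothetical example: two CD8+ cells in tumor, one CD68+ cell in stroma
dens = compartment_densities(
    [("CD8", "tumor"), ("CD8", "tumor"), ("CD68", "stroma")],
    {"tumor": 2.0, "stroma": 1.0},
)
```

Densities like these, stratified by compartment, are what get compared against a pathologist's hot/cold call.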
| Performance of Automated Classification of Diagnostic Entities in Dermatopathology|| |
Leah Levine1, Victor Brodsky2, Samer Dola3, Enric Solans3, Simon Polak1
1Mechanomind Inc, New York, NY, USA, 2Department of Pathology and Laboratory Medicine, Washington University in St. Louis, MO, USA, 3Scottlab Pathology, Chicago, IL, USA.
E-mail: [email protected]
Introduction: New automated image analysis methods are continuously being developed to improve objectivity and efficiency and to address the error rate associated with human medical image interpretation. Here, the performance of a supervised convolutional neural network-based algorithm pretrained to classify 40 skin tumor types is evaluated, via comparison to diagnoses issued by two senior board-certified pathologists not involved in algorithm creation, in classifying six representational diagnostic entities in dermatopathology. Hypothesis: Deep-learning multiclass classification algorithms can reach clinical-grade accuracy in differential diagnostics within histopathology in a real-world pathology environment. Methods: Hematoxylin and eosin-stained glass slides were evaluated for 300 biopsy cases (punch, shave, and excisional biopsies) of the face, neck, back, and arms; 87% of the cases were from Scottlab Pathology, Ltd (Caucasian population), 9% were from Tanzania, and 4% from Rwanda (African population). The slides had notable variability in quality of staining and contained artifacts such as folds and ink markings. The cases had confirmed diagnoses of either basal cell carcinoma (BCC) (34%), nodular melanoma (20%), lentigo maligna melanoma (6%), superficial spreading melanoma (5%), dysplastic nevus (3%), intradermal nevus (26%), or compound nevus (6%), initially diagnosed by light microscopy and confirmed by board-certified pathologists. Glass slides were scanned using high- and low-resolution scanners set at ×40 magnification. The scanned whole-slide images (WSIs) were then independently interpreted by the Mechanomind image recognition algorithm at the University of Chicago, Ingalls Memorial Hospital Medical campus, and classified into one of three diagnostic classes: BCC, melanoma, or nevus. Each WSI took 10–20 s to analyze. Two senior board-certified pathologists served as the primary evaluators.
Results: The sensitivity of the image recognition algorithm was 97.8% for the 91 melanoma cases, 100% for the 107 nevus cases, and 99.0% for the 102 BCC cases. The corresponding specificities were 97.5% for the melanoma and nevus cases and 100% for BCC cases. One of the cases initially misdiagnosed by the pathologist as melanoma was correctly recognized by the algorithm as a nevus. Conclusion: While additional studies with more lesion types, more cases, and more pathologists are needed, high specificity and sensitivity algorithms appear to be ready for use in screening, quality assurance, and workload distribution.
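The per-class sensitivities and specificities reported above follow from a one-vs-rest reading of the confusion matrix; a minimal sketch, using illustrative counts rather than the study's data:

```python
def per_class_sensitivity_specificity(confusion, classes):
    """One-vs-rest sensitivity and specificity for each class.

    confusion[i][j] = number of cases with true class i predicted as class j.
    Returns {class_name: (sensitivity, specificity)}.
    """
    total = sum(sum(row) for row in confusion)
    stats = {}
    for k, name in enumerate(classes):
        tp = confusion[k][k]
        fn = sum(confusion[k]) - tp                                  # missed cases of this class
        fp = sum(confusion[i][k] for i in range(len(classes))) - tp  # other classes called this class
        tn = total - tp - fn - fp
        stats[name] = (tp / (tp + fn), tn / (tn + fp))
    return stats

# Illustrative 2-class example (not the abstract's case counts)
stats = per_class_sensitivity_specificity([[8, 2], [1, 9]], ["melanoma", "nevus"])
```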
| Classification of Nonsmall Cell Lung Cancer Adenocarcinoma via Interactive Annotation Representation Learning|| |
Peter Louis1, Tahsin Reasat1, David S. Smith2, Travis Osterman3
1Department of Pathology and Laboratory Medicine, Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, 1Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA, 2Institute of Imaging Science, Vanderbilt University, Nashville, TN, USA, 3Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA.
E-mail: [email protected]
Background: Studies have demonstrated that the morphology on hematoxylin and eosin (HE) whole-slide images can be histologically classified by computer vision algorithms and has the potential to indicate the presence of genetic drivers. However, these algorithms need a large amount of annotated data. Whole-slide images often have gigapixel resolution, which makes it difficult to perform manual annotation. In this work, we employed a generative adversarial network (GAN) within a novel interactive representation-learning patch-level annotation framework on whole-slide images of non-small cell lung cancer adenocarcinoma (NSCLA) to classify the lepidic histological subtype pattern of NSCLA. Methods: The pretrained GAN was developed by computational pathologists at the University of Glasgow. The test set data source consisted of 437 lung images obtained from PathLink, which is a tissue and image repository associated with Vanderbilt's data warehouses. Five NSCLA lepidic subtype whole-slide HE images were extracted from this PathLink cohort. These images were then annotated with the QuPath segmentation application programming interface (API) to establish the ground truth. The images were divided into patches of 224 × 224 with 50% overlap. For data augmentation, the images were rotated. This resulted in 218,000 ground truth patches for the interactive patch-level annotation framework. Foreground annotation selections designated lepidic morphology, while background annotation selections designated nonlepidic morphology. Real-time classification occurred via logistic regression through the Napari segmentation API. Results: Generalization performance of the GAN interactive patch-level annotation model was measured against ImageNet dataset weights derived from a ResNet50 model and against the original weights of the developed GAN.
Accuracy, precision, recall, area under the curve, and the F1 score were calculated to evaluate performance relative to the number of foreground and background annotations for each trial. Five annotation trials were performed for real-time annotation of two lepidic images for each set of weights. Conclusion: Both the GAN and ImageNet models were able to distinguish a significant amount of NSCLA lepidic subtype tissue. However, the model with ImageNet-based weights was prone to creating more false positives and required more foreground and background patch annotations to reach the target F1 score of 0.70 compared to the model with GAN weights.
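The 224 × 224 patching with 50% overlap described in the Methods can be sketched with a generic tiling helper (illustrative, not the authors' code):

```python
def patch_origins(width, height, patch=224, overlap=0.5):
    """Top-left (x, y) coordinates of patches tiled over an image
    with the given fractional overlap between neighboring patches."""
    stride = int(patch * (1 - overlap))  # 112 px stride for 50% overlap
    xs = range(0, width - patch + 1, stride)
    ys = range(0, height - patch + 1, stride)
    return [(x, y) for y in ys for x in xs]

# A 448 x 224 region yields three half-overlapping 224 x 224 patches
origins = patch_origins(448, 224)
```

Each origin would then be used to crop one training patch from the whole-slide image.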
| Multicenter Weakly Supervised Computational Pathology on Whole-Slide Images using Federated Learning|| |
Ming Y. Lu1,3, Dehan Kong1, Jana Lipkova1,3, Richard J. Chen1, 2, 3, Rajendra Singh5, Drew F. K. Williamson1,3, Tiffany Y. Chen1,3, Faisal Mahmood1, 3, 4
1Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA, 2Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA, 3Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA, 4Cancer Data Science, Dana-Farber Cancer Institute, Boston, MA, USA, 5Department of Pathology, Northwell Health, NY, USA.
E-mail: [email protected]
Background: Deep learning has excelled in a wide range of tasks in computational pathology, including but not limited to the characterization of both common and rare morphological phenotypes and the prediction of molecular alterations from histology. However, large datasets of diverse, high-quality annotated training examples are often necessary to build robust and accurate deep learning–based models for real-world test scenarios. Pooling medical data from multiple institutions can alleviate this challenge but is constrained by privacy concerns associated with sharing sensitive patient information, in addition to the technical challenges of transferring and storing large whole-slide image files at the terabyte or even petabyte scale. On the other hand, federated learning is a promising field of machine-learning research that allows algorithms to learn from multiple decentralized data sources without requiring training data to be directly shared. Methods: We introduce a federated-learning framework for computational pathology on whole-slide images using attention-based multiple instance learning for weakly supervised learning and random noise generation for preserving differential privacy. We evaluate our approach on two different classification problems using large multicenter datasets, including renal cell carcinoma histologic subtyping and invasive breast carcinoma histologic subtyping. We additionally demonstrate that our approach can be used for survival prediction and patient stratification from whole-slide images. Results: Our proposed privacy-preserving federated learning framework can be used to develop accurate, interpretable deep-learning models from distributed data sources for both classification and survival prediction.
Conclusion: We showed that our attention-based multiple instance learning-inspired framework is applicable to both classification and survival prediction using whole-slide images and slide-level labels only and that, when combined with federated learning, it enables accurate deep learning-based computational pathology models to be developed from multiple data sources while preserving differential privacy for each site.
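A minimal sketch of one federated-averaging round, with noise added to each site's update before aggregation as a simplified stand-in for the differential-privacy mechanism (not the authors' implementation):

```python
import random

def federated_round(site_weights, noise_std=0.01, seed=0):
    """Aggregate one round of updates from several sites.

    Each site's weight vector is perturbed with Gaussian noise (a simple
    differential-privacy proxy) before the server averages them, so no
    site's exact update is ever revealed.

    site_weights: list of equal-length weight vectors, one per site.
    """
    rng = random.Random(seed)
    noisy = [
        [w + rng.gauss(0.0, noise_std) for w in site]
        for site in site_weights
    ]
    n = len(noisy)
    return [sum(vals) / n for vals in zip(*noisy)]

# With noise disabled, this reduces to plain federated averaging
avg = federated_round([[1.0, 2.0], [3.0, 4.0]], noise_std=0.0)
```

In practice the noise scale is calibrated to a privacy budget; the fixed value here is purely illustrative.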
| Digital Pathology Validation Study for Frozen Section Diagnosis at the University of Kentucky|| |
Emmy Mbagwu1, Sarah Higdon2, Shadi Qasem2
1Department of Pathology, University of Kentucky College of Medicine, Lexington, KY, USA, 2Department of Pathology, University of Kentucky College of Medicine, Lexington, KY, USA.
E-mail: [email protected]
Background: The Department of Pathology installed a slide scanning microscope for remote interpretation of frozen sections. We summarize the process and the challenges for our institution. Methods: A validation study set of 40 frozen section cases was digitized using a Sakura Visiontek M6 digital microscope robotic imaging system. The cases were selected to represent a variety of organ systems. All slides were scanned at ×20, apart from touch preparations, which were scanned at ×40. Eight surgical pathologists reviewed the scans independently, and their interpretations were compared to the final diagnosis rendered on the permanent sections. Major discrepancies were defined at the level of benign versus malignant, while minor discrepancies were defined as not giving enough information about the diagnosis or an inaccurate diagnosis within a family of benign or malignant neoplasms. Results: The rate of major discrepancies ranged from 0 to 2 (average of 2.2%), minor discrepancies from 0 to 3 (average of 4.7%), and deferrals from 1 to 4 (average of 6.3%). Two cases accounted for the majority of major discrepancies: one salivary gland tumor and one bone tumor. Gynecological pathology and liver pathology accounted for the most minor discrepancies (47% and 20%, respectively). Conclusion: A digital slide system appears to perform relatively well, though inferior to conventional glass slides. The rates of discrepancy did not appear to be affected by years of experience. There are some limitations to the design of this study which may affect the overall concordance.
| From Multiplex Immunostaining to Tissue Cytometry: Development of Biologically-Guided Segmentation Strategies for Quantitative Image Analysis|| |
Mark Zaidi1,2, Fred C. Fu1, Veronica Cojocari1, Jade Bilkey1, Trevor D. McKee1, 3, 4
1Computational Pathology and Image Analysis Core, STTARR Innovation Centre, University Health Network, Toronto, Canada, 2Department of Medical Biophysics, University of Toronto, Toronto, Canada, 3Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada, 4Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada. E-mail: [email protected]
Background: The advent of highly multiplexed methods for antibody staining, including both imaging mass cytometry and multiplex fluorescence, allows for a precise understanding of relationships between multiple protein biomarkers. These methods have advanced the capability to perform “tissue cytometry” – flow cytometry-like analysis on tissue sections. However, a major challenge for effective implementation of cytometric analysis on multiplex images remains the availability of robust segmentation algorithms to break the image into subregions representing single cells. Methods: Our core facility, working in close collaboration with several laboratories, has developed image segmentation pipelines for immune cell segmentation that take biological domain knowledge into account to assist with segmentation at the individual cell scale. Utilizing the background immunological understanding of cell surface markers obtained from flow cytometric analysis, the segmentation is guided in such a way as to avoid the appearance of “biologically impossible” marker combinations. Results and Conclusions: Biologically-guided segmentation has been successfully applied to a number of tissues stained with multiplex imaging mass cytometry approaches. Validation against manual annotations reveals an improvement in segmentation accuracy over traditional image analysis methodologies. Data visualization techniques adopted from flow cytometric analysis, including scatterplots alongside manually labeled cell populations, allow for accurate determination of image-based thresholds for cell identification. This modified strategy allows for a more thorough interrogation of the resulting dataset, assessing not only distinct cellular populations but also spatial relationships between distinct populations, providing a robust methodology to assess immune cell subpopulations in a spatial context.
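A simplified sketch of how marker-combination constraints can guide per-cell phenotype calls. The rule set here is illustrative only (e.g., treating CD3/CD68 co-positivity as implausible, since T-cell and macrophage lineage markers should not co-occur); the actual pipeline's rules and resolution strategy may differ:

```python
# Hypothetical exclusion rules: marker pairs treated as biologically implausible
IMPOSSIBLE = {frozenset({"CD3", "CD68"}), frozenset({"CD4", "CD8"})}

def resolve_phenotype(marker_intensities, threshold=0.5):
    """Threshold per-cell marker intensities into a positive set, then
    drop the weaker marker of any 'biologically impossible' pair, so a
    segmentation spill-over cannot produce an implausible phenotype."""
    positive = {m for m, v in marker_intensities.items() if v >= threshold}
    for pair in IMPOSSIBLE:
        if pair <= positive:
            weaker = min(pair, key=lambda m: marker_intensities[m])
            positive.discard(weaker)
    return positive

# Spill-over CD68 signal (0.6) is discarded in favor of the stronger CD3 call
phenotype = resolve_phenotype({"CD3": 0.9, "CD68": 0.6, "CD8": 0.7})
```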
| Spatial Heterogeneity of CD3, CD4, CD8, CD20, and FoxP3 Immune Markers in Triple-Negative Breast Cancer Analyzed using Digital Pathology|| |
Haoyang Mi1, Chang Gong1, Jeremias Sulam1,2, Elana J. Fertig1,3, Alexander S. Szalay4,5, Elizabeth M. Jaffee3,6, Vered Stearns3, Leisha A. Emens7, Ashley M. Cimino-Mathews3,8, Aleksander S. Popel1,3
1Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA, 2Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, MD, USA, 3Department of Oncology, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, MD, USA, 4Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD, USA, 5Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA, 6The Bloomberg~Kimmel Institute for Cancer Immunotherapy, Johns Hopkins School of Medicine, Baltimore, MD, United States, 7Department of Medicine/Hematology-Oncology, University of Pittsburgh Medical Center, Hillman Cancer Center, Pittsburgh, PA, USA, 8Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
E-mail: [email protected]
Background: Recent advancements in digital pathology facilitate understanding of the role of the tumor microenvironment (TME) in governing triple-negative breast cancer (TNBC) progression. Assisted by image analysis and spatial statistics, key information about the spatial heterogeneity within the TME can be extracted. These analyses have been applied to CD8+ T-cells; however, quantitative analyses of other important markers and their correlations have not until now been possible. Methods: In this study, we developed a computational pathology pipeline for characterizing the spatial arrangements of five immune markers (CD3, CD4, CD8, CD20, and FoxP3), and its functionality was tested on 25 whole-slide images from patients with TNBC. To start, immune marker-labeled cells are segmented and colocalized. Then, this information is converted to point patterns. Subsequently, invasive front (IF), central tumor (CT), and normal tissue (N) are annotated. At this point, intratumoral heterogeneity can be analyzed. The pipeline is then repeated for all specimens to capture intertumoral heterogeneity. Results: In this study, we observe both intra- and inter-tumoral heterogeneity across all five immune markers for all specimens. Compared to CT and N, the IF tends to be associated with higher densities of immune cells, overall larger variations in spatial model-fitting parameters, and higher densities of cell clusters and hotspots. Conclusions: Results suggest a distinct role of the IF in the tumor immuno-architecture. Although the sample size in the study is limited, the computational workflow could be readily reproduced and scaled due to its automatic nature. Importantly, the value of the workflow also lies in its potential to be linked to treatment outcomes, to the identification of predictive biomarkers for responders/nonresponders, and to its application to parameterization and validation of computational immuno-oncology models.
| Do Immunohistochemistry Results from Single Two-Dimensional Histological Sections Reflect Biomarker Density in Three-Dimensional Tumor Specimens in Mouse Tumor Models?|| |
Sepideh Mojtahedzadeh1, Alan Opsahl1, Dingzhou Li2, Joan Aguilar1, Timothy Coskran1, Shawn P. O'Neil1, Sripad Ram1
1Global Pathology and Investigative Toxicology, Drug Safety R&D, Pfizer, Inc., La Jolla, USA. 2Drug Safety Statistics, Drug Safety R&D, Pfizer, Inc., La Jolla, USA.
E-mail: [email protected]
Background: Digital image analysis (DIA) of immunohistochemistry (IHC) assays is routinely performed to quantify immune cell infiltration in the tumor microenvironment for immuno-oncology projects. A retrospective analysis of our internal IHC-DIA data revealed significant variability in cell density estimates for nine immune cell biomarkers. To identify the sources of variability and to facilitate determination of group sizes in treatment arms, we performed a series of experiments to evaluate the distribution of cells expressing nine immune cell biomarkers in four different murine tumor models. Methods: We designed a study in which IHC was performed for nine immune cell biomarkers using four tumor models (CT26, EMT6, KPC [ortho], KPC [GEM]). IHC was performed on a Leica Bond-III autostainer. Slides were scanned on an Aperio AT2 whole-slide scanner, and images were analyzed using custom algorithms created in Visiopharm software. Results: We assessed (1) IHC assay reproducibility by performing the IHC protocol on 15 serial sections, (2) intra-animal variability by performing the IHC protocol on 10 step sections (100 microns/step), and (3) inter-animal variability by comparing DIA results among different animals within each tumor model. The results of power analysis on the group size and the coefficient of variation of the IHC-DIA data will also be presented. Conclusion: Our results reveal that inter-animal variability is the main source of variation for immune cell markers in all four tumor models. This study provides a comprehensive analysis of immunologic heterogeneity in murine tumor models. We anticipate that these results will provide guidelines for designing preclinical studies for drug discovery and development.
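As a hedged illustration of how a coefficient of variation feeds into group-size determination, a normal-approximation power calculation for a two-arm comparison might look like the following (not the authors' statistical code, and the numbers below are hypothetical):

```python
import math
from statistics import NormalDist

def group_size(cv, rel_diff, alpha=0.05, power=0.8):
    """Approximate animals per arm needed to detect a relative difference
    rel_diff in mean cell density, given a coefficient of variation cv,
    using the normal approximation to the two-sample t-test:

        n = 2 * ((z_{1-alpha/2} + z_{power}) * cv / rel_diff)^2
    """
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * cv / rel_diff) ** 2
    return math.ceil(n)

# Hypothetical: CV of 50%, aiming to detect a 50% difference in means
n_per_arm = group_size(cv=0.5, rel_diff=0.5)
```

High inter-animal CVs inflate the required group size quadratically, which is why identifying the dominant source of variability matters for study design.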
| Deep-Learning Algorithm for Biomarker Classification on Multiplexed Immunofluorescence Images using Repel Coding|| |
Satarupa Mukherjee1, Yao Nie1, Jim Martin1
1Imaging and Algorithms, Digital Pathology, R&D, Roche Tissue Diagnostics, Santa Clara, California, USA. E-mail: [email protected]
The study of the heterogeneous tumor microenvironment has significant clinical implications. New assays to guide immunotherapy require accurate characterization of interactions between multiple phenotypes (e.g., tumor cells and different types of immune cells) in the tumor microenvironment. Multiplexed immunofluorescence (IF) enables us to understand the complexity and heterogeneity of the immune context of tumor microenvironments and its influence on response to immunotherapies. In this project, we report the development of an automated digital pathology algorithm for biomarker classification using a deep-learning approach. The biomarker used is PD1, which is primarily expressed on lymphocytes. We develop an end-to-end deep-learning algorithm to perform both cell detection and classification using only point labels. To better localize cells, we use repel coding, an enhanced center coding of cells. To the best of our knowledge, this is the first time repel coding has been used for deep learning-based classification of a biomarker in IF images. We have validated our algorithm on multiple IF-stained tissue types, including gastric, pancreatic, lung, breast, colon, and bladder tissue. The algorithm achieves more than 85% accuracy on all tissue types.
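A simplified, assumed formulation of repel coding (the exact encoding used in this work may differ): the proximity code to the nearest cell center is damped near a second center, so the target map keeps adjacent cells separable:

```python
import math

def repel_code(pixel, centers, radius=8.0):
    """Value of a repel-coded center map at one pixel.

    Starts from a linear proximity code to the nearest cell center and
    multiplies in a repel term d2/(d1+d2) based on the second-nearest
    center, which drops toward 0.5 midway between two touching cells.
    This is a simplified illustration, not the paper's exact encoding.
    """
    dists = sorted(math.dist(pixel, c) for c in centers)
    d1 = dists[0]
    value = max(0.0, 1.0 - d1 / radius)  # proximity to nearest center
    if len(dists) > 1:
        d2 = dists[1]
        value *= d2 / (d1 + d2)  # repel term suppresses inter-cell ridge
    return value

# At a cell center the code is maximal; midway between two cells it is damped
peak = repel_code((0, 0), [(0, 0), (10, 0)])
valley = repel_code((5, 0), [(0, 0), (10, 0)])
```

A network regressing such a map produces sharper, better-separated peaks at cell centers than plain proximity coding.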
| Visualization of Artificial Intelligence Model for Classification of Colorectal Cancer|| |
Aniruddha Mundhada1, Anurag Mundhada2, Gokul Kripesh1, Lawrence DCruze1, Sandhya Sundaram1
1Department of Pathology, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu, India, 2Insane AI, Bangalore, Karnataka, India. E-mail: [email protected]
Background: There is a growing need for interpretability of neural networks to humans. This is especially important in pathology, as a diagnosis directly impacts lives, and for regulatory and compliance reasons. Commonly used feed-forward neural networks work by progressively refining representations of an input image, using hundreds of layers. A final layer makes the classification decision based on such a deep representation. Recent techniques developed for understanding why a model makes a certain decision involve visualization of the intermediate representations learned by the network. Such an investigation can elucidate where a model is focusing, and what features are being observed, when it is given a test image. The aim of this study is to visualize the inner workings of a neural network trained for classification of colorectal cancer images into tumor and nontumor classes. Methods: We train a classifier on a colorectal histology dataset for colorectal cancer cell classification. We present quantitative accuracy metrics for classification. Further, we examine the visualizations of a convolutional neural network (VGG16) at several intermediate layers using the Grad-CAM technique. We separately look at cases when the model has made the right decision and when it is wrong about the class of the input image, and we highlight the features learned by the model as important for making decisions. Results: There are eight classes of tissue visualized, namely tumor, stroma, complex, lymphocyte, debris, mucosa, adipose, and empty. The precision, recall, and F1 score for all classes are reported along with the overall accuracy and averages (mean and weighted). Conclusions: We can observe the areas of interest on which the convolutional neural network bases its decision. Further, accuracy suffers in areas adjacent to complex stroma and adipose cells, making it clinically relevant for the pathologist to verify the diagnosis in cases with numerous adipocytes.
This serves as a useful adjunct screening tool to reduce the time spent searching for areas of interest on a slide. Further research will examine whether the features learned by the network are clinically relevant.
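The Grad-CAM computation referenced above reduces to weighting each convolutional channel by its global-average-pooled gradient and applying a ReLU; a dependency-free sketch of that core step (not the study's code, which would obtain the activations and gradients from VGG16):

```python
def grad_cam(activations, gradients):
    """Grad-CAM heat map from one conv layer's activations and the
    gradients of the class score with respect to those activations.

    activations, gradients: [channels][height][width] nested lists.
    Returns a [height][width] map highlighting class-relevant regions.
    """
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for act, grad in zip(activations, gradients):
        # channel weight = global-average-pooled gradient for that channel
        weight = sum(sum(row) for row in grad) / (h * w)
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * act[i][j]
    # ReLU: keep only features with positive influence on the class score
    return [[max(0.0, v) for v in row] for row in cam]

# Toy single-channel example: uniform positive gradients pass activations through
heat = grad_cam([[[1.0, 2.0], [3.0, 4.0]]], [[[1.0, 1.0], [1.0, 1.0]]])
```

The resulting map is typically upsampled and overlaid on the histology tile to show where the network looked.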
| A Deep-Learning Convolutional Neural Network Can Differentiate between Most Clinically Relevant Patterns of Colitis|| |
Kaitlyn J. Nielson1, Andrew Sanchez2, Fred A. Schultz1, Jordan Redemann1, David R. Martin1, Joshua A. Hanson1
1Department of Pathology, University of New Mexico, Albuquerque, New Mexico, USA, 2University of New Mexico School of Medicine, Albuquerque, New Mexico, USA. E-mail: [email protected]
Background: There are no studies evaluating the ability of a convolutional neural network (CNN) to recognize various patterns of colitis. We investigate whether a CNN can differentiate between different types of colitides, including active colitis not otherwise specified (AC NOS), microscopic colitis (MC), chronic active colitis (CAC, AKA inflammatory bowel disease pattern), and ischemic colitis (IC), as well as histologically normal colon (NC). Methods: A total of 312 cases were reviewed by two gastrointestinal (GI) pathologists to establish gold standard consensus diagnoses. The cases were then scanned and analyzed by HALO-AI (Indica Labs, Albuquerque, NM, USA) via randomizing 198 (63%) to a training set (AC NOS = 46, CAC = 52, IC = 30, MC = 40, and NC = 30) and 114 (37%) to a test set (AC NOS = 14, CAC = 34, IC = 20, MC = 26, and NC = 20). A HALO-AI correct area distribution cutoff of ≥50% was required to credit the CNN with a correct diagnosis. Results: Overall, the CNN results were 87% concordant with the gold standard diagnoses (99/114). CNN accuracy rates for each diagnostic category were as follows: AC NOS = 43% (6/14), CAC = 91% (31/34), IC = 95% (19/20), MC = 88% (23/26), and NC = 100% (20/20). Conclusions: A CNN can differentiate most clinically relevant patterns of colitis as well as NC histology. Although it struggles with the pattern of AC NOS (which could limit its usefulness in clinical practice), AC NOS is not often biopsied and is usually diagnosed clinically. Since a CNN can most accurately identify NC versus colitis in general, it may best be implemented as a screening tool to improve efficiency for large GI pathology groups.
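The ≥50% correct-area-distribution rule for crediting the CNN with a diagnosis can be sketched as follows (illustrative, not the HALO-AI implementation):

```python
def cnn_diagnosis(area_fractions, cutoff=0.5):
    """Credit the CNN with a slide-level diagnosis only when the top
    class covers at least `cutoff` of the classified slide area;
    otherwise return None (no creditable diagnosis).

    area_fractions: dict mapping diagnosis label -> fraction of area.
    """
    top = max(area_fractions, key=area_fractions.get)
    return top if area_fractions[top] >= cutoff else None

# A slide classified 60% CAC / 40% NC is credited as CAC;
# a three-way split with no majority class is not credited
call = cnn_diagnosis({"CAC": 0.6, "NC": 0.4})
no_call = cnn_diagnosis({"CAC": 0.4, "NC": 0.35, "MC": 0.25})
```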
| Image Analysis in Oncology Research: an Institutional Experience in How to Overcome Obstacles and Obtain Reliable Data|| |
Antonio C. Ortiz1, Bruna V. Jardim-Perassi2, Michal R. Tomaszewski2, Joseph O. Johnson1, Robert J. Gillies2, Marilyn M. Bui3
1Analytic Microscopy Core, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA, 2Department of Cancer Physiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA, 3Department of Anatomic Pathology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, USA. E-mail: [email protected]
Image analysis is motivated by the desire to extract relevant biological data from digital images to answer underlying questions about the pathogenesis of disease and other biological phenomena. With the advent of automated whole-slide scanning came the demand for image analysis software equipped with machine learning and artificial intelligence. This brings new issues and obstacles in image analysis and for those who perform it. Here, we present common difficulties in image acquisition and analysis on a cohort of tissue samples processed through the Analytic Microscopy Core at Moffitt Cancer Center. We compare pros and cons between two types of image analysis software on images acquired on an Aperio AT2 whole-slide scanner. We provide a realistic workflow which addresses and circumvents common bottlenecks in data exploitation from pre- to post-acquisition. In addition, we offer strategies to overcome complications such as stitching artifacts from image acquisition, tissue heterogeneity and damage, variability in stain penetrance, and border definition during segmentation, which all result in variation between samples. Being mindful of these obstacles and verifying data postprocessing can lead to robust, reliable, reproducible results and biologically relevant data.
| Grundium versus Glass: A Look at the Use of the Grundium Ocus as a Telepathology Tool for Surgical Pathology and Cytopathology|| |
Megan Zilla1, Sara E. Monaco2, Juan Xing1, Douglas J. Hartman1, Michael Landau1, Liron Pantanowitz3
1Department of Pathology, University of Pittsburgh Medical Center, PA, USA, 2Department of Pathology, Geisinger Medical Center, PA, USA, 3Department of Pathology, University of Michigan, MI, USA.
E-mail: [email protected]
Background: The Grundium Ocus is a compact portable digital microscope and slide scanner ideal for desktop use [Figure 1]. It permits whole-slide imaging (WSI) as well as robotic microscopy. We compared the performance of this device to a conventional light microscope with glass slides for second opinion teleconsultation and telecytology. Methods: Ocus was compared to glass slide reads with a 2-week washout period for three use cases: (1) second opinion teleconsultation of gastrointestinal pathology cases (WSI), (2) second opinion teleconsultation of cytology cases (WSI), and (3) rapid on-site evaluation (ROSE) via telecytology (robotic). For each use case, two pathologists evaluated 20 slides (1 representative slide/case). Cytology cases included various preparations (smears and liquid-based specimens). Diagnoses and time spent on evaluating each case were recorded. Results: The Grundium device was feasible for WSI and robotic microscopy using a variety of specimen preparations. There was no significant difference between Grundium and glass slides for permanent histological sections. For cytology cases viewed by WSI, glass slides had significantly higher specimen adequacy rates (80% Ocus, 95% glass), malignancy rate (62.5% Ocus, 40% glass), and accuracy with final diagnosis (77.5% Ocus, 100% glass) and were quicker to evaluate. For ROSE by robotic microscopy, glass slides had higher adequacy rates and were significantly quicker to review (average: 3.4 min Ocus, 1.25 min glass). Conclusions: The Grundium Ocus device allowed hybrid WSI and robotic microscopy which thereby supported different telepathology use cases. The device is suitable for reviewing different histopathology slides. While glass slides were preferred for diagnostic interpretation and speed of use during ROSE, robotic microscopy permitted cytology cases to be remotely interpreted via telecytology.
|Figure 1: Grundium Ocus® microscope scanner. This desktop hybrid device with one slide capacity has a small footprint (3.5 kg, 18 cm × 18 cm × 19 cm)|
| Morphological Features Computed using Deep Learning for an Annotated Digital Diffuse Large B-Cell Lymphoma Image Set|| |
Damir Vrabac1, Akshay Smit1, Rebecca Rojansky2, Yasodha Natkunam2, Ranjana H. Advani2, Andrew Y. Ng1, Sebastian Fernandez-Pol2, Pranav Rajpurkar1
1Department of Computer Science, Stanford University, Stanford, CA, USA, 2Department of Pathology, Stanford University, Stanford, CA, USA.
E-mail: [email protected]
Diffuse large B-cell lymphoma (DLBCL) is the most common non-Hodgkin's lymphoma worldwide. DLBCL is fatal without treatment, but early and appropriate therapy can cure up to 70% of cases. The current best prognostic classification, the National Comprehensive Cancer Network International Prognostic Index, is insufficient to guide therapeutic decision-making for individual patients. Although histologically DLBCL shows a variety of morphologies, no morphologic features have been consistently demonstrated to correlate with prognosis. We present a morphologic analysis of histological sections from 209 DLBCL cases at Stanford Hospital with associated clinical and cytogenetic data. Duplicate tissue core sections were arranged in tissue microarrays (TMAs), and replicate sections were stained with hematoxylin and eosin (H and E) and immunohistochemical stains for BCL2, BCL6, MYC, CD10, and MUM1. The TMAs are accompanied by pathologist-annotated regions of interest (ROIs) that identify areas of tissue representative of DLBCL. We extracted patches from the H and E-stained ROIs and used a deep-learning model to segment tumor nuclei in the patches. We then computed several geometric features for each segmented nucleus and fit a Cox proportional hazards model to demonstrate the utility of these geometric features in predicting prognostic outcome. We found that the Cox model using only geometric features achieved a C-index (95% confidence interval) of 0.635 (0.574, 0.691). Although the prognostic significance of cell morphology has historically been unclear, our finding suggests that geometric features computed from tumor nuclei stained to show cell morphology can provide a prognostic marker, which can be validated in additional cohorts and prospectively.
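The concordance measure reported above (Harrell's C-index) can be computed directly from predicted risk scores and follow-up data. The sketch below is illustrative only, not the authors' code; the toy survival times and risk scores are invented to demonstrate the definition.

```python
# Illustrative sketch (not the authors' implementation): Harrell's C-index,
# the concordance statistic reported for the Cox model above. A pair of
# patients is comparable when the patient with the shorter follow-up time
# had an observed event; the pair is concordant when that patient also has
# the higher predicted risk.

def concordance_index(times, events, risk_scores):
    """Fraction of comparable pairs where higher predicted risk matches
    the earlier observed event (ties in risk count as 0.5)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # the earlier time must be an observed event
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: higher risk -> shorter survival gives perfect concordance.
times = [2.0, 5.0, 8.0, 11.0]
events = [1, 1, 1, 0]          # last patient censored
risks = [0.9, 0.7, 0.4, 0.1]
print(concordance_index(times, events, risks))  # -> 1.0
```

A C-index of 0.5 corresponds to random risk ordering, which is why the reported 0.635 indicates modest but real prognostic signal.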
| Deployment of a Multi-Tissue Artificial Intelligence-Based Quality Control System in Routine Clinical Workflow|| |
Judith Sandbank1,2, Ira Krasnitsky1, Issar Yazbin1, Inbal Gross1, Ronen Heled1, Arthur Rozenberg1,2, Ronen Cypis1, Rachel Mikulinsky1, Einav Shnaidman1,2, Geraldine Sebag1
1IBEX Medical Analytics Ltd., Tel Aviv, Israel, 2Institute of Pathology, Maccabi Healthcare Services, Rehovot, Israel.
E-mail: [email protected]
Background: Maccabi Healthcare Services, a large healthcare provider with a centralized pathology institute, handles some 140,000 histology accessions per year, of which approximately 700 are prostate core needle biopsies and 6850 are breast biopsies. The growing shortage of pathologists, alongside increased cancer incidence, has driven Maccabi to search for technologies to support their pathologists in their diagnostic work. IBEX Medical Analytics develops AI-based diagnostic solutions for pathology, including Galen™ Prostate CE-IVD, which detects and grades prostate core needle biopsies, and Galen Breast, which detects invasive and in situ carcinomas in breast biopsies. Methods: The underlying algorithms utilize state-of-the-art artificial intelligence (AI) and machine-learning techniques and were trained on many thousands of image samples. These images were obtained from slides from multiple laboratories and geographies and manually annotated by senior pathologists. Results: Both algorithms were assessed for performance on independent data from various laboratories and demonstrated high specificity and sensitivity, including identification of cancers missed by pathologists. Maccabi has deployed both Galen Prostate and Galen Breast as a quality control (QC) system on all new prostate and breast biopsies entering the laboratory. The system raises an alert whenever it encounters a discrepancy between the automated analysis and the original diagnosis, prompting a second human review. Conclusions: The importance of accurate diagnosis in prostate and breast biopsies, together with the growing shortage of pathologists, makes a QC system such as this extremely useful for diagnostic accuracy and safety. To the best of our knowledge, these are the first AI-based prostate and breast diagnostic systems running in a live clinical setting. In this talk, we will discuss the laboratory workflow and the performance of the algorithms.
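The discrepancy-alert logic described above can be sketched in a few lines. This is a hedged illustration of the workflow concept only; the case fields, category names, and accession IDs are hypothetical and not the Galen API.

```python
# Hedged sketch of the QC workflow described above: compare the algorithmic
# call with the pathologist's original diagnosis and queue discrepant cases
# for a second human review. All field names and values are illustrative.

def qc_review_queue(cases):
    """Return accession IDs whose AI call disagrees with the sign-out."""
    alerts = []
    for case in cases:
        if case["ai_call"] != case["pathologist_call"]:
            alerts.append(case["accession"])
    return alerts

cases = [
    {"accession": "S21-001", "ai_call": "cancer", "pathologist_call": "cancer"},
    {"accession": "S21-002", "ai_call": "cancer", "pathologist_call": "benign"},
    {"accession": "S21-003", "ai_call": "benign", "pathologist_call": "benign"},
]
print(qc_review_queue(cases))  # -> ['S21-002']
```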
| First Deployment of an Artificial Intelligence-Based Solution for Cancer Detection in a US Pathology Laboratory|| |
Juan C. Santa Rosario1, Roei Harduf2, Yuval Raz2, Gev Decktor2, Vladi Bar-On2, Geraldine Sebag2, Joseph Mossel2
1CorePlus Pathology Lab, Carolina, Puerto Rico, USA, 2Ibex Medical Analytics Ltd., Tel Aviv, Israel.
E-mail: [email protected]
Background: CorePlus Servicios Clínicos y Patológicos in Puerto Rico, a leading pathology and clinical laboratory, handles 53,000 accessions annually, of which ~6.4% are prostate core needle biopsies (PCNBs), with ~46% diagnosed with cancer. Methods: Ibex Medical Analytics developed Galen™ Prostate, an artificial intelligence (AI)-based solution, to detect and grade prostate cancer in addition to other cell types and features such as perineural invasion, high-grade prostatic intraepithelial neoplasia, and inflammation. The algorithm was trained on >1 million image samples from multiple institutes, manually annotated by senior pathologists. Galen Prostate is the first AI-based solution for cancer detection used in routine clinical practice, is deployed in pathology laboratories worldwide, and has identified missed cancers both retrospectively in France and prospectively in Israel. Results: A total of 101 retrospective prostate cases from CorePlus comprising 1279 hematoxylin and eosin slides were analyzed by the algorithm, demonstrating high specificity and sensitivity for cancer detection (96.9% and 96.5%, respectively) and an area under the curve of 0.901 for differentiating low-grade (Gleason 6) from high-grade (Gleason 7+) cancer. The diagnosis of 11 slides was revised following the algorithmic results. After validation, CorePlus deployed Galen Prostate as a quality control (QC) system on all new PCNBs. The system raises an alert for discrepancies between the algorithmic analysis and the pathologist, prompting a second human opinion. Conclusions: An AI-based QC system is extremely useful for diagnostic accuracy and safety. To the best of our knowledge, this is the first AI-based digital pathology diagnostic system deployed in a US laboratory and used in routine clinical practice.
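The sensitivity and specificity figures reported above have standard confusion-matrix definitions, sketched below. The counts used are hypothetical, chosen only so the formulas reproduce values close to those reported; they are not the study's actual counts.

```python
# Illustrative definitions of the two metrics reported above, computed from
# hypothetical confusion-matrix counts (NOT the study's real counts).

def sensitivity(tp, fn):
    """True-positive rate: detected cancers / all cancer slides."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correctly cleared slides / all benign slides."""
    return tn / (tn + fp)

# Hypothetical counts chosen only to illustrate the formulas.
tp, fn, tn, fp = 193, 7, 970, 31
print(round(sensitivity(tp, fn), 3))  # -> 0.965
print(round(specificity(tn, fp), 3))  # -> 0.969
```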
| Digital Pathology Tools and the COVID-19 Pandemic: Insights and Practices from an Academic Institution|| |
Johanna Savage1, Giovanni Lujan1, Wendy Frankel1, Anil Parwani1, Arwa Shana'ah1, Martha Yearsley1, Diana Thomas1, Patricia Allenby1, José Otero1, Abberly Lott Limbach1, Xiaoyan Cui1, Rachel Scarl1, Tanner Hardy1, Jose Plaza1, Jesse Sheldon1, Bonnie Whitaker1, Zaibo Li1
1Department of Pathology, The Ohio State University, Columbus, OH, USA.
E-mail: [email protected]
Background: In the last 20 years, digital pathology has emerged as an important tool for pathologists and has been critical for the continuation of clinical and scholarly work during the COVID-19 pandemic. Before the pandemic, we had successfully deployed whole-slide imaging (WSI) and other digital pathology tools for daily sign-out, education, and research at our institution. As such, we were well positioned to adapt to the changes in practice imposed by the pandemic. Methods: We conducted a survey of our pathologists and trainees to gather information about their pre- and post-COVID-19 use of digital pathology tools. Results of the survey were analyzed, and representatives from each subspecialty commented on unique aspects of their workflow. Results and Conclusions: Before the COVID-19 pandemic, utilization of digital pathology tools was variable among pathologists and trainees. The most significant change occurred in the adoption of digital pathology for faculty–trainee sign-out and educational conferences. Before the pandemic, most faculty–trainee interactions were in person, utilizing glass slides and a multi-headed microscope. The pandemic, however, suspended all in-person sign-out and resulted in greater use of digital pathology tools. Survey results demonstrate that both pathologists and trainees were satisfied with remote sign-out during this time, but most preferred returning to in-person interactions once restrictions were lifted. With the CMS waiver for off-site primary sign-out, some faculty utilized digital pathology tools from home. While most favored permanently allowing off-site primary sign-out, most indicated that they would use it sparingly or in unique circumstances once activities returned to normal.
| Deep Learning of Attention-Guided Multimodal Histopathology Search on Social Media|| |
Andrew J. Schaumberg1, 2, 3, Celina Stayerman4, Stephen Yip5, Sanjay Mukhopadhyay6, Laura G. Pastrián7, Mario Prieto Pozuelo8, Betul Duygu Sener9, Srinivas Rao Annavarapu10, Aurélien Morini11, Karra A. Jones12, Kathia Rosado-Orozco13, Carlos Miguel Ruiz14, Hongyu Yang15, Yale Rosen16, Olaleke O. Folaranmi17, Jerad M. Gardner18, John Gross19, Dauda E. Suleiman20, Fujisawa Takashi21, Nicole Riddle22, Mark Ong23, Matthew Cecchini24, Jean-Baptiste Gibier25, Lara Pijuan26, Ming Y. Lu1,2, Thomas J. Fuchs*27, Mariam Aly*28,29, Faisal Mahmood*1, 2, 3, 30
1Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA, 2Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA, 3Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA, 4TechniPath Laboratory, San Pedro Sula, Honduras, 5Department of Pathology, BC Cancer, British Columbia, Canada, 6Department of Pathology, Cleveland Clinic, Cleveland, OH, USA, 7Department of Pathology, University Hospital La Paz, Madrid, Spain, 8Therapeutic Target Laboratory, University Hospital HM Sanchinarro, Madrid, Spain, 9Department of Pathology, Konya Training and Research Hospital, Konya, Turkey, 10Department of Cellular Pathology, Royal Victoria Infirmary, Newcastle upon Tyne, England, UK, 11Faculty of Medicine of Creteil, University Paris East Creteil, France, 12Department of Pathology, University of Iowa, IA, USA, 13HRP Labs, San Juan, Puerto Rico, USA, 14Department of Pathology, University Hospital of Álava, Vitoria, Spain, 15Department of Pathology, St Vincent Evansville Hospital, Evansville, IN, USA, 16Department of Pathology, SUNY Downstate Medical Center, NY, USA, 17Department of Pathology, University of Ilorin Teaching Hospital, Nigeria, 18Department of Laboratory Medicine, Geisinger Medical Center, Danville, PA, USA, 19Department of Pathology, Bone and Soft Tissue and Surgical Pathology, Johns Hopkins University, Baltimore, MD, USA, 20Department of Histopathology, Abubakar Tafawa Balewa University Teaching Hospital, Bauchi, Nigeria, 21Department of Diagnostic Pathology, Sapporo Teishinkai Hospital, Japan, 22Department of Pathology, Ruffolo, Hooper, and Associates, USF Health, Tampa, FL, USA, 23Department of Histopathology, St. Thomas' Hospital, London, United Kingdom, 24Department of Pathology and Laboratory Medicine, London Health Sciences Centre, London, ON, Canada, 25Department of Pathology, CHU de Lille, France, 26Pathology Department, Hospital del Mar, Barcelona, Spain, 27Pathology Department, Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, NY, USA, 28Department of Psychology, Columbia University, NY, USA, 29Affiliate Member of the Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA, 30Cancer Data Science, Dana-Farber Cancer Institute, Boston, MA, USA.
E-mail: schaumberg.andrew+[email protected]
A pathologist's process of making a diagnosis is focused, holistic, and robust. A pathologist fully inspects all slides, carefully attends to informative regions within the slides, jointly considers all informative regions as context, and integrates available complementary clinical information or modalities. This process has been incredibly challenging to approximate with artificial intelligence (AI). It is an important goal because an AI that can accurately predict disease state and connect pathologists with similar patient cases has the promise of broadening consensus in patient care worldwide. Here, we propose an AI method that attempts to model aspects of a pathologist's diagnostic workflow. The AI considers one or more photomicrographs to describe a case, attends to regions of interest within the photomicrograph(s), jointly considers all regions of interest for an overall prediction, and includes possibly-missing covariates such as tissue type. Here, we refine our prior deep-learning technique that used photomicrographs shared on social media to predict disease state (nonneoplastic, benign, or malignant). Like the simplest baselines in our prior work, our proposed method uses 2412 hand-engineered features to represent a 224 × 224 region of interest in a photomicrograph, with additional inputs for ten tissue types and a marker-mention covariate [Figure 1]. We sample regions of interest in a 5 × 5 grid throughout a photomicrograph and feed these to a deep neural network. Each of these regions is fed through a fully-connected layer, and an attention-weighted sum is calculated across the regions, where all attention weights are normalized to sum to one. This attention process selects for informative parts of the slide that contribute to the disease state (e.g., malignancy), while selecting against less informative parts of the slide that do not contribute (e.g., slide background, normal tissue, pen, and emojis). 
Subsequent fully-connected layers integrate covariates, and a final prediction of disease state is made. Compared to our prior work, our new deep-learning method approaches similar validation accuracy (54.66% vs. 60.16% previously) [Figure 2]. Following an 80% training, 10% validation, and 10% test set split of our data, we report a test set accuracy of 52.63% (without covariates, this 52.63% drops to 43.94%). These accuracies are better than chance (41.01% ± 1.50% accuracy, random forest permutation test, 10-fold cross-validation). The area under the receiver operating characteristic curve is 0.6722, which is weak, but better than chance (0.5003 ± 0.0149, permutation test). Compared to our prior work, our proposed deep-learning method is a strikingly simpler baseline in that we use only 2412 hand-engineered features to represent an image, rather than convolutional features from a ResNet-50. In the future, we expect to improve performance further by using ResNet-50 features, in a manner similar to our prior work. As a search baseline, we report precision@k = 1 of 0.4461 [Figure 3]. This was better than a simple L1 norm of the 2412 features (0.4143 ± 0.0071, 10-fold cross-validation), but there is room to improve. In contrast, our prior work reported precision@k = 1 of 0.7618 ± 0.0018 (via leave-one-pathologist-out cross-validation), but this involved a complex blend of features, deep neural network ensembles, and a random forest. In the future, we expect our proposed attention-based methods to become an important component of a more complex histopathology search and disease prediction system. In conclusion, our simple attention-based deep-learning method promises to become an important component of more sophisticated methods for disease prediction and search on social media, such as those in our prior work. We believe that this is the first use of attention-guided deep learning on histopathology data from social media.
In the future, we expect this to improve the utility of “pathobot,” our AI-driven histopathology search tool on social media.
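The attention pooling described above — scoring each region, normalizing the scores to sum to one, and taking the weighted sum of region features — can be sketched in numpy. This is a minimal illustration of the mechanism, not the authors' network: the scoring here is a single linear layer with random weights, and the 5 × 5 grid and feature dimension are stand-ins.

```python
import numpy as np

# Minimal sketch of attention-weighted set pooling: each region's feature
# vector gets a scalar score, scores are softmax-normalized so the attention
# weights sum to one, and the case representation is the weighted sum of
# region features. Dimensions and the linear scorer are illustrative.

def attention_pool(region_features, w):
    """region_features: (n_regions, n_features); w: (n_features,) scorer."""
    scores = region_features @ w                      # one score per region
    scores = scores - scores.max()                    # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # normalize to sum to 1
    pooled = weights @ region_features                # weighted sum of regions
    return pooled, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(25, 8))   # e.g., a 5 x 5 grid of region descriptors
w = rng.normal(size=8)
pooled, weights = attention_pool(feats, w)
print(pooled.shape)  # -> (8,)
```

In training, the scorer's weights are learned jointly with the classifier, so high attention concentrates on regions that drive the disease-state prediction.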
|Figure 1: Our deep-learning network takes N images (1...N) and clinical covariates (e.g., tissue type) as inputs. This network then learns an attention-weighted sum of image inputs as a set representation of a patient case. Like our prior work, the network ultimately learns to predict disease state|
|Figure 2: Deep-learning performance of 54.66% (dotted blue line) validation accuracy approaches our prior work of 60.16% (dotted red line). Solid blue line indicates validation accuracy progression through training|
|Figure 3: Search performance precision@k = 1 on the test set is 0.4461, which is better than chance performance from our prior work (0.3967 ± 0.0044)|
This research was funded in part by BWH Pathology and NIH/NIGMS R35GM138216.
| References|| |
- Schaumberg AJ, Juarez-Nicanor WC, Choudhury SJ, Pastrián LG, Pritt BS, Prieto Pozuelo M, et al. Interpretable multimodal deep learning for real-time pan-tissue pan-disease pathology search on social media. Mod Pathol 2020;33:2169-85.
- He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016. p. 770-8. https://ieeexplore.ieee.org/document/7780459.
- Lu MY, Williamson DFK, Chen TY, Chen RJ, Barbieri M, Mahmood F. Data-efficient and weakly supervised computational pathology on whole-slide images. Nat Biomed Eng 2021;5:555-70. https://www.nature.com/articles/s41551-020-00682-w.
| Active Learning System for Digital Pathology: a New Tool for Interactive Optimization of Classifier Models|| |
Fahime Sheikhzadeh1, Justine Larsen1, Hadley Fellows1, Jim Martin1, Nidhin Murari1, Quincy Wong1, Mehrnoush Khojasteh1
1Imaging and Algorithms, Digital Pathology, Roche Tissue Diagnostics, Santa Clara, CA, USA.
E-mail: [email protected]
Introduction: Training machine-learning models for analyzing digital pathology images requires a large set of manually labeled ground truth, which is tedious and time-consuming to collect. Herein, we develop an active-learning system that guides the user to label samples that contribute the most to the learning performance of the model. This way, the model can be trained using less labeled data. Furthermore, the system enables users to train or optimize classifier models while iteratively collecting the ground truth. Active learning requires several different components including high-resolution image storage and retrieval, data visualization, interactive data correction, and image analysis for algorithm training and inferencing. This framework is available within dPath, a Roche Tissue Diagnostics Computational Pathology Research Platform. Methods: The designed system can be employed by pathologists or imaging scientists to collect the ground truth and train classifier models in an iterative manner. Different conventional machine learning or deep-learning models can be trained using this system. The user starts with labeling cells or regions of the tissue in the images. The labeled examples are used to train a model or optimize a pretrained model. The classification results and corresponding certainty heatmap are visualized. The user then labels more training samples from the most uncertain regions and retrains the classifier. This iteration continues until the model reaches the desired performance and can be deployed. A high-performance image server handles the requests for images from the front-end application. Using Roche's high-performance cluster, results from training and inferencing the model are generated and stored at scale in a database. APIs enable the interaction between the machine learning engine and the database. The dPath platform integrates all of these components and allows the user to train models within a web browser. 
Results: For proof of concept, we successfully trained a model on dPath to detect macrophages in multiplex immunofluorescence-stained tissue images. The active-learning system enabled us to train a classifier with half the labeled data needed by a random sample selection approach. Conclusion: The digital pathology active-learning system enables the end-user to optimize a classifier in an efficient and interactive manner.
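The core of the iterative loop above is selecting the samples the current model is least certain about. The sketch below shows one common uncertainty criterion (predicted probability closest to 0.5); it is an illustration of the idea, not dPath's actual selection rule.

```python
import numpy as np

# Hedged sketch of uncertainty sampling, the selection step in the
# active-learning loop described above: label the k samples whose current
# predicted probability is closest to 0.5 (least confident), retrain, repeat.
# The scoring rule is one common choice, not necessarily dPath's.

def most_uncertain(probabilities, k):
    """Indices of the k samples with predicted probability closest to 0.5."""
    uncertainty = -np.abs(np.asarray(probabilities) - 0.5)
    order = np.argsort(uncertainty)[::-1]  # most uncertain first
    return [int(i) for i in order[:k]]

probs = [0.02, 0.48, 0.91, 0.55, 0.99, 0.33]
print(most_uncertain(probs, 2))  # -> [1, 3]
```

Each iteration, the user labels only these flagged samples, which is why the authors could reach the target performance with roughly half the labels of random selection.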
| Automated Identification of Necrotic Regions in Digital Images of Multiplex Immunofluorescence-Stained Tissue Using Deep Learning|| |
A. Som1, F. Sheikhzadeh1, A. BenTaieb1, J. Baumann2, M. Khojasteh1
1Digital Pathology, Roche Tissue Diagnostics, Santa Clara, California, USA, 2Roche Tissue Diagnostics, Tucson, Arizona, USA.
E-mail: [email protected]
Introduction: Multiplex immunofluorescence (IF) staining of tumor specimens enables simultaneous assessment of different biomarkers in the tumor microenvironment (TME), which provides insights into tumor progression and responses to immunotherapy approaches. Digital pathology algorithms can quantify the expression and co-expression of biomarkers and characterize their spatial relationships in the TME on a cell-by-cell basis. These algorithms usually rely on manual identification of the tumor area in whole-slide images (WSIs) by a pathologist. Necrotic regions are also manually annotated to be excluded from phenotype detection and reporting. Necrosis comprises a spectrum of morphological changes that follow cell death in living tissue. The variation in the size, shape, and number of necrotic regions across WSIs makes their manual annotation an error-prone and time-consuming task. Herein, we developed an automated deep learning–based algorithm for segmentation of necrotic regions in IF-stained WSIs. Methods: The deep-learning architecture we employed is U-Net, a fully convolutional network that is fast to train and performs well on small training datasets. Our dataset consists of WSIs of colorectal tumor sections stained with a 5-plex IF assay for CD3, CD8, CD68, PD-L1, and PanCK, represented by Cy5, R6G, R610, FAM, and DCC fluorophores, with DAPI as a counterstain. The necrotic regions were annotated on WSIs by a pathologist, and ground-truth segmentation masks were generated. To train the model and assess its mask prediction performance, we used the sum of the binary cross-entropy loss and the Dice coefficient. Results: We used 28 WSIs (24 for training and 4 for testing) and selected random regions of interest (ROIs) on them. Then, we extracted image patches of fixed size (512 × 512 pixels) from the ROIs (2126 patches for training and 230 for testing).
We used DAPI and PanCK channels as the input to the U-Net model and ground-truth masks as the corresponding mask labels. From our initial experiments, we were able to achieve acceptable mask segmentation predictions both quantitatively and visually. Conclusion: Automated identification of necrotic regions by deep-learning algorithms enables faster and more reliable assessment of IF-stained WSIs by eliminating the need for manual annotation from pathologists.
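The two quantities named above have standard definitions, sketched below in numpy. This is an illustration of the metrics only; the abstract does not state the exact weighting used to combine them, so no combined loss is claimed here.

```python
import numpy as np

# Illustrative definitions of the two quantities used above for training and
# evaluation: binary cross-entropy between predicted probabilities and the
# ground-truth mask, and the Dice coefficient between two binary masks.

def binary_cross_entropy(pred, target, eps=1e-7):
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return float(np.mean(-(target * np.log(pred)
                           + (1 - target) * np.log(1 - pred))))

def dice_coefficient(pred_mask, target_mask, eps=1e-7):
    """2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    intersection = np.sum(pred_mask * target_mask)
    return float((2 * intersection + eps)
                 / (pred_mask.sum() + target_mask.sum() + eps))

pred = np.array([1.0, 1.0, 0.0, 0.0])
target = np.array([1.0, 0.0, 0.0, 0.0])
print(round(dice_coefficient(pred, target), 3))  # -> 0.667
```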
| Computational Pathology: a Rapidly Growing Field That Requires Training and Interaction between Pathologists and Computer Scientists|| |
M. Tecilla1, F. Romero-Palomo1, C. Gámez Serna2, F. Arcadu2, Y. Cohen3, V. Schumacher1
1pRED Pharmaceutical Sciences, BIOmics and Pathology, Pathology, Digital Pathology and Tissue Technologies, Roche Pharmaceutical Research and Early Development (pRED), Basel, Switzerland, 2pRED Informatics, Safety and Early Development Informatics, Early Development Informatics Basel, Roche Pharmaceutical Research and Early Development (pRED), Basel, Switzerland, 3pRED Informatics, Safety and Early Development Informatics, Safety Informatics, Roche Pharmaceutical Research and Early Development (pRED), Basel, Switzerland.
E-mail: [email protected]
Background: Pathology is switching from an “analog” to a “digital” era. Digitalization is not limited to slide scanning; artificial intelligence (AI)-based tools and computer-aided diagnoses are increasingly used. In this context, pathologists will benefit from adjustments to training, enabling them to understand and use these technologies properly. Here, we report on our preliminary implementation of a computational pathology (CP) training program at Roche. Methods: Veterinary pathology (VP) and computer vision (CV) specialists worked together to improve joint knowledge in these fields. The work was divided into (1) reviewing available resources, (2) identifying and outlining the scientific questions to be addressed with AI, and (3) “hands-on” assessment of the acquired knowledge by means of close VP/CV interaction in algorithm development or testing of commercially available software. Results: The review process led to the creation of an interactive digital AI glossary, with videos and links to external resources. During the “hands-on” experience with use case examples, Python programming skills or in-depth knowledge of commercially available AI tools were acquired. Conclusions: The glossary was a useful resource for the pathology and informatics groups and will be extended to cover Unix and relevant commercial software. In our experience, Unix knowledge is a must-have in CP algorithm development. Python knowledge improves pathologist/computer scientist communication and helps in developing complex tools; nevertheless, commercial tools may offset the need to program basic day-to-day tasks.
| Visual Risk Pattern Digital Images Speed Rapid Triage of Multimorbid Patients to Avert Sequential, Interactive Multiple Critical Organ Failure|| |
Eleanor M. Travers1
1Private Medical Practice, Philadelphia, PA, USA.
E-mail: traver[email protected]
Background: Risk severity-ranked biomarkers are early “trigger signals” of interactive, critical organ/system (COS) dysfunction. The first dysfunctional COS rapidly induces multiple critical organ/system failure syndrome (MOFS), and therapies are often too late if MOFS already exists. Quantified, significantly abnormal (SA) biomarkers signify cell and tissue damage in each COS. Color-coded SA biomarkers are visual alerts for the Domino effect, where one COS failure affects others. Risk pattern severity levels detect cell and tissue damage in the first dysfunctional COS that causes sequential MOFS. Methods: Signal detection theory finds the optimal combination of SA diagnostic biomarkers, quantified and clustered into “risk signal patterns” versus “noise” for each COS. Predictive diagnostic analytics isolates high-risk SA biomarker patterns in multimorbid patients, “trigger signals” of poor outcomes. Graphic reports of color-coded risk patterns are perceived faster by the physician's brain than numbers. Results: [Figure 1] shows the new patient-specific report. Conclusions: A multimorbid patient's risk for MOFS accelerates when pathologic biologic, toxic, physical, or endogenous agents cause vascular endothelial damage and immunologic dysregulation. Laboratory Medicine (LM) is the first to detect SA biomarkers and quantitate (rank) their severity with predictive diagnostic risk analytics. LM defines a patient's COS “point of maximal clinical risk” with color-coded SA biomarkers. Visual diagnostic predictive informatics is a proactive LM patient safety tool for early detection of high-risk patients by translating data into knowledge with visual images.
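One way to picture the color-coded severity ranking described above is a simple mapping from a biomarker's excursion outside its reference range to a color code. The thresholds, color names, and the example analyte and reference range below are all illustrative assumptions, not details from the abstract.

```python
# Loose sketch of the idea above: grade each biomarker's deviation from its
# reference range and map severity to a color code for the visual report.
# Thresholds and colors are hypothetical, not from the abstract.

def severity_color(value, low, high):
    """Green inside the reference range; yellow/red by degree of excursion."""
    if low <= value <= high:
        return "green"
    span = high - low
    excess = (low - value) if value < low else (value - high)
    return "yellow" if excess < 0.5 * span else "red"

# e.g., serum creatinine with a hypothetical 0.6-1.2 mg/dL reference range
print(severity_color(1.0, 0.6, 1.2))  # -> green
print(severity_color(1.4, 0.6, 1.2))  # -> yellow
print(severity_color(2.5, 0.6, 1.2))  # -> red
```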
|Figure 1: Visual risk pattern informatics speeds rapid triage of multimorbid patients|
Click here to view
| References|| |
- Ravi D, Wong C, Deligianni F, Berthelot M, Andreu-Perez J, Lo B, Yang GZ. Deep learning for health informatics. IEEE J Biomed Health Inform 2017;21:4-21.
- Sandri M, Berchialla P, Baldi I, Gregori D, De Blasi RA. Dynamic Bayesian networks to predict sequences of organ failures in patients admitted to ICU. J Biomed Inform 2014;48:106-13.
- Horn SD. Measuring severity: How sick is sick? How well is well? J Healthcare Financ Manag Assoc 1986;40:21, 24-32.
- Jaffe R. Domino effect principle. NEJM 1996;335:340.
- El-Kareh R, Hasan O, Schiff GD. Use of health information technology to reduce diagnostic errors. BMJ Qual Saf 2013;22:ii40-51.
- Reyna VF. A theory of medical decision making and health: Fuzzy trace theory. Med Decis Making 2008;28:850-65.
| Red Blood Cell Artifact Identification in Multiplexed Immunofluorescence Images using Deep Learning|| |
Xingwei Wang1, Lei Tang2, Smadar Shiffman1, Margaret Zhao1, Auranuch Lorsakul1, Yao Nie1
1Imaging and Algorithms, Digital Pathology, R&D, RTD, Santa Clara, California, USA, 2Companion Diagnostics, RTD, Tucson, Arizona, USA.
E-mail: [email protected]
Background: Multiplexed immunofluorescence staining of tissue sections allows simultaneous detection of multiple biomarkers and their co-expression at the individual cell level. However, red blood cell (RBC) artifacts generated by autofluorescence interfere with image analysis in multiple ways, which can cause: (1) false detection of nuclei in the DAPI channel; (2) misclassification of individual biomarkers; and (3) incorrect segmentation of tumor areas. The objective of this study was to identify RBC artifacts to improve the classification and segmentation accuracy of the biomarkers. Methods: As purely intensity-based methods for RBC identification have limited accuracy, a deep convolutional neural network-based method for image segmentation (U-Net) was explored because it incorporates intensity as well as texture features through learning from image data. Field-of-view images were obtained from the whole slides, on which ground-truth region masks were marked for RBC artifacts and non-RBC objects. Image patches of size 256 × 256 were extracted from the field-of-view color images, which were generated from the original multiplexed immunofluorescence image. The training configuration was as follows: the loss function was binary cross-entropy, and the optimizer was Adam. Three experiments were performed: (1) using a single tissue indication (bladder, 152 patches) to train and test the network; (2) using a combination of six tissue indications (colon, lung, breast, pancreas, gastric, and bladder, 275 patches); and (3) combining the six indications with negative patches in which no RBC signal was present (344 patches). In all experiments, 80% of the data were used for training and 20% for testing. Results: Compared to the annotated ground truth, the training accuracy in the three experiments was 93.8%, 93.9%, and 95.2%, respectively. The training losses were 0.142, 0.142, and 0.113, respectively.
The accuracy for the test data in the three experiments was 85.6%, 85.5%, and 88.3%, respectively. The losses for the test data were 0.149, 0.148, and 0.119, respectively. Four field-of-view images were randomly selected from the test dataset; the pixel-to-pixel agreement between the identified RBC mask and the ground-truth mask was 84%–100%. Conclusions: The proposed method has been shown to perform well in identifying RBC artifacts in multiplexed immunofluorescence images. Meanwhile, more diversified ground-truth data, including negative samples, improved the overall performance.
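The pixel-to-pixel agreement reported above is the fraction of pixels where the predicted mask matches the ground-truth mask. A small sketch, with toy 2 × 4 masks standing in for real field-of-view masks:

```python
import numpy as np

# Sketch of the pixel-to-pixel agreement metric reported above: the fraction
# of pixels where the predicted RBC mask equals the ground-truth mask.

def pixel_agreement(pred_mask, truth_mask):
    pred = np.asarray(pred_mask, dtype=bool)
    truth = np.asarray(truth_mask, dtype=bool)
    return float(np.mean(pred == truth))

pred = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [0, 0, 0, 1]])
print(pixel_agreement(pred, truth))  # -> 0.75
```

Note that agreement counts correct background pixels too, so on sparse masks it runs higher than overlap measures such as the Dice coefficient.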
| Digital Image Analysis of Tumor-Infiltrating Lymphocytes in Prostate Cancer using Automated Whole-Slide Imaging and Tissue Microarray|| |
Yan Xiang1, Kareem Hosny1, Priti Lal1
1Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, PA, USA.
E-mail: [email protected]
Background: Among the genitourinary cancers, prostate cancer (PCa) is considered pauci-immune, and it is well documented that not only the presence or absence but also the spatial and contextual arrangement of tumor-infiltrating lymphocytes (TILs) is a determinant of outcome. The objective of this study is to illustrate detailed and contextual evaluation of TILs on whole slides and corresponding tissue microarray (TMA) sections. Methods: A total of 80 PCa cases with their corresponding TMA slides were used for this analysis. For each case, the single most representative hematoxylin and eosin slide was scanned at ×40 using an XY scanner. Tumor from the corresponding block was represented in triplicate on a TMA using a 1 mm needle. An image-based algorithm was developed using QuPath-0.2.0-m8 software to assess both intratumoral TILs (IT-TILs) and peritumoral TILs (PT-TILs). Since the TMA sections were taken from the center of the tumor nodule, only IT-TILs could be assessed. We compared the lymphocyte densities calculated from WSI and TMA using Spearman's rank correlation analysis and a paired t-test. We also performed subgroup analyses based on pretreatment PSA levels, clinical stages, pathological stages, biopsy Gleason scores, and therapy outcome. Results: We present the first spatial quantitative study of TILs in 80 primary prostate adenocarcinomas. In most patients, both PT-TILs and IT-TILs could be assessed by WSI; in the TMAs, however, PT-TILs could not be evaluated at all due to the systematic selection of the intratumoral region during construction. No meaningful correlation was observed between lymphocyte densities calculated from WSI and TMA (r = −0.014). Conclusion: TMA-based quantitative immune cell counts were compared against automated paired TIL quantification in whole-slide cohorts.
Our results suggest that whole-slide imaging by virtual microscopy is irreplaceable for quantifying TILs as a potential biomarker for predicting and monitoring PCa treatment response. TMA analysis does not provide information on the tumor microenvironment or on TILs in the peritumoral compartment of prostate cancer. Even using three cores from each sample to construct TMA blocks is not sufficient to cover the biologic heterogeneity of the infiltrating lymphocytes in the intratumoral compartment of prostate adenocarcinomas. One potential solution might be the creation of image microarrays that would allow all of a tumor's morphologic variation to be captured on a single slide.
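The WSI-versus-TMA comparison above rests on Spearman's rank correlation, which can be computed directly by ranking both density series and taking the Pearson correlation of the ranks. The sketch below is illustrative (ties are ignored for simplicity), and the density values are invented, not the study's data.

```python
import numpy as np

# Hedged sketch of the statistic used above: Spearman's rank correlation
# between lymphocyte densities measured on whole-slide images versus tissue
# microarrays, implemented as the Pearson correlation of the ranks.
# Tied values are not handled, for simplicity.

def spearman_rho(x, y):
    rx = np.argsort(np.argsort(x)).astype(float)  # 0-based ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # 0-based ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2)))

# Invented densities (cells/mm^2) with identical ordering -> rho = 1.
wsi_density = [12.0, 30.5, 7.2, 55.1, 20.0]
tma_density = [11.0, 29.0, 8.0, 60.0, 18.5]
print(spearman_rho(wsi_density, tma_density))  # -> 1.0
```

A value near zero, like the study's r = −0.014, means the TMA ranking of patients carries essentially no information about their WSI ranking.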