COMMENTARY
|
|
|
J Pathol Inform 2021, 12:12
Commentary: Leveraging edge computing technology for digital pathology
Mustafa Yousif1, Ulysses G J Balis1, Anil V Parwani2, Liron Pantanowitz1
1 Department of Pathology, University of Michigan, Ann Arbor, MI 48109-2800, USA 2 Department of Pathology, Ohio State University, Columbus, OH 43210, USA
Date of Submission: 21-Dec-2020
Date of Decision: 09-Jan-2021
Date of Acceptance: 19-Jan-2021
Date of Web Publication: 22-Mar-2021
Correspondence Address: Dr. Mustafa Yousif, University of Michigan, NCRC Bldg. 35, 2800 Plymouth Road, Ann Arbor, MI 48109, USA
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/jpi.jpi_112_20
How to cite this article: Yousif M, Balis UG, Parwani AV, Pantanowitz L. Commentary: Leveraging edge computing technology for digital pathology. J Pathol Inform 2021;12:12
A reliable telepathology system must be responsive enough to transmit and display high-resolution pathology images, which often have large file sizes. Telepathology for clinical practice therefore requires low latency, high bandwidth, and fast image processing. Sacco et al. recently published an article entitled "On Edge Computing for Remote Pathology Consultations and Computations."[1] This paper demonstrates the power of edge computing technology in the field of telepathology to augment live remote microscopy sessions. This group from Saint Louis in the USA developed a telepathology system called LiveMicro that facilitates remote collaboration and digital image sharing, in addition to remote computation on live microscopic images. In contrast to other available telepathology solutions, by employing edge computing their system integrates image processing algorithms with high-speed data transfer at a latency on the order of hundreds of milliseconds.[2] Beyond allowing remote control of the microscope, this innovative system also performs application-specific image processing and speeds up image transmission, unlike traditional cloud-based systems that are often plagued by bottlenecks on high-traffic networks.
Edge Computing and Why We Need It
Edge (or fog) computing refers to enabling technology that permits computation to be performed at the edge of a network (i.e., closer to the location where it is required). This can be applied to downstream data on behalf of cloud services and to upstream data on behalf of Internet of Things (IoT) services.[3] The rationale is for computing to occur as close to the data source as possible. A major advantage of cloud computing is its capacity for data processing that is not time sensitive, which has greatly benefited the way we work, study, and live today.[4] However, traditional cloud computing is often inefficient at handling Big Data, because files and computational resources sit far from the data source and access latency is accordingly high. This has been a particular challenge for cloud-based telepathology solutions tasked with managing large whole-slide images. It is therefore more efficient to process data generated during telepathology transactions at the edge of the network. In other words, digital files generated by whole-slide scanners would not first be transmitted to the cloud; instead, they would be consumed at the edge of the network [Figure 1]. This allows for shorter response times and faster processing. Edge computing can also perform computation offloading, data storage, caching, and processing, as well as distribute requests and deliver services between the cloud and the end user. When using artificial intelligence systems, edge computing can allow end users to access data output in real time without waiting for lengthy, data-intensive analyses to be carried out externally.[5] Edge computing also offers an additional opportunity for raw, sensitive data to be processed locally and rendered secure before being sent to the cloud. Since edge computing allows compression to occur at the edge of a network, reducing file size in this manner can also be of economic benefit. Challenges of employing edge computing include the requirement for more advanced infrastructure, greater (local) storage capacity, higher cost, increased maintenance demands, and security concerns, as edge computing devices can directly collect private data from data owners.[3]
Figure 1: Edge computing paradigm showing the integration of whole-slide imaging scanners with edge nodes and a connected cloud. Edge computing is situated between the cloud and smart end-devices, where intermediary compute elements (edge nodes) provide data management and communication services with low latency and real-time interaction to facilitate the execution of relevant applications. The end-devices have local computing capability and ubiquitous accessibility, but limited storage and processing. The cloud has effectively unlimited storage and processing with high performance and availability, but also higher latency.
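As a minimal illustration of the paradigm described above, consider an edge node that compresses captured image data locally so that only the compact result travels to the cloud. The sketch below is a generic example under stated assumptions, not part of LiveMicro; the function name and cloud endpoint are hypothetical.

```python
# Minimal sketch (NOT from LiveMicro): an edge node re-encodes a raw image
# tile locally before anything is sent upstream to the cloud.
import io
from PIL import Image  # pip install Pillow


def compress_at_edge(raw_tile: bytes, quality: int = 75) -> bytes:
    """Re-encode a raw image tile as JPEG at the edge of the network.

    Only the (much smaller) compressed payload travels to the cloud,
    saving bandwidth and reducing transmission latency.
    """
    image = Image.open(io.BytesIO(raw_tile)).convert("RGB")
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    return buffer.getvalue()

# The edge node would then forward the result to a (hypothetical) cloud
# endpoint, e.g., requests.post(CLOUD_URL, data=compress_at_edge(tile_bytes)).
```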
There are several edge computing systems in use today. Examples of open-source systems include Apache Edgent,[6] OpenStack,[7] and EdgeX Foundry.[8] Commercial edge computing systems include Azure IoT Edge[9] and Amazon AWS Greengrass.[10] Four essential technologies enable edge computing.[11],[12] The first is virtualization, in the form of virtual machines (VMs) and containers. VMs are well suited to cloud computing, whereas containers run directly on top of the physical infrastructure and offer virtualization at the operating-system level. Instead of waiting a minute or more for a VM to boot, containers can start within a few milliseconds; they also save considerable space, since their footprint can be constrained to the megabyte level. The second essential technology, which offers plug-and-play deployment, is software-defined networking, which simplifies network complexity. The third is the content delivery/distribution network, which saves both bandwidth cost and page-load time by caching data at the edge of the network. The fourth comprises cloudlets and micro data centers,[12] which serve as gateways between edge devices and the cloud.
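To make the container point concrete, the sketch below uses the Docker SDK for Python to launch a lightweight, containerized image-processing worker on an edge node. The image name and command are hypothetical placeholders; this is an illustrative sketch, not a reference deployment.

```python
# Sketch: launching a containerized worker on an edge node using the
# Docker SDK for Python (pip install docker). Image name and command
# are hypothetical.
import docker

client = docker.from_env()  # connect to the local Docker daemon on the edge node

# Containers share the host kernel, so this worker starts in milliseconds
# rather than the minute or so a full virtual machine can take to boot.
worker = client.containers.run(
    image="example/edge-image-worker:latest",  # hypothetical container image
    command="python process_tiles.py",         # hypothetical entry point
    detach=True,        # return immediately; the worker runs in the background
    mem_limit="512m",   # containers can be constrained to a small footprint
)
print(worker.status)
```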
Novelty of the LiveMicro Telepathology System
In their paper, Sacco et al. developed a new edge computing-based telepathology system called LiveMicro.[1] Their application was made accessible via a web browser. Pathologists remotely accessed a microscope in a user-friendly manner through this browser, which served as the entry point for the entire system and acted as a portal through which users connected to the ecosystem and joined, started, or terminated one or multiple telepathology sessions. At the other end of the telepathology session, a computer ran a modified version of Micro-Manager, an open-source package for configuring and controlling a large number of commonly used microscopes. The modified Micro-Manager system plugged into a physical machine attached to a microscope to handle data marshaling between the network and the microscope firmware. The OpenSeadragon JavaScript library was used to manage slide visualization, and a PyramidIO-based tool was used to generate image tiles on demand. The authors used ffmpeg[13] to encode and transmit video from the Micro-Manager plug-in to the LiveMicro server, while on the web page a WebRTC[14] interface was responsible for receiving and playing the video. The LiveMicro server (core) is where most of their telepathology application logic resides. The authors deployed their own edge cloud infrastructure between the web server and the plug-in by modifying the open-source cloud computing platform OpenStack.[7] Each end user of a telepathology session was associated with a VM that provided network and node functionality. Image and video processing for each telepathology session occurred across multiple VMs, which allowed each client to simultaneously perform different processing on the same digital image. Their architecture required two node types: a controller node that managed infrastructure resources, and compute nodes located wherever VMs were installed. The controller node was constrained to choose a compute node close to the requested microscope in order to guarantee low delay. Compression was not performed in the plug-in but in a second phase, allowing videos to be stored and retrieved at a later stage. Application of their lossy compression algorithm was customizable, depending on the compression quality setting (between 0/highest and 1/lowest compression) desired by the end user.
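The on-demand tiling step can be sketched as follows using the openslide-python library, standing in for the PyramidIO-based tool the authors actually used; a web framework on the edge node would expose get_tile behind a URL that an OpenSeadragon viewer requests. The slide filename is hypothetical.

```python
# Sketch of on-demand tile generation for a deep-zoom viewer such as
# OpenSeadragon, using openslide-python (pip install openslide-python).
# This stands in for the PyramidIO-based tool used by the authors.
import io

from openslide import OpenSlide
from openslide.deepzoom import DeepZoomGenerator

slide = OpenSlide("example_slide.svs")  # hypothetical whole-slide image
tiler = DeepZoomGenerator(slide, tile_size=254, overlap=1, limit_bounds=True)


def get_tile(level: int, col: int, row: int) -> bytes:
    """Render a single JPEG tile only when the viewer asks for it."""
    tile = tiler.get_tile(level, (col, row))  # returns a PIL image
    buffer = io.BytesIO()
    tile.save(buffer, format="JPEG", quality=80)
    return buffer.getvalue()

# A web server on the edge node would map URLs of the form
# /slide_files/<level>/<col>_<row>.jpeg to get_tile(level, col, row).
```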
Due to the benefits of edge computing, LiveMicro was able to successfully couple real-time image sharing (telepathology) with live image processing (computation). The edge computing system described in their paper allowed most image processing to occur at the edge of their network, instead of at the core of the network as is typical with cloud computing. The edge machines were much more powerful than conventional desktop personal computers. This circumvented the need to transmit large amounts of telepathology-generated data to the cloud for processing, querying, and analysis. Apart from enabling typical remote-control functions of their microscope via firmware (e.g., panning, zooming, and focusing), these investigators simultaneously validated the use of remote computation algorithms during urgent telepathology sessions. Pathologists were hence able to analyze images captured in real time. In their study, the authors tested a variety of image algorithms, such as stain normalization, automated tumor-to-margin measurement, and quantification of nuclei in whole-slide images, on 100 samples. Every time the image was moved or the user zoomed in, the image analysis was recomputed. The tumor-to-margin measurements proved accurate when compared against a tumor detection algorithm.
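A simple stand-in for the kind of per-viewport computation described here (counting nuclei in the currently displayed region) could look like the following. The thresholding pipeline is a generic illustration using scikit-image, not the authors' algorithm; every pan or zoom would trigger a re-run on the newly visible pixels.

```python
# Generic sketch of per-viewport nuclei quantification (NOT the authors'
# algorithm): each time the user pans or zooms, the edge node re-runs this
# on the visible region. Requires numpy and scikit-image.
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import remove_small_objects


def count_nuclei(viewport_rgb: np.ndarray, min_size: int = 30) -> int:
    """Count dark, nucleus-like blobs in the visible RGB region."""
    gray = rgb2gray(viewport_rgb)
    mask = gray < threshold_otsu(gray)  # nuclei stain darker than background
    mask = remove_small_objects(mask, min_size=min_size)  # drop speckle noise
    labeled = label(mask)               # assign an integer id per blob
    return int(labeled.max())           # number of connected components
```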
In conclusion, the edge computing paradigm allowed Sacco et al. to speed up image transmission as well as perform fast, intensive, application-specific image processing and analysis.[1] Their innovative edge cloud infrastructure (or "cyber-human system," as they refer to it) not only improved the performance of their telepathology system but also allowed them to leverage computational capacity while performing telepathology. LiveMicro has the potential to be a suitable solution for all telepathology scenarios, but this would need to be validated in practice for all modes of telepathology (i.e., static, live streaming, and whole-slide imaging). While edge computing worked well in the Sacco study for a few applications, it would be interesting to see whether this type of setup would be feasible for a laboratory that has gone fully digital and transmits, processes, and analyzes every scanned slide at the edge of its network; it may not be feasible or cost-effective in that setting, and this remains to be tested. Given these promising results, we anticipate that many more digital pathology solutions will invest in this distributed computing model, whereby computing takes place in proximity to edge computing-empowered microscopes and scanners where the data are collected, rather than on a centralized server in the cloud.
References
1. Sacco A, Esposito F, Marchetto G, Kolar G, Schwetye K. On edge computing for remote pathology consultations and computations. IEEE J Biomed Health Inform 2020;24:2523-34.
2. Chen J, Ran X. Deep learning with edge computing: A review. Proc IEEE 2019;107:1655-74.
3. Shi W, Cao J, Zhang Q, Li Y, Xu L. Edge computing: Vision and challenges. IEEE Internet Things J 2016;3:637-46.
4.
5. Zerbe N, Hufnagl P, Schlüns K. Distributed computing in image analysis using open source frameworks and application to image sharpness assessment of histological whole slide images. Diagn Pathol 2011;6 Suppl 1:S16.
6. Apache Edgent. Available from: https://edgent.apache.org.
7. OpenStack. Available from: https://www.openstack.org.
8. EdgeX Foundry. Available from: https://www.edgexfoundry.org.
9. Azure IoT Edge. Available from: https://azure.microsoft.com/en-us/services/iot-edge/.
10. Amazon AWS Greengrass. Available from: https://aws.amazon.com/greengrass/.
11. Cao J, Zhang Q, Shi W. Edge Computing: A Primer. Cham, Switzerland: Springer International Publishing; 2018. p. 4-5.
12. Ai Y, Peng M, Zhang K. Edge computing technologies for Internet of Things: A primer. Digit Commun Netw 2018;4:77-86.
13. FFmpeg. Available from: https://ffmpeg.org.
14. WebRTC. Available from: https://webrtc.org.
|