Artificial intelligence and digital pathology: clinical promise and deployment considerations
Mark D. Zarella, David S. McClintock, Harsh Batra, Rama R. Gullapalli, Michael Valante, Vivian O. Tan, Shubham Dayal, Kei Shing Oh, Haydee Lara, Chris A. Garcia, Esther Abels
Abstract

Artificial intelligence (AI) presents an opportunity in anatomic pathology to provide quantitative objective support to a traditionally subjective discipline, thereby enhancing clinical workflows and enriching diagnostic capabilities. AI requires access to digitized pathology materials, which, at present, are most commonly generated from the glass slide using whole-slide imaging. Models are developed collaboratively or sourced externally, and best practices suggest validation with internal datasets that most closely resemble the data expected in practice. Although an array of AI models that provide operational support for pathology practices or improve diagnostic quality and capabilities has been described, most of them can be categorized into one or more discrete types. However, their function in the pathology workflow can vary, as a single algorithm may be appropriate for screening and triage, diagnostic assistance, virtual second opinion, or other uses depending on how it is implemented and validated. Despite the clinical promise of AI, the barriers to adoption have been numerous; among the most impactful solutions may be the inclusion of new stakeholders and the expansion of reimbursement opportunities.

1. Introduction

Digital pathology (DP) is a blanket term encompassing the tools, systems, enabling infrastructure, and associated metadata used when digitizing pathology slides into whole-slide images (WSIs).1,2 Over the last 20 years, technology has advanced to the point at which WSIs (often gigabytes in size) can be captured in minutes and presented to a pathologist for multiple purposes, such as telepathology, research, education, and primary diagnosis (i.e., digital signout). Digitization has many potential benefits, both for pathology laboratories and patients, by introducing efficiencies in distribution/sharing, recall, and reuse of pathology assets. It also provides the required substrate for digital image analysis of histopathology. Although image analysis of WSIs is not new and has been in use for well over ten years,3 recent advances in machine learning and computing have greatly expanded image analysis performance and capabilities. Linking image analysis with other clinical data and concepts is termed “computational pathology” and has become heavily dependent on the use of artificial intelligence (AI), expanding the potential for dramatic improvements in pathology automation, diagnostic accuracy, and treatment guidance.1,4,5

Despite a rapid increase in published AI studies and commercial AI development,6 there has been relatively little movement on adopting AI in pathology practices. This has been attributed to general uncertainty about how to develop, clinically validate, and deploy algorithms; a relative lack of regulatory principles and guidance; a paucity of standardization leading to AI algorithm and system interoperability shortfalls; and trust issues arising from challenges with achieving algorithm generalizability, including reproducibility failures when applied to new datasets.7 Furthermore, AI deployment necessarily requires digitization, which itself can be expensive to implement and typically requires modification to existing anatomic pathology (AP) workflows.8 There has also been very little reimbursement opportunity to offset the costs associated with DP implementation, although new Current Procedural Terminology (CPT) codes (effective January 2023) for digitizing slides for primary diagnosis have recently been introduced.9 Nevertheless, the potential for AI to improve patient care and to streamline workflows may be too compelling to ignore.

Although AI serves a number of potential areas in pathology, we present here an overview of AI in AP with a focus on its application to tissue-based image analysis in WSIs. We describe the unique challenges of creating and using AI within AP and delve into the clinical utility that the combination of digital pathology, computational pathology, and AI have to offer. We also discuss the impact that DP and AI will have on the “omics,” with a final discussion on the future of AP and the value that these technologies may have on the practice of pathology.

2. DP Carries Unique Challenges for AI

When pathologists view slides through a microscope, they usually alternate between low- and high-magnification objectives to examine tissue at sufficient detail while balancing the amount of tissue viewable through the eyepieces. Viewing WSIs occurs in a similar manner, in which pathologists navigate by panning and zooming across a display to manage the tradeoff between detail and field of view (FOV). The interactive multi-resolution nature of WSI viewing makes histopathology quite different from how one views digital images in other medical disciplines and reflects the intrinsic requirements of viewing large expanses of tissue. AI faces a similar challenge in that resource limitations usually prevent gigapixel WSIs from being processed in their entirety without first separating them into much smaller tiles. Analogous to pathologist viewing, the FOV captured by a single tile depends on the resolution analyzed; models designed to analyze architectural features that span large tissue areas often benefit from lower resolution tiles covering more territory, whereas those needing access to fine-scale detail are usually restricted to smaller, higher resolution FOVs. Capturing both at the same time remains a challenge for AI in digital pathology. Occasionally, multi-resolution approaches are adopted to capture both the global overview and fine detail.10–12 In situations in which a slide-level diagnosis is desired, AI results must be collected from potentially thousands of individual tiles in a slide and then combined to produce a single result.
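To make the tiling-and-aggregation pattern concrete, the sketch below walks a non-overlapping tile grid over one pyramid level of a WSI and pools per-tile scores into a single slide-level result. It is a minimal illustration rather than a production pipeline: the open-source OpenSlide library is assumed for reading, and the tile size, background threshold, `model.predict_proba` interface, and top-k pooling rule are all illustrative choices.

```python
import numpy as np
import openslide  # open-source WSI reader (assumed available)

TILE = 512   # tile edge in pixels; illustrative choice
LEVEL = 1    # pyramid level: 0 = full resolution, larger = coarser/wider FOV

def tile_scores(wsi_path, model):
    """Walk a non-overlapping tile grid over one pyramid level and collect
    per-tile scores from a hypothetical classifier."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.level_dimensions[LEVEL]
    scale = slide.level_downsamples[LEVEL]
    scores = []
    for y in range(0, height - TILE + 1, TILE):
        for x in range(0, width - TILE + 1, TILE):
            # read_region expects coordinates in the level-0 reference frame
            tile = slide.read_region((int(x * scale), int(y * scale)),
                                     LEVEL, (TILE, TILE)).convert("RGB")
            rgb = np.asarray(tile)
            if rgb.mean() > 240:   # crude background test: skip blank glass
                continue
            scores.append(model.predict_proba(rgb))  # assumed model interface
    return scores

def slide_level_score(scores, top_k=10):
    """Pool thousands of tile scores into one slide-level result; averaging
    the top-k most suspicious tiles is one simple, common heuristic."""
    top = sorted(scores, reverse=True)[:top_k]
    return float(np.mean(top)) if top else 0.0
```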

The sizes of cells and structures can also be diagnostically informative, unlike image classification tasks in other domains that may be size-invariant. As a result, when training AI algorithms in pathology, developers often take care to preserve size information, for example by utilizing scale information from the image metadata, which can vary across whole-slide scanners, or by avoiding data augmentation techniques based on scaling. Likewise, color is usually an important feature in histopathology images as it may reflect biologically meaningful staining patterns and can even provide subtle cues about subcellular features. However, color can also be influenced by different AP laboratory preanalytical processes, such as tissue processing, staining technique, or the image acquisition process, making it challenging to distinguish between diagnostically relevant signals and acquisition artifacts. There have been numerous attempts to mitigate these effects by applying methods such as color normalization,13,14 incorporating color augmentation into training,15 and emphasizing the value of training with diverse and representative datasets.16 The ability of a trained AI model to generalize to future datasets vitally relies on accounting for the sources of variability that may be introduced in the preanalytical phase.17 Notably, many of the issues that often plague AI performance do not affect pathologists, who are far more robust to variations in color, pixel size, and other diagnostically insignificant attributes. Although AI can perform functions that are difficult or time-consuming for pathologists, it remains a challenge to develop AI models that are as robust as pathologists to variations that carry no diagnostic importance. Thus, much of the interest in AI in pathology is centered on leveraging the benefits of AI while maintaining safeguards to ensure that performance is not degraded.
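As a minimal illustration of the two mitigations just described, the sketch below perturbs stain color in the H&E deconvolution (HED) space, in the spirit of the color augmentation literature,15 and reads the scanner-reported pixel size so tiles can be resampled to a common physical scale. The perturbation strength, scikit-image routines, and OpenSlide property name are assumptions made for the example.

```python
import numpy as np
from skimage.color import rgb2hed, hed2rgb  # H&E stain deconvolution

def hed_color_augment(rgb, sigma=0.03, rng=None):
    """Randomly scale and shift each stain channel in HED space so a model
    learns invariance to staining variability; sigma is an illustrative
    perturbation strength, not a recommended setting."""
    rng = rng or np.random.default_rng()
    hed = rgb2hed(rgb)  # RGB -> hematoxylin/eosin/DAB concentrations
    for c in range(3):
        hed[..., c] = (hed[..., c] * rng.uniform(1 - sigma, 1 + sigma)
                       + rng.uniform(-sigma, sigma))
    return np.clip(hed2rgb(hed), 0.0, 1.0)

def microns_per_pixel(slide):
    """Read the physical pixel size recorded by the scanner (OpenSlide
    property names); resampling tiles to a fixed microns-per-pixel preserves
    the diagnostic meaning of size across scanners."""
    return float(slide.properties["openslide.mpp-x"])
```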

Although the glass slide is typically thought to represent a 2D section, 3D imaging has emerged as an area of interest in DP that has begun to influence the direction of AI. For example, cytopathology and hematopathology are two subspecialties that often rely on z-stacked WSIs to resolve depth.18,19 Likewise, work has been done with digitized serial sections to capture 3D tissue characteristics that, when paired with AI, have driven discovery.20 Additional digital microscopy modalities that capture tissue images in depth, such as light-sheet microscopy21 and reflectance confocal microscopy,22 have demonstrated clinical utility in pathology. Extending AI to accommodate a third dimension has been a subject of much recent interest.

3. Clinical Utility of Digital Pathology, Computational Pathology, and Artificial Intelligence

When new technology is introduced into healthcare, assessing its clinical utility is paramount prior to its adoption within clinical workflows. The term “clinical utility” is multifaceted and has many clinical, academic, ethical, and economic implications within healthcare; here, however, it is best described as how one justifies the relevance and usefulness of a variety of novel technologies, testing methodologies, and treatments for patient care.23,24 In this sense, the factors driving the clinical utility of DP must be measured not only by how they impact the patient but also by how they affect the operations of the AP laboratory, differing patient populations, and the diagnostic ability of pathologists. Finally, the clinical utility of DP should not be assessed in a vacuum—although there are DP use cases that stand on their own, its true value becomes much more apparent when combined with computational pathology and AI tools.

3.1. Assessing the Clinical Utility of AI in Pathology

As stated before, digitization serves as a necessary first step toward enabling AI within AP. Within an appropriate information technology (IT) infrastructure, whole-slide imaging, paired with computational pathology and AI tools, has the potential to greatly automate diagnostic pathology workflows and create enhanced pathology reports (Fig. 1). Marked differences between traditional manual AP lab processes and AI-enabled workflows include the following:

  • 1. Automatic pathology case assembly and slide analysis: with digital slides, cases can be automatically assembled and AI used to analyze their constituent slides for scan quality, staining/cutting issues, discrepant tissue (floaters), and other histology artifacts; slides that fail these quality control (QC) steps can then be flagged for rescanning or other histology lab interventions.

  • 2. Intelligent case distribution: digital cases can be distributed based on customized rules created for each laboratory, such as for load-balancing case workloads across practices/practice sites, sending complex cases to different subspecialty pathologists, and delivering to specific AI models for automatic processing prior to pathologist review (a toy routing sketch follows the Fig. 1 discussion below).

  • 3. Virtual assays using DP data: with multimodal AI analysis of patient data (e.g., radiology and pathology images, laboratory medicine results, patient demographics, etc.), novel computational assays can be created and presented to the pathologist at the time of case review/signout to improve diagnostic accuracy, predict disease recurrence/outcomes, and predict treatment options.

  • 4. Improved digital case review: with WSIs, pathologists can now use AI-augmented review of slide data, sometimes in ways not possible with traditional microscopy, such as reviewing multiple serial tissue sections at once, overlaying differently stained slides upon each other, creating virtual immunohistochemistry stains, and predicting which special/immunostains to order.

Fig. 1

Clinical impact of digital pathology, computational pathology, and AI on the traditional AP workflow: (a) traditional AP workflow and (b) AI-enhanced AP workflow. Dark blue denotes manual tasks, and light blue denotes AI-enhanced/automated tasks. Note that the AI-enhanced workflow assumes the application of many different AI tools used in tandem, with both the individual tools and the multi-staged clinical workflows all validated for clinical use by the pathology practice.


Of note, the clinical utility of each of the above examples may not be applicable to all pathology practices. Instead, each specific intended use of an AI tool should be assessed to determine not only its feasibility within a specific laboratory practice but also how one would best validate the tool for clinical use for specific patient populations and pathology workflow methods.
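To make the intelligent case distribution example (item 2 above) concrete, here is a toy sketch of rule-based routing. The rule set, queue names, and case fields are entirely hypothetical; a real deployment would load per-laboratory rules from configuration rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Case:
    specimen: str     # e.g., "prostate biopsy"
    priority: str     # e.g., "routine" or "stat"
    slide_count: int

# Hypothetical first-match-wins routing rules: each rule pairs a predicate
# with a destination queue.
RULES = [
    (lambda c: c.priority == "stat",            "on-call pathologist"),
    (lambda c: c.specimen == "prostate biopsy", "GU subspecialty queue"),
    (lambda c: c.slide_count > 40,              "load-balanced pool"),
]

def route(case, default="general signout queue"):
    """Return the destination queue for a digital case."""
    for predicate, queue in RULES:
        if predicate(case):
            return queue
    return default

print(route(Case("prostate biopsy", "routine", 12)))  # -> GU subspecialty queue
```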

Although the workflows presented in Fig. 1 represent an optimized, future-state view of AP, current-state AI algorithms under development or ready for deployment in AP labs are more focused in nature. In general, these algorithms can be functionally classified into four major categories:

  • 1. Detection algorithms, e.g., identification of anomalous regions, tumor/non-tumor areas, immunohistochemical (IHC) stains, biomarkers, inflammation, and other diagnostically interesting/suspicious regions of interest for pathologist review.

  • 2. Characterization algorithms, e.g., classification of histopathologic patterns, patient diagnoses, and other classification schema that can be combined with WSI data to characterize specific patient criteria that may or may not otherwise have been interpretable by the pathologist alone.

  • 3. Quantification algorithms, e.g., mitotic counts, tumor area/volume, percent of tissue/cells with specific features, inflammation scoring, IHC scoring, and other objective measurements that can lead to standardization of diagnostic criteria and improved diagnostic accuracy.

  • 4. Prediction algorithms, e.g., disease outcome, treatment response, molecular subtyping from WSIs, and other disease predictors not otherwise known from using traditional methods.

Although most individual algorithms fall primarily into one of these classifications (e.g., Fig. 2), many AI models are actually combinations of multiple algorithms that tackle different tasks. For example, an AI model that predicts the Gleason grade for prostate cancer may have algorithms that first detect the anomalous areas on the WSI as putative tumor and then characterize the glandular structures into the known patterns of prostatic adenocarcinoma. Finally, the model quantifies the patterns to estimate the Gleason score. Alternatively, a similar model can be developed, perhaps using the same building blocks, to predict patient outcome directly without a Gleason grading intermediary.25
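A rough sketch of how such a composite model might chain the algorithm classes is shown below. `detect_tumor` and `classify_pattern` are hypothetical components standing in for the detection and characterization stages, and the grading arithmetic is deliberately simplified relative to real Gleason rules.

```python
def gleason_from_tiles(tiles, detect_tumor, classify_pattern):
    """Chain detection -> characterization -> quantification, mirroring the
    composite prostate model described above. detect_tumor returns a binary
    tumor mask per tile; classify_pattern returns a Gleason pattern (3/4/5)."""
    pattern_area = {3: 0.0, 4: 0.0, 5: 0.0}
    for tile in tiles:
        mask = detect_tumor(tile)                    # 1. detection
        if mask.sum() == 0:
            continue
        pattern = classify_pattern(tile, mask)       # 2. characterization
        pattern_area[pattern] += float(mask.sum())   # 3. quantification
    ranked = sorted(pattern_area, key=pattern_area.get, reverse=True)
    # Simplified score: primary + secondary pattern. Real grading has
    # additional rules (tertiary patterns, minimum-area criteria, etc.).
    return ranked[0] + ranked[1]
```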

Fig. 2

Examples of three classes of AI employment in AP: (a) quantitative immunohistochemistry, in which cell nuclei are detected, and biomarker staining intensity is quantified; (b) heat map overlaid on a low power digital image of H&E stained tissue to direct the pathologist’s gaze to the presence of a histologic feature of interest; and (c) slide-level characterization of a prostate biopsy based on integration of regional classifications.


One should also consider that algorithms can be classified based on their clinical utility itself, i.e., the justifications given for how they will add value to patient care. Although there can be multiple reasons to justify incorporating an algorithm into a pathology practice, those most commonly used today in AP concern optimizing efficiency and improving quality. Criteria for optimizing efficiency include implementing algorithms to decrease turnaround time, triage patient cases by acuity, reduce the need for manual intervention, and eliminate tedious activities. For improving quality, the focus is typically on creating more reproducible histopathology outputs used to standardize pathology synoptic reporting or to inform predictive models. The net effect of these modifications may be a reduction in costs, which itself can be a major source of clinical utility for labs looking to make the leap to going digital.

Table 1 summarizes the advantages of AI in pathology and the limitations that should be addressed to optimize its clinical utility. Notably, strong standalone AI performance does not automatically translate into superior diagnostic quality or greater efficiency. There are use cases in which even a perfectly performing algorithm provides little to no benefit for many laboratories, carrying no cost savings, efficiency gain, or quality improvement. Ultimately, one must identify those use cases and deployments in which AI can markedly improve current-state workflows and provide optimal clinical utility.

Table 1

Potential advantages and limitations of AI use in anatomic pathology.

Advantages
1. Decreased AP reporting turnaround times (TAT).
2. Customized workflows for the pathologist/practice.
3. Reduced non-value-added work burden on the pathologist and laboratory staff.
4. Improved quantification and better standardization of histopathology criteria, potentially leading to better patient outcomes.
5. Novel insights not previously realized by manual pathologist review and interpretation.
6. Potential to better optimize pathologist interaction with the WSI.
Limitations
1. Lack of harmonized standards for AI.
2. Lack of adoption of current imaging standards for DP.
3. Laboratory workforce inexperience with DP and AI, including a lack of accepted validation criteria for clinical use.
4. Lack of explainable AI, i.e., non-transparent or “black box” algorithms.
5. Generalizability shortfalls of AI algorithms across AP laboratories, scanners, and sites, including algorithm drift.
6. Lack of robust IT infrastructures that can support comprehensive AI-driven analyses.

3.2. Employing AI in Pathology Workflows

In practice, one of the first considerations for employing AI in the laboratory is understanding how to integrate it into clinical or operational workflows. AI tools typically take the form of operational algorithms, which assist with data collection, case triage, screening, and QC, and assistive algorithms, which contribute through human-AI interaction. For example, an operational algorithm may be designed to determine whether slides produced by the laboratory or WSIs generated by a scanner are appropriate for evaluation and may even automatically trigger a recut or a rescan if they are found to be insufficient (Fig. 1).26 An operational algorithm may also be used in a screening capacity, in which cases or slides flagged by the algorithm as suspicious are triaged for pathologist review (or, in the case of a second read, presented to the pathologist for re-review after their initial diagnosis). Assistive algorithms, on the other hand, generally support a pathologist at the time of slide review, directing pathologists to potentially important regions of the slide, enhancing the slide viewing experience, or providing adjunct information not otherwise available during signout.
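As one concrete example of an operational QC algorithm, the sketch below flags out-of-focus tiles using the variance of the Laplacian, a widely used sharpness proxy, and escalates the slide for rescanning when too many tissue tiles fail. The cutoff and rescan policy are illustrative values that would need tuning per scanner, stain, and laboratory.

```python
import cv2
import numpy as np

SHARPNESS_CUTOFF = 100.0  # illustrative; must be tuned per scanner/stain

def tile_in_focus(rgb_tile):
    """Variance of the Laplacian as a sharpness proxy: low variance means
    few high-frequency edges, suggesting an out-of-focus region."""
    gray = cv2.cvtColor(rgb_tile, cv2.COLOR_RGB2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= SHARPNESS_CUTOFF

def slide_needs_rescan(tissue_tiles, max_blurry_fraction=0.05):
    """Flag a WSI for rescanning when more than a small fraction of its
    tissue tiles fail the focus check (policy threshold is an assumption)."""
    blurry = [not tile_in_focus(t) for t in tissue_tiles]
    return bool(np.mean(blurry) > max_blurry_fraction)
```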

When operationalizing clinical AI models in pathology, there are many logistical issues that must be addressed. For example, AI requires data sources to be integrated and fed to the model, with new data pipelines developed. If additional clinical data, separate from the WSI, are required, new interfaces will be needed in the electronic health record (EHR) and/or the laboratory information system (LIS). Once an AI model output is produced, a decision then needs to be made regarding whether that data should go back to a clinical health information system (HIS), be it the LIS, EHR, or pathology image management system. In some cases, the pathologist may work independently within an AI platform without HIS integration, whereas in other cases, integration will be required so the pathologist can best act on and incorporate the AI model outputs into their clinical workflow. Integration processes are still in their infancy, with standards for AI data exchange lacking overall.
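Because standards for AI result exchange are still lacking, any payload format is necessarily provisional. The sketch below shows one plausible shape, borrowing HL7 FHIR's Observation resource, for returning a model output to an HIS; all identifiers, codes, and values are placeholders, not a sanctioned profile.

```python
import json

# Placeholder payload shaped after HL7 FHIR's Observation resource; every
# identifier, code, and value below is hypothetical.
ai_result = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"text": "AI-estimated tumor burden (placeholder code)"},
    "subject": {"reference": "Patient/example-id"},
    "derivedFrom": [{"reference": "ImagingStudy/example-wsi"}],
    "valueQuantity": {"value": 23.5, "unit": "%"},
    "device": {"display": "example-ai-model v1.2"},
}
print(json.dumps(ai_result, indent=2))  # body an interface could POST to an HIS
```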

In addition, there is the question of whether AI models should be considered part of an existing lab test, a new discrete lab test, or otherwise separate from lab testing in general. Current Clinical Laboratory Improvement Amendments (CLIA) regulations stipulate that a clinical laboratory be defined as a “facility for the biological, microbiological, serological, chemical, immuno-hematological, hematological, biophysical, cytological, pathological, or other examination of materials derived from the human body for the purpose of providing information for the diagnosis, prevention, or treatment of any disease or impairment of, or the assessment of the health of, human beings.”27 From the authors’ own experiences, many debates have taken place over the past few years as to whether data derived from specimens should be considered “materials derived from the human body,” raising the question as to whether AI models used in pathology are subject to formal CLIA regulation. Until that matter is clearly settled within CLIA and other federal agencies, it is up to the performing laboratory implementing AI models to determine how best to validate any AI models used within its practice.

Finally, a single algorithm can have broad application supporting either operational or assistive uses, such as screening, QC, or interactive real-time feedback. For example, grading a cancer based on histopathologic criteria is often subjective and in many cases can lead to either overgrading or undergrading of a patient’s tumor. AI-based cancer detection and grading tools therefore have the ability to be effective screening tools, interactive guides to assist the pathologist with interpretation,28 or virtual second opinions.29 In these situations, however, even though the algorithm is the same, each separate use case (i.e., each of the algorithm’s distinct intended uses) must be validated. Importantly, the pathologist’s use of the model, rather than the model in isolation, should be evaluated, as pathologist behavior may also change in response to the introduction of these new tools.

3.3. How Do We Optimize Clinical Utility Through Interoperability with Other Omics?

Although the prior discussion on the clinical utility of AI tools has focused mostly on the field of pathology, the reality is that medicine is increasingly moving to multidisciplinary approaches that rely extensively on multiple diagnostic modalities. With the increasing complexity of clinical medicine, it is critical, now more than ever, to integrate the information content generated by the three major diagnostic modalities of clinical practice: radiology, pathology, and genomics. All of these diagnostic specialties can be considered “high-throughput” in terms of the raw patient data generated. For example, genomics produces a tremendous amount of sequencing data, such as deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and epigenetic data. Radiology generates large amounts of imaging data, whereas pathology is the key repository for various forms of lab-based patient data, including both AP and laboratory medicine. In addition, each specialty requires extensive post-processing steps to convert raw data into actionable diagnostic content that impacts patient care and populates each of their respective omics.

Radiomics deals with the high-throughput, fully automated extraction of quantitative image features, such as shape, texture and pattern, and intensity measurements, from radiological images and data. These are details that, in the past, were part of the “experience” of a radiologist. However, by explicitly quantifying these features upfront in an automated manner, the hope is to reduce inter-observer variation for more uniform outcomes. In addition, the availability of this data lends itself to mining these quantitative features to aid in repeatable diagnosis and prognostic assessments.30–32 Similarly, “pathomics” aims to extract, mine, and analyze subvisual features from a histology slide. Automated extraction and analysis of histopathological features using deep learning-based measurements can provide novel insights into the diagnosis and prognosis of patients that are independent of pathologist experience. The goal is to use annotated WSIs to generate quantitative data, correlated with the histomorphological analysis in conjunction with the macroscopic radiological findings. Radiomics and pathomics deal with interpretable quantitative patient data generated by morphological assessment algorithms, whereas genomics curates patient sample data at a molecular level. High-throughput sequencing data from a patient can provide information about the mutations, copy number variations, and other complex genetic alterations driving the disease at the molecular level. Molecular pathway-level information enables a finer spatial and temporal resolution of the progression of a patient’s disease pattern, providing novel means to diagnose as well as monitor the disease.
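As a toy illustration of the pathomics idea, the sketch below computes a few simple morphometric features from a binary nuclear segmentation mask, which is assumed to come from an upstream detection model; real pathomics pipelines extract hundreds of such sub-visual features.

```python
import numpy as np
from skimage.measure import label, regionprops

def nuclear_morphometrics(nuclei_mask):
    """Compute a few sub-visual features from a binary nuclear segmentation
    mask; nuclear size variability and shape are classic pathomics features."""
    regions = regionprops(label(nuclei_mask))
    if not regions:
        return {"nucleus_count": 0}
    areas = np.array([r.area for r in regions], dtype=float)
    ecc = np.array([r.eccentricity for r in regions])
    return {
        "nucleus_count": len(regions),
        "mean_area": float(areas.mean()),
        "area_cv": float(areas.std() / areas.mean()),  # size pleomorphism proxy
        "mean_eccentricity": float(ecc.mean()),        # elongation / shape
    }
```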

A current challenge in the use of diagnostic data is the siloed nature of patient data, i.e., the difficulty of using clinical data outside of the primary discipline that generates it. In most healthcare centers, the majority of patient data is essentially trapped in separate health information systems with minimal cross-talk of content and/or expertise. In the case of AI, this is particularly important: the more complex the algorithms become, such as with multimodal AI, the more we need to enable a clinically actionable omics mindset. Integration of patient metadata, clinical imaging (including pathology images), treatment plans, proposed interventions, and clinical outcomes is crucial to creating an enterprise data solution within a healthcare system. The foundation of such an end-to-end solution lies in the architecture of data, databases, information systems, and platforms that underpin the various medical specialties.

However, technology is not the only factor at play here—clinicians have a critical role in driving the interoperability required for such a solution. Interoperability implies the establishment and utilization of various standards of data in any given specialty. To start, clinicians can demand the use of standards in their everyday work and support efforts to integrate patient diagnostic data within the various clinical systems in use. They can further provide the expertise required to understand the various elements of patient data generated routinely in their discipline while concurrently working with vendors who can integrate patient data within the hardware, database, and application layers. Interoperability between major clinical systems, including the EHR, LIS, picture archiving and communication systems, data warehouses, and AI platforms, must be seamless with interfaces well established to ensure precise, secure, accurate, reliable, and uninterrupted exchange of relevant patient data points. A robustly designed layered information system architecture will thus allow an omics approach to AI, ideally with the different players (industry, hospitals/healthcare organizations, academic leaders, and providers/patients) driving the process forward. Finally, multiple AI approaches have been applied for multimodal analysis of WSIs in combination with genomics, transcriptomics, radiomics, and other large databases.33 Thus, DP, in conjunction with an omics approach, can act as a key cornerstone to supplement the clinical utility of computational pathology and AI models within AP.

4. Call to Action

4.1. Who Is Needed to Enable AI in Anatomical Pathology?

Similar to work in the omics, much of the current effort in pathology AI is inherently multidisciplinary, requiring contributions from data scientists, pathology informaticists, administrative and operational staff, regulatory and clinical affairs experts, laboratory representatives, and pathologists. Each of these stakeholders brings a unique perspective to help shape the design of AI use cases and modifications to pathology workflows, as well as to provide familiarity with the costs, benefits, and potential pitfalls of implementation. The ability to understand the limitations of a model, its susceptibility to preanalytical variability, or the limitations of the dataset on which it was trained or tested helps with predicting failures of a model in practice, which may not be immediately evident to each of the other stakeholders. It is important to solicit input from all pertinent stakeholders at the earliest stages of model assessment to avoid unforeseen challenges down the line, with particular engagement from the users who will likely be interacting with the model the most; this may require including representation from specific workgroups (such as practices with subspecialty signout) and early engagement of laboratory technicians who may be directly interacting with the model. Data scientists and engineers should be heavily involved in the model evaluation process to help practitioners interpret the outputs and settle on solutions. Informaticists will usually work closely with stakeholders when developing workflow modifications, especially when there is an expectation that the model will interact with, or reside within, existing IT infrastructure (e.g., interfacing with the EHR, LIS, or other systems). Cybersecurity, compliance, data management, computing, and ongoing algorithm monitoring and maintenance are also important considerations; informaticists are well poised to contribute to these discussions and engage IT where appropriate.

Another important set of relationships to manage comprises those that exist outside the institution. As commercial offerings continue to emerge, laboratories have an opportunity to implement solutions that have often already been deployed at multiple institutions, eliminating the need to develop algorithms from scratch (although not obviating the responsibility of validation and assessment). Of note, many institutions have a strict set of rules governing vendor relationships, starting with the procurement and risk management processes and extending through IT governance, infrastructure review, and data management. Even absent strict rules, it is essential to manage vendor engagement in a way that is responsible, ethical, and patient-centered. This means due diligence on the part of the practice is required at every step of deployment, including the following:

  • 1. Early assessment of an AI model to support a specific use case, with assessment of the value (or utility) of that enhancement to the practice;

  • 2. Evaluation of model performance, typically using representative datasets from one’s own institution, and attempting to understand the factors that may result in model failure;

  • 3. Stewardship of patient data, especially regarding data sharing and access requirements by the vendor—if data sharing is a requirement, assessments should be performed to determine whether institutional clinical data will be used for additional vendor development efforts, including potential future commercialization plans.

Finally, AI is very much at its earliest stages in pathology, and it is burdensome for any single institution to internally develop every AI capability it may need. Therefore, vendor relationships are proving to be an important piece of the AI puzzle going forward, but they should be carefully managed, especially if co-development relationships are formed.34

4.2. DP Adoption Must Increase to Provide the Substrate for AI

It is estimated that there are 102,000 pathologists in practice today across 130 countries. The bigger challenge is that these resources are not evenly distributed as a function of the population served: a recent analysis showed that two-thirds of the pathologist workforce is located in just 10 countries.35 In regions where resource shortages are severe, such as parts of Latin America and Africa, AI tools have the potential to ease the pressures of these shortages by increasing diagnostic productivity and providing elevated clinical decision support, thus increasing the quality of care. Importantly, the AI tools introduced must be cost-effective, meet specific needs, and avoid burdensome technical infrastructure requirements that cannot be met in low-resource settings. It is essential that the introduction of AI does not create greater health disparities; improved approaches to digitization, sharing, and computation, as well as access to the relevant expertise, will therefore represent extremely important innovations.

There are many steps to take to encourage more widespread adoption, but some of them, including demonstration of clinical safety and acceptability for use, have already begun. For example, several peer-reviewed publications have demonstrated clinical safety by showing concordance of primary digital slide diagnoses with traditional glass microscopy,36–38 and recent regulatory clearances and governing body guidelines show increasing support for DP in the clinical setting. For practices that wish to deploy AI, these have been important hurdles to clear to eliminate the need for parallel work streams that rely on the glass slide for diagnostics and the WSI for AI assessment. Ongoing work is needed to understand how DP and AI together can provide benefits not achieved by traditional best practices.

5. Conclusion

Pathology relies on accurate and timely diagnoses, both of which have the potential to be aided by AI. Through use cases that span quantification, grading, quality control, prediction, and prognosis, pathologists can leverage objective and reproducible evidence to support patient care much more readily than consensus opinion by fellow pathologists. AI can also be used in an interactive way, guiding pathologists to suspicious regions and helping them interpret images that may otherwise be difficult to read. Consequently, the role of AI in pathology is multifaceted, but it also presents deployment challenges that span multiple disciplines, stakeholders, vendors, government bodies, and payors and that require considerations well beyond the pathology workflow.

As AI research in pathology continues to grow, with concomitant improvements in performance and expansion of use cases, pathologists are faced with a decision about an AI strategy, which centers on prioritization and risk. Cost considerations are also at the forefront: digitization and deployment remain expensive, and a clear avenue for reimbursement has not yet been established.

Disclosures

The authors declare the following competing interests: M.V. is employed by Dell Technologies, V.O.T. and S.D. are employed by Leica Biosystems, and H.L. is employed by Alexion-AstraZeneca.

Acknowledgments

The authors would like to acknowledge support from the Digital Pathology Association for assembling the contributors to this paper.

References

1. E. Abels et al., “Computational pathology definitions, best practices, and recommendations for regulatory guidance: a white paper from the Digital Pathology Association,” J. Pathol. 249(3), 286–294 (2019). https://doi.org/10.1002/path.5331

2. M. D. Zarella et al., “A practical guide to whole slide imaging: a white paper from the Digital Pathology Association,” Arch. Pathol. Lab. Med. 143(2), 222–234 (2019). https://doi.org/10.5858/arpa.2018-0343-RA

3. F. Aeffner et al., “Introduction to digital image analysis in whole-slide imaging: a white paper from the Digital Pathology Association,” J. Pathol. Inf. 10(1), 9 (2019). https://doi.org/10.4103/jpi.jpi_82_18

4. D. N. Louis et al., “Computational pathology: a path ahead,” Arch. Pathol. Lab. Med. 140(1), 41–50 (2016). https://doi.org/10.5858/arpa.2015-0093-SA

5. M. K. K. Niazi, A. V. Parwani, and M. N. Gurcan, “Digital pathology and artificial intelligence,” Lancet Oncol. 20(5), e253–e261 (2019). https://doi.org/10.1016/S1470-2045(19)30154-8

6. J. Y. Cheng et al., “Artificial intelligence development, deployment and regulation challenges in anatomic pathology,” in Artificial Intelligence in Pathology: Principles and Applications, Elsevier, New York (2023).

7. J. Y. Cheng et al., “Challenges in the development, deployment, and regulation of artificial intelligence in anatomic pathology,” Am. J. Pathol. 191(10), 1684–1692 (2021). https://doi.org/10.1016/j.ajpath.2020.10.018

8. M. G. Hanna et al., “Integrating digital pathology into clinical practice,” Mod. Pathol. 35(2), 152–164 (2022). https://doi.org/10.1038/s41379-021-00929-0

9. “CPT® editorial summary of panel actions May 2022,” https://www.ama-assn.org/system/files/may-2022-cpt-summary-panel-actions.pdf (2022).

10. M. van Rijthoven et al., “HookNet: multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images,” Med. Image Anal. 68, 101890 (2021). https://doi.org/10.1016/j.media.2020.101890

11. C. Zhou et al., “Histopathology classification and localization of colorectal cancer using global labels by weakly supervised deep learning,” Comput. Med. Imaging Graph. 88, 101861 (2021). https://doi.org/10.1016/j.compmedimag.2021.101861

12. D. J. Ho et al., “Deep multi-magnification networks for multi-class breast cancer image segmentation,” Comput. Med. Imaging Graph. 88, 101866 (2021). https://doi.org/10.1016/j.compmedimag.2021.101866

13. S. Roy et al., “A study about color normalization methods for histopathology images,” Micron 114, 42–61 (2018). https://doi.org/10.1016/j.micron.2018.07.005

14. A. Sethi et al., “Empirical comparison of color normalization methods for epithelial-stromal classification in H and E images,” J. Pathol. Inf. 7, 17 (2016). https://doi.org/10.4103/2153-3539.179984

15. D. Tellez et al., “Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology,” Med. Image Anal. 58, 101544 (2019). https://doi.org/10.1016/j.media.2019.101544

16. A. Janowczyk and A. Madabhushi, “Deep learning for digital pathology image analysis: a comprehensive tutorial with selected use cases,” J. Pathol. Inf. 7, 29 (2016). https://doi.org/10.4103/2153-3539.186902

17. R. Therrien and S. Doyle, “Role of training data variability on classifier performance and generalizability,” Proc. SPIE 10581, 1058109 (2018). https://doi.org/10.1117/12.2293919

18. M. G. Hanna et al., “Comparison of glass slides and various digital-slide modalities for cytopathology screening and interpretation,” Cancer Cytopathol. 125(9), 701–709 (2017). https://doi.org/10.1002/cncy.21880

19. H. El Achi and J. D. Khoury, “Artificial intelligence and digital microscopy applications in diagnostic hematopathology,” Cancers 12(4), 797 (2020). https://doi.org/10.3390/cancers12040797

20. A. L. Kiemen et al., “CODA: quantitative 3D reconstruction of large tissues at cellular resolution,” Nat. Methods 19(11), 1490–1499 (2022). https://doi.org/10.1038/s41592-022-01650-9

21. A. K. Glaser et al., “Light-sheet microscopy for slide-free non-destructive pathology of large clinical specimens,” Nat. Biomed. Eng. 1(7), 84 (2017). https://doi.org/10.1038/s41551-017-0084

22. A. Levine and O. Markowitz, “Introduction to reflectance confocal microscopy and its use in clinical practice,” JAAD Case Rep. 4(10), 1014–1023 (2018). https://doi.org/10.1016/j.jdcr.2018.09.019

23. L. J. Lesko, I. Zineh, and S. M. Huang, “What is clinical utility and why should we care?,” Clin. Pharmacol. Ther. 88(6), 729–733 (2010). https://doi.org/10.1038/clpt.2010.229

24. S. D. Grosse and M. J. Khoury, “What is the clinical utility of genetic testing?,” Genet. Med. 8(7), 448–450 (2006). https://doi.org/10.1097/01.gim.0000227935.26763.c6

25. P. Walhagen et al., “AI-based prostate analysis system trained without human supervision to predict patient outcome from tissue samples,” J. Pathol. Inf. 13, 100137 (2022). https://doi.org/10.1016/j.jpi.2022.100137

26. M. D. Zarella and K. Rivera Alvarez, “High-throughput whole-slide scanning to enable large-scale data repository building,” J. Pathol. 257(4), 383–390 (2022). https://doi.org/10.1002/path.5923

28. P. Raciti et al., “Clinical validation of artificial intelligence-augmented pathology diagnosis demonstrates significant gains in diagnostic accuracy in prostate cancer detection,” Arch. Pathol. Lab. Med. (2022). https://doi.org/10.5858/arpa.2022-0066-OA

29. L. Pantanowitz et al., “An artificial intelligence algorithm for prostate cancer diagnosis in whole slide images of core needle biopsies: a blinded clinical validation and deployment study,” Lancet Digit. Health 2(8), e407–e416 (2020). https://doi.org/10.1016/S2589-7500(20)30159-X

30. F. Han et al., “Texture feature analysis for computer-aided diagnosis on pulmonary nodules,” J. Digit. Imaging 28(1), 99–115 (2015). https://doi.org/10.1007/s10278-014-9718-8

31. V. Kumar et al., “Radiomics: the process and the challenges,” Magn. Reson. Imaging 30(9), 1234–1248 (2012). https://doi.org/10.1016/j.mri.2012.06.010

32. F. Yang et al., “Machine learning for histologic subtype classification of non-small cell lung cancer: a retrospective multicenter radiomics study,” Front. Oncol. 10, 608598 (2020). https://doi.org/10.3389/fonc.2020.608598

33. C. Lu, R. Shiradkar, and Z. Liu, “Integrating pathomics with radiomics and genomics for cancer prognosis: a brief review,” Chin. J. Cancer Res. 33(5), 563–573 (2021). https://doi.org/10.21147/j.issn.1000-9604.2021.05.03

34. L. Pantanowitz et al., “Rules of engagement: promoting academic-industry partnership in the era of digital pathology and artificial intelligence,” Acad. Pathol. 9(1), 100026 (2022). https://doi.org/10.1016/j.acpath.2022.100026

35. A. Bychkov and J. Fukuoka, “Evaluation of the global supply of pathologists,” Mod. Pathol. 35, 1473–1522 (2022). https://doi.org/10.1038/s41379-022-01050-6

36. A. D. Borowsky et al., “Digital whole slide imaging compared with light microscopy for primary diagnosis in surgical pathology,” Arch. Pathol. Lab. Med. 144(10), 1245–1253 (2020). https://doi.org/10.5858/arpa.2019-0569-OA

37. M. G. Hanna et al., “Validation of a digital pathology system including remote review during the COVID-19 pandemic,” Mod. Pathol. 33(11), 2115–2127 (2020). https://doi.org/10.1038/s41379-020-0601-5

38. S. Mukhopadhyay et al., “Whole slide imaging versus microscopy for primary diagnosis in surgical pathology: a multicenter blinded randomized noninferiority study of 1992 cases (pivotal study),” Am. J. Surg. Pathol. 42(1), 39–52 (2018). https://doi.org/10.1097/PAS.0000000000000948

Biography

Mark D. Zarella is a scientific director in the Division of Computational Pathology and Artificial Intelligence at the Mayo Clinic. He has worked in the areas of digital pathology, AI, and informatics for over 10 years. He is a board member of the Digital Pathology Association and serves on the Digital Pathology and Computational Pathology committee and AI committee for the College of American Pathologists. His interests straddle optical imaging and AI, with a particular focus on translating these novel technologies to improve patient care.

David S. McClintock is the chair of the Division of Computational Pathology and Artificial Intelligence within the Department of Laboratory Medicine and Pathology at the Mayo Clinic (Rochester). Current professional interests include the use of machine learning and artificial intelligence tools to improve patient care, clinical laboratory workflows, operational efficiency, and scientific discovery. He is an active member of the Association for Pathology Informatics (API) and Digital Pathology Association (DPA), where he serves as the API Program Committee chair and is part of the DPA Education Committee.

Harsh Batra, MBBS, is currently a postdoctoral fellow at MD Anderson Cancer Center. His training and professional background as a pathologist spans 10 years in various large-scale clinical settings. His present research projects focus on molecular immune profiling in breast cancers using tissue-based assays, including IHC, multiplex technologies, RNAscope, sequencing technologies, multiple AI pathology platforms, and other high-plex platforms.

Rama R. Gullapalli, MD, PhD, is a physician-scientist in the Departments of Pathology, Chemical and Biological Engineering at the University of New Mexico (UNM). His clinical and research interests include next generation sequencing, digital pathology, clinical informatics and personalized medicine. His research lab brings to bear a wide variety of research methods such as high throughput sequencing, fluorescence optical techniques, bioinformatics, molecular biology, and systems biology to understand the pathogenesis of hepatobiliary cancers.

Michael Valante, MBA, BSBME, is the global healthcare business lead for Dell Technologies Unstructured Data Solutions and serves as the CTO for digital pathology. His professional background of more than 20 years has focused on clinical informatics and medical imaging with global healthcare technology companies, including Philips, Eclipsys, and Bell Laboratories. He is a member of the Digital Pathology Association Foundation Board and the Educational Advisory Board for Imaging Technology News.

Vivian O. Tan, MD, is a computational pathologist within Medical & Scientific Affairs at Leica Biosystems.

Shubham Dayal, PhD, MSRA, is a senior medical writer in Medical and Scientific Affairs at Leica Biosystems. He has developed scientific/regulatory content in the fields of medical imaging, in vitro diagnostics, digital pathology, and advanced staining for more than 10 years. He is a member of the American Medical Writers Association and the Regulatory Affairs Professionals Society.

Kei Shing Oh, MD, is an anatomic/clinical pathology resident at Mount Sinai Medical Center, with a keen interest in dermatopathology. Her primary focus is on digital pathology, 3D pathology, and artificial intelligence, and she is passionate about integrating these technologies to improve pathology workflow and education.

Haydee Lara is a biomarker lead at Alexion-AstraZeneca RDU. In this role, she develops and implements biomarker strategies in clinical trials, with a special interest in histopathology, medical imaging, and image analysis. She was previously a tissue biomarker analyst at GSK, where she developed and validated multiplex IHC/image analysis assays for exploratory biomarker analysis in clinical trials. She is also a member of the DPA Education Committee and contributed to the design of the Digital Pathology Online Certificate Program from DPA/NSH.

Chris A. Garcia is the medical director in the Division of Computational Pathology and Artificial Intelligence at the Mayo Clinic. He is a pathologist (AP/CP) with subspecialty training in pathology informatics. He has over 10 years' experience working in industry (Philips Digital Pathology Solutions), for reference laboratories (Labcorp, as a medical director and strategic director), and in academic medicine (currently at the Mayo Clinic). He is actively involved in the digital pathology and pathology informatics communities. He currently focuses on developing and integrating AI/ML solutions into clinical operations and practice in laboratory medicine and pathology.

Esther Abels, CEO of SolarisRTC LLC, challenges the status quo to accelerate bringing innovative, emerging products to patients globally by sharing the regulatory, clinical, quality, IVD, and pharma knowledge she has gained over the last 25 years. She drives efforts to clarify regulatory paths and reimbursement by collaborating with pathology associations, healthcare providers, governments, and payers. In 2022, she was president of the Digital Pathology Association; she has chaired its regulatory and standards taskforce and co-founded the Pathology Innovation Collaborative Community.

© 2023 Society of Photo-Optical Instrumentation Engineers (SPIE)
Mark D. Zarella, David S. McClintock, Harsh Batra, Rama R. Gullapalli, Michael Valante, Vivian O. Tan, Shubham Dayal, Kei Shing Oh, Haydee Lara, Chris A. Garcia, and Esther Abels "Artificial intelligence and digital pathology: clinical promise and deployment considerations," Journal of Medical Imaging 10(5), 051802 (31 July 2023). https://doi.org/10.1117/1.JMI.10.5.051802
Received: 17 February 2023; Accepted: 29 June 2023; Published: 31 July 2023