Scientists train deep-learning models to scrutinize biopsies like a human pathologist

by Newsdesk

[Image: After training on pathologists' slide-reviewing data, the PEAN model performs a multiclassification task and imitates the pathologists' slide-reviewing behavior (Panel a). Panel b shows the data distribution of the training, internal testing and external testing datasets; the color legend for the different diseases also applies to Panels c and d. Panel c lists the total number of patients with each skin condition, and Panel d the number of slide-reviewing operations performed by each pathologist, with the "Overlap" column covering images reviewed by each pathologist. Panel e shows regions of interest as heatmaps (second row) in which the pathologist's gaze closely overlaps the actual tumor tissue, marked in blue in the first row. Credit: Tianhang Nan, Northeastern University, China]

In the age of AI, many health care providers dream of a digital assistant, unencumbered by fatigue, workload, burnout or hunger, that could provide a quick second opinion on medical decisions, including diagnoses, treatment plans and prescriptions. Today, the computing power and AI know-how to develop such assistants are available. However, replicating the expertise of a specially trained, highly experienced pathologist, radiologist or other specialist is neither easy nor straightforward.

AI algorithms, in particular, require vast amounts of data to produce highly accurate models, and the more high-quality data, the better. For pathologists, a method called pixel-wise manual annotation has been used with great success to train AI models to diagnose specific diseases from tissue biopsy images. The method, however, requires a trained pathologist to annotate every pixel of a biopsy image, outlining regions of interest for model training. The annotation burden on pathologists is obvious; it limits the amount of quality data that can be created for training and thereby the diagnostic precision of the eventual model.

To address this challenge, a team of researchers led by scientists from the MedSight AI Research Lab, The First Hospital of China Medical University and the National Joint Engineering Research Center for Theranostics of Immunological Skin Diseases in Shenyang, China, developed a method to annotate biopsy image data with eye-tracking devices, significantly reducing the burden of manually annotating every pixel of interest. The researchers published their study in Nature Communications on July 1.

"To obtain pathologists' expertise with minimal pathologist workload, … we collect[ed] the image review patterns of pathologists [using] eye-tracking devices. Simultaneously, we design[ed] a deep learning system, Pathology Expertise Acquisition Network (PEAN), based on the collected visual patterns, which can decode pathologists' expertise [and] diagnose [whole slide images]," said Xiaoyu Cui, associate professor at the MedSight AI Research Lab in the College of Medicine and Biological Information Engineering at Northeastern University and senior author of the paper.
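The core idea, turning raw gaze recordings into spatially resolved labels, can be sketched in a few lines of code. The snippet below is a hypothetical illustration rather than the authors' implementation: it assumes a simple fixation log of (x, y, dwell time, zoom level) tuples and accumulates them into a normalized heatmap that could serve as a weak substitute for pixel-wise annotations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_to_heatmap(fixations, slide_shape, sigma=32.0):
    """Accumulate eye-tracking fixations into a region-of-interest heatmap.

    fixations   : iterable of (x, y, dwell_seconds, zoom_level) tuples in
                  slide-pixel coordinates (a hypothetical log format).
    slide_shape : (height, width) of a downsampled whole-slide image.
    sigma       : Gaussian spread approximating the foveal region, in pixels.
    """
    heat = np.zeros(slide_shape, dtype=np.float32)
    h, w = slide_shape
    for x, y, dwell, zoom in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < h and 0 <= xi < w:
            # Long, zoomed-in looks are assumed to signal diagnostically
            # relevant tissue, so weight by dwell time and magnification.
            heat[yi, xi] += dwell * zoom
    heat = gaussian_filter(heat, sigma=sigma)
    if heat.max() > 0:
        heat /= heat.max()  # normalize to [0, 1]
    return heat

# Example: three fixations on a 512 x 512 slide thumbnail.
fixations = [(120, 200, 1.5, 10), (125, 205, 2.0, 20), (400, 90, 0.3, 5)]
roi = gaze_to_heatmap(fixations, (512, 512))
weak_mask = roi > 0.5  # a crude binary region of interest for weak supervision
```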
Specifically, the team hypothesized that the visual data obtained with eye-tracking devices while pathologists review tissue biopsy images could teach an AI model which areas of a biopsy image are of particular interest, providing a far less burdensome alternative to pixel-wise annotation. In this way, the team hoped to extract pathologists' expertise with much less labor and to generate far more data for developing and training more accurate deep learning-assisted diagnostic models.

[Image: Operational demonstration of PEAN (1). Credit: Nature Communications (2025). DOI: 10.1038/s41467-025-60307-1]

To achieve this, the team collected slide-reviewing data from pathologists using custom-developed software and an eye-tracking device that recorded the pathologists' eye movements, their zooming and panning of whole-slide tissue images, and the diagnosis for each sample. A total of 5,881 tissue samples encompassing five different types of skin lesions were reviewed.

The PEAN system computes "expertise values" for all areas of a tissue sample, simulating the pathologist's regions of interest, by comparing the eye-tracking data with manual pixel annotations of the same biopsy images. With this training data, PEAN models could predict the suspicious regions of each biopsy image to imitate pathologists' expertise (PEAN-I) or classify tissue sample diagnoses (PEAN-C).

Remarkably, PEAN-C achieved an accuracy of 96.3% and an area under the curve (AUC) of 0.992, a measure of how well a model distinguishes positive from negative samples, on the internal testing set, and an accuracy of 93.0% and an AUC of 0.984 on an external testing set of samples the system had not been trained on. On that external testing set, PEAN-C surpassed the accuracy of the second-best AI classifier by 5.5%.

The PEAN-I system, by imitating the expertise of pathologists, can additionally select regions of interest that help other learning models diagnose tissue images more accurately. When three other models, CLAM, ABMIL and TransMIL, were trained with tissue sample images generated by PEAN-I, their accuracy and AUC increased significantly, with p-values of 0.0053 and 0.0161, respectively, as determined by paired t-tests.

"PEAN is not merely a new deep learning-based diagnosis system but a pioneering paradigm with the potential to revolutionize the current state of intelligent medical research. It can extract and quantify human diagnostic expertise, thereby overcoming common drawbacks of mainstream models, such as high human resource consumption and low trust from physicians," said Cui.

The research team acknowledges that they have realized only a fraction of PEAN's potential for assisting health care providers with disease classification and lesion detection. In the future, the authors would like to apply PEAN to a range of downstream tasks, including personalized diagnosis, bionic humans and multimodal large predictive models.

"As for the ultimate goal, we aim to develop a unique 'replica digital human' for each experienced pathologist using PEAN and large language models, … facilitated by PEAN's two major advantages: low data collection costs and advanced conceptual design, enabling easy, large-scale multimodal data collection," said Cui.
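For readers who want to see how figures like those quoted earlier are computed, the headline statistics (multiclass accuracy, AUC and paired t-tests) can be reproduced in outline with common tooling. The sketch below uses synthetic numbers, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 slides, 5 skin-lesion classes.
y_true = rng.integers(0, 5, size=200)
y_prob = rng.dirichlet(np.ones(5), size=200)  # predicted class probabilities
y_pred = y_prob.argmax(axis=1)

acc = accuracy_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_prob, multi_class="ovr")  # one-vs-rest multiclass AUC
print(f"accuracy={acc:.3f}  AUC={auc:.3f}")

# Paired t-test: per-fold accuracy of a baseline model vs. the same model
# trained on regions selected by a PEAN-I-like step (numbers are invented).
baseline  = np.array([0.88, 0.90, 0.87, 0.89, 0.91])
with_rois = np.array([0.92, 0.93, 0.90, 0.93, 0.94])
t_stat, p_value = ttest_rel(with_rois, baseline)
print(f"paired t-test p-value = {p_value:.4f}")
```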
More information: Tianhang Nan et al, Deep learning quantifies pathologists' visual patterns for whole slide image diagnosis, Nature Communications (2025). DOI: 10.1038/s41467-025-60307-1

Journal information: Nature Communications

Provided by MedSight AI Research Lab


Source: Medical Xpress


