LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 4 of 4

  1. Article ; Online: Graph attention-based fusion of pathology images and gene expression for prediction of cancer survival.

    Zheng, Yi / Conrad, Regan D / Green, Emily J / Burks, Eric J / Betke, Margrit / Beane, Jennifer E / Kolachalama, Vijaya B

    IEEE transactions on medical imaging

    2024  Volume PP

    Abstract Multimodal machine learning models are being developed to analyze pathology images and other modalities, such as gene expression, to gain clinical and biological insights. However, most frameworks for multimodal data fusion do not fully account for the interactions between different modalities. Here, we present an attention-based fusion architecture that integrates a graph representation of pathology images with gene expression data and concomitantly learns from the fused information to predict patient-specific survival. In our approach, pathology images are represented as undirected graphs, and their embeddings are combined with embeddings of gene expression signatures using an attention mechanism to stratify tumors by patient survival. We show that our framework improves the survival prediction of human non-small cell lung cancers, outperforming existing state-of-the-art approaches that leverage multimodal data. Our framework can facilitate spatial molecular profiling to identify tumor heterogeneity using pathology images and gene expression data, complementing results obtained from more expensive spatial transcriptomic and proteomic technologies.
    Language English
    Publishing date 2024-04-08
    Publishing country United States
    Document type Journal Article
    ZDB-ID 622531-7
    ISSN 1558-254X ; 0278-0062
    ISSN (online) 1558-254X
    ISSN 0278-0062
    DOI 10.1109/TMI.2024.3386108
    Database MEDical Literature Analysis and Retrieval System OnLINE
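    The fusion step this abstract describes — attention combining a graph representation of pathology images with gene-expression embeddings — can be sketched roughly as cross-attention, where the gene-expression signature queries the image-graph nodes. This is an illustrative sketch only, not the authors' implementation; all names, dimensions, and the single-head design are assumptions:

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention_fusion(node_emb, gene_emb, W_q, W_k, W_v):
        """Cross-attention: the gene-expression embedding queries graph nodes.

        node_emb: (n_nodes, d) embeddings of pathology-image graph nodes
        gene_emb: (d,)         embedding of a gene-expression signature
        Returns a fused (d_v,) vector for a downstream survival head.
        (Hypothetical sketch; the paper's architecture may differ.)
        """
        q = gene_emb @ W_q                    # (d_k,)
        k = node_emb @ W_k                    # (n_nodes, d_k)
        v = node_emb @ W_v                    # (n_nodes, d_v)
        scores = k @ q / np.sqrt(q.shape[0])  # (n_nodes,)
        weights = softmax(scores)             # attention over image regions
        return weights @ v                    # attention-weighted fusion

    rng = np.random.default_rng(0)
    d, dk, dv, n = 16, 8, 8, 10
    fused = attention_fusion(rng.normal(size=(n, d)), rng.normal(size=d),
                             rng.normal(size=(d, dk)), rng.normal(size=(d, dk)),
                             rng.normal(size=(d, dv)))
    print(fused.shape)  # (8,)
    ```

    The attention weights double as an interpretability signal: they indicate which image regions the gene-expression signature attends to.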

  2. Article ; Online: Graph Perceiver Network for Lung Tumor and Bronchial Premalignant Lesion Stratification from Histopathology.

    Gindra, Rushin H / Zheng, Yi / Green, Emily J / Reid, Mary E / Mazzilli, Sarah A / Merrick, Daniel T / Burks, Eric J / Kolachalama, Vijaya B / Beane, Jennifer E

    The American journal of pathology

    2024  

    Abstract Bronchial premalignant lesions (PMLs) precede the development of invasive lung squamous cell carcinoma (LUSC), posing a significant challenge in distinguishing those likely to advance to LUSC from those that might regress without intervention. In this context, we present a novel computational approach, the Graph Perceiver Network, leveraging hematoxylin and eosin-stained whole slide images to stratify endobronchial biopsies of PMLs across a spectrum from normal to tumor lung tissues. The Graph Perceiver Network outperforms existing frameworks in classification accuracy predicting LUSC, lung adenocarcinoma, and nontumor (normal) lung tissue on The Cancer Genome Atlas and Clinical Proteomic Tumor Analysis Consortium datasets containing lung resection tissues while efficiently generating pathologist-aligned, class-specific heat maps. The network was further tested using endobronchial biopsies from two data cohorts, containing normal to carcinoma in situ histology, and it demonstrated a unique capability to differentiate carcinoma in situ lung squamous PMLs based on their progression status to invasive carcinoma. The network may have utility in stratifying PMLs for chemoprevention trials or more aggressive follow-up.
    Language English
    Publishing date 2024-04-06
    Publishing country United States
    Document type Journal Article
    ZDB-ID 2943-9
    ISSN 1525-2191 ; 0002-9440
    ISSN (online) 1525-2191
    ISSN 0002-9440
    DOI 10.1016/j.ajpath.2024.03.009
    Database MEDical Literature Analysis and Retrieval System OnLINE
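    A Perceiver-style readout, as the name "Graph Perceiver Network" suggests, lets a small fixed set of latent vectors cross-attend to an arbitrarily large set of patch-node features, yielding a constant-size slide representation. The sketch below is an assumption about the general mechanism, not the paper's code:

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def perceiver_readout(latents, nodes, W_q, W_k, W_v):
        """One cross-attention step: latents (n_lat, d) attend to WSI
        patch-node features (n_nodes, d). Output size depends only on
        n_lat, not on the slide's node count. (Illustrative sketch.)
        """
        q = latents @ W_q                               # (n_lat, d_k)
        k = nodes @ W_k                                 # (n_nodes, d_k)
        v = nodes @ W_v                                 # (n_nodes, d_v)
        att = softmax(q @ k.T / np.sqrt(q.shape[1]))    # (n_lat, n_nodes)
        return att @ v                                  # (n_lat, d_v)

    rng = np.random.default_rng(1)
    d, dk, dv = 16, 8, 8
    W = [rng.normal(size=(d, dk)), rng.normal(size=(d, dk)), rng.normal(size=(d, dv))]
    out_small = perceiver_readout(rng.normal(size=(4, d)), rng.normal(size=(50, d)), *W)
    out_large = perceiver_readout(rng.normal(size=(4, d)), rng.normal(size=(5000, d)), *W)
    print(out_small.shape, out_large.shape)  # (4, 8) (4, 8) regardless of node count
    ```

    The constant-size output is what makes the approach practical for endobronchial biopsies and resections alike, whose patch counts can differ by orders of magnitude.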

  3. Article ; Online: A Graph-Transformer for Whole Slide Image Classification.

    Zheng, Yi / Gindra, Rushin H / Green, Emily J / Burks, Eric J / Betke, Margrit / Beane, Jennifer E / Kolachalama, Vijaya B

    IEEE transactions on medical imaging

    2022  Volume 41, Issue 11, Page(s) 3003–3015

    Abstract Deep learning is a powerful tool for whole slide image (WSI) analysis. Typically, when performing supervised deep learning, a WSI is divided into small patches, a model is trained on the patches, and the outcomes are aggregated to estimate disease grade. However, patch-based methods introduce label noise during training by assuming that each patch is independent with the same label as the WSI and neglect overall WSI-level information that is significant in disease grading. Here we present a Graph-Transformer (GT) that fuses a graph-based representation of a WSI and a vision transformer for processing pathology images, called GTP, to predict disease grade. We selected 4,818 WSIs from the Clinical Proteomic Tumor Analysis Consortium (CPTAC), the National Lung Screening Trial (NLST), and The Cancer Genome Atlas (TCGA), and used GTP to distinguish adenocarcinoma (LUAD) and squamous cell carcinoma (LSCC) from adjacent non-cancerous tissue (normal). First, using NLST data, we developed a contrastive learning framework to generate a feature extractor. This allowed us to compute feature vectors of individual WSI patches, which were used to represent the nodes of the graph followed by construction of the GTP framework. Our model trained on the CPTAC data achieved consistently high performance on three-label classification (normal versus LUAD versus LSCC: mean accuracy = 91.2 ± 2.5%) based on five-fold cross-validation, and mean accuracy = 82.3 ± 1.0% on external test data (TCGA). We also introduced a graph-based saliency mapping technique, called GraphCAM, that can identify regions that are highly associated with the class label. Our findings demonstrate GTP as an interpretable and effective deep learning framework for WSI-level classification.
    MeSH term(s) Proteomics ; Image Processing, Computer-Assisted/methods ; Guanosine Triphosphate
    Chemical Substances Guanosine Triphosphate (86-01-1)
    Language English
    Publishing date 2022-10-27
    Publishing country United States
    Document type Journal Article ; Research Support, U.S. Gov't, Non-P.H.S. ; Research Support, Non-U.S. Gov't ; Research Support, N.I.H., Extramural
    ZDB-ID 622531-7
    ISSN 1558-254X ; 0278-0062
    ISSN (online) 1558-254X
    ISSN 0278-0062
    DOI 10.1109/TMI.2022.3176598
    Database MEDical Literature Analysis and Retrieval System OnLINE
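    The graph construction this abstract alludes to — patch feature vectors as nodes of an undirected graph over the slide — is commonly done by connecting patches that are grid neighbours. The sketch below uses 8-neighbour (Chebyshev distance 1) adjacency as an assumed, illustrative choice; the paper's exact construction may differ:

    ```python
    import numpy as np

    def patch_graph_adjacency(coords):
        """Undirected adjacency over WSI patches: connect patches that are
        8-neighbours on the patch grid. coords: (n, 2) integer grid
        positions of tissue-bearing patches. (Hypothetical construction.)
        """
        n = len(coords)
        A = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                # Chebyshev distance 1 = horizontally, vertically,
                # or diagonally adjacent on the patch grid
                if max(abs(coords[i][0] - coords[j][0]),
                       abs(coords[i][1] - coords[j][1])) == 1:
                    A[i, j] = A[j, i] = 1
        return A

    # Three mutually adjacent patches plus one isolated patch
    coords = np.array([[0, 0], [0, 1], [1, 0], [5, 5]])
    A = patch_graph_adjacency(coords)
    print(A)
    ```

    Node features (e.g. from a contrastively trained extractor, as in the abstract) and this adjacency together define the graph that a transformer or graph network then processes at the slide level.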

  4. Book ; Online: A graph-transformer for whole slide image classification

    Zheng, Yi / Gindra, Rushin H. / Green, Emily J. / Burks, Eric J. / Betke, Margrit / Beane, Jennifer E. / Kolachalama, Vijaya B.

    2022  

    Abstract Deep learning is a powerful tool for whole slide image (WSI) analysis. Typically, when performing supervised deep learning, a WSI is divided into small patches, a model is trained on the patches, and the outcomes are aggregated to estimate disease grade. However, patch-based methods introduce label noise during training by assuming that each patch is independent with the same label as the WSI and neglect overall WSI-level information that is significant in disease grading. Here we present a Graph-Transformer (GT) that fuses a graph-based representation of a WSI and a vision transformer for processing pathology images, called GTP, to predict disease grade. We selected 4,818 WSIs from the Clinical Proteomic Tumor Analysis Consortium (CPTAC), the National Lung Screening Trial (NLST), and The Cancer Genome Atlas (TCGA), and used GTP to distinguish adenocarcinoma (LUAD) and squamous cell carcinoma (LSCC) from adjacent non-cancerous tissue (normal). First, using NLST data, we developed a contrastive learning framework to generate a feature extractor. This allowed us to compute feature vectors of individual WSI patches, which were used to represent the nodes of the graph followed by construction of the GTP framework. Our model trained on the CPTAC data achieved consistently high performance on three-label classification (normal versus LUAD versus LSCC: mean accuracy = 91.2 ± 2.5%) based on five-fold cross-validation, and mean accuracy = 82.3 ± 1.0% on external test data (TCGA). We also introduced a graph-based saliency mapping technique, called GraphCAM, that can identify regions that are highly associated with the class label. Our findings demonstrate GTP as an interpretable and effective deep learning framework for WSI-level classification.
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2022-05-19
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
