LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-10 of 40

  1. Article: Graph Representation Forecasting of Patient's Medical Conditions: Toward a Digital Twin.

    Barbiero, Pietro / Viñas Torné, Ramon / Lió, Pietro

    Frontiers in genetics

    2021  Volume 12, Page(s) 652907

    Abstract Objective: ...
    Language English
    Publishing date 2021-09-16
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2606823-0
    ISSN 1664-8021
    DOI 10.3389/fgene.2021.652907
    Database MEDLINE (MEDical Literature Analysis and Retrieval System Online)


  2. Book ; Online: Digital Histopathology with Graph Neural Networks

    di Villaforesta, Alessandro Farace / Magister, Lucie Charlotte / Barbiero, Pietro / Liò, Pietro

    Concepts and Explanations for Clinicians

    2023  

    Abstract To address the challenge of the "black-box" nature of deep learning in medical settings, we combine GCExplainer, an automated concept discovery solution, with Logic Explained Networks to provide global explanations for Graph Neural Networks. We demonstrate this using a generally applicable graph construction and classification pipeline, involving panoptic segmentation with HoVer-Net and cancer prediction with Graph Convolution Networks. By training on H&E slides of breast cancer, we show promising results in offering explainable and trustworthy AI tools for clinicians.
    Keywords Physics - Medical Physics ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning ; Electrical Engineering and Systems Science - Image and Video Processing
    Publishing date 2023-12-03
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  3. Article ; Online: Graph Representation Forecasting of Patient's Medical Conditions

    Pietro Barbiero / Ramon Viñas Torné / Pietro Lió

    Frontiers in Genetics

    Toward a Digital Twin

    2021  Volume 12

    Abstract Objective: Modern medicine needs to shift from a wait-and-react, curative discipline to a preventative, interdisciplinary science aiming at providing personalized, systemic, and precise treatment plans to patients. To this purpose, we propose a “digital twin” of patients modeling the human body as a whole and providing a panoramic view over individuals' conditions. Methods: We propose a general framework that composes advanced artificial intelligence (AI) approaches and integrates mathematical modeling to provide a panoramic view over current and future pathophysiological conditions. Our modular architecture is based on a graph neural network (GNN) forecasting clinically relevant endpoints (such as blood pressure) and a generative adversarial network (GAN) providing a proof of concept of transcriptomic integrability. Results: We tested our digital twin model on two simulated clinical case studies combining information at organ, tissue, and cellular level. We provided a panoramic overview over the patient's current and future conditions by monitoring and forecasting clinically relevant endpoints representing the evolution of the patient's vital parameters using the GNN model. We showed how to use the GAN to generate multi-tissue expression data for blood and lung to find associations between cytokines conditioned on the expression of genes in the renin-angiotensin pathway. Our approach detected inflammatory cytokines which are known to affect blood pressure and have previously been associated with SARS-CoV-2 infection (e.g., CXCR6, XCL1, and others). Significance: The graph representation of a computational patient has potential to solve important technological challenges in integrating multiscale computational modeling with AI. We believe that this work represents a step forward toward next-generation devices for precision and predictive medicine.
    Keywords digital twin ; generative adversarial networks ; monitoring ; graph representation learning ; precision medicine ; Genetics ; QH426-470
    Subject code 006
    Language English
    Publishing date 2021-09-01T00:00:00Z
    Publisher Frontiers Media S.A.
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

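The GNN component this abstract describes (message passing over a physiological graph, then forecasting a clinical endpoint such as blood pressure) can be illustrated with a minimal sketch. The three-organ graph, the random weights, and the linear readout below are illustrative placeholders, not the architecture from the paper:

```python
import numpy as np

def gnn_step(X, A, W_self, W_neigh):
    """One message-passing step: each node (organ/tissue) updates its
    state from its own features and the mean of its neighbours'."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    neigh_mean = (A @ X) / deg
    return np.tanh(X @ W_self + neigh_mean @ W_neigh)

# Toy physiological graph: a heart - lung - kidney chain (hypothetical).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 2))          # current vital-sign features per node
W_self = rng.normal(size=(2, 2))     # untrained example weights
W_neigh = rng.normal(size=(2, 2))
w_out = rng.normal(size=2)

H = gnn_step(X, A, W_self, W_neigh)  # updated node states
endpoint = H @ w_out                 # forecast endpoint value per node
```

A trained model would stack several such steps and fit the weights to longitudinal patient data; this sketch only shows the graph-update mechanics.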

  4. Article ; Online: The Computational Patient has Diabetes and a COVID

    Barbiero, Pietro / Lió, Pietro

    medRxiv

    Abstract Medicine is moving from reacting to disease to preparing personalised and precision paths to well-being. The complex and multi-level pathophysiological patterns of most diseases require a systemic medicine approach and are challenging current medical therapies. Computational medicine is a vibrant interdisciplinary field that could help move from an organ-centered to a process-oriented or systemic approach to medical data analysis. The resulting computational patient may require an international interdisciplinary effort, probably of larger scientific and technological interdisciplinarity than the sequencing of the human genome. When deployed, it will have a profound impact on how healthcare is delivered to patients. Here we present a computational patient model that integrates, refines and extends recent specific mechanistic or phenomenological models of cardiovascular, RAS and diabetic processes. Our aim is twofold: to analyse the modularity and composability of the building blocks of the computational patient and to study the dynamical properties of well-being and disease states in a broader functional context. We present results from a number of experiments, among which we characterise the dynamical impact of COVID-19 and type-2 diabetes (T2D) on cardiovascular and inflammation conditions. We tested these experiments under different exercise, meal and drug regimens. We report results showing the striking importance of transient dynamical responses to acute conditions, and we provide guidelines for system design principles governing the inter-relationship between modules and components in systemic medicine. Finally, this initial computational patient can be used as a toolbox for further modifications and extensions.
    Keywords covid19
    Language English
    Publishing date 2020-06-12
    Publisher Cold Spring Harbor Laboratory Press
    Document type Article ; Online
    DOI 10.1101/2020.06.10.20127183
    Database COVID19


  5. Book ; Online: The Computational Patient has Diabetes and a COVID

    Barbiero, Pietro / Lió, Pietro

    2020  

    Abstract Medicine is moving from a curative discipline to a preventative discipline relying on personalised and precise treatment plans. The complex and multi-level pathophysiological patterns of most diseases require a systemic medicine approach and are challenging current medical therapies. On the other hand, computational medicine is a vibrant interdisciplinary field that could help move from an organ-centered approach to a process-oriented approach. The ideal computational patient would require an international interdisciplinary effort, of larger scientific and technological interdisciplinarity than the Human Genome Project. When deployed, such a patient would have a profound impact on how healthcare is delivered to patients. Here we present a computational patient model that integrates, refines and extends recent mechanistic or phenomenological models of cardiovascular, RAS and diabetic processes. Our aim is twofold: to analyse the modularity and composability of the building blocks of the computational patient and to study the dynamical properties of well-being and disease states in a broader functional context. We present results from a number of experiments, among which we characterise the dynamic impact of COVID-19 and type-2 diabetes (T2D) on cardiovascular and inflammation conditions. We tested these experiments under different exercise, meal and drug regimens. We report results showing the striking importance of transient dynamical responses to acute conditions, and we provide guidelines for system design principles governing the inter-relationship between modules and components in systemic medicine. Finally, this initial computational patient can be used as a toolbox for further modifications and extensions.

    Comment: 37 pages
    Keywords Computer Science - Computational Engineering, Finance, and Science ; Quantitative Biology - Quantitative Methods ; covid19
    Publishing date 2020-06-09
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  6. Book ; Online: GCI

    Kazhdan, Dmitry / Dimanov, Botty / Magister, Lucie Charlotte / Barbiero, Pietro / Jamnik, Mateja / Lio, Pietro

    A (G)raph (C)oncept (I)nterpretation Framework

    2023  

    Abstract Explainable AI (XAI) underwent a recent surge in research on concept extraction, focusing on extracting human-interpretable concepts from Deep Neural Networks. An important challenge facing concept extraction approaches is the difficulty of interpreting and evaluating discovered concepts, especially for complex tasks such as molecular property prediction. We address this challenge by presenting GCI: a (G)raph (C)oncept (I)nterpretation framework, used for quantitatively measuring alignment between concepts discovered from Graph Neural Networks (GNNs) and their corresponding human interpretations. GCI encodes concept interpretations as functions, which can be used to quantitatively measure the alignment between a given interpretation and concept definition. We demonstrate four applications of GCI: (i) quantitatively evaluating concept extractors, (ii) measuring alignment between concept extractors and human interpretations, (iii) measuring the completeness of interpretations with respect to an end task and (iv) a practical application of GCI to molecular property prediction, in which we demonstrate how to use chemical functional groups to explain GNNs trained on molecular property prediction tasks, and implement interpretations with a 0.76 AUCROC completeness score.
    Keywords Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-02-09
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

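The core idea in this abstract, encoding a human interpretation as a function and scoring its alignment with a discovered concept, can be sketched in a few lines. The molecule encoding and the hydroxyl-group predicate below are invented for illustration and are not GCI's actual API:

```python
def concept_alignment(instances, interpretation):
    """Alignment score: the fraction of extracted concept instances that
    satisfy the human interpretation, encoded as a boolean function."""
    return sum(interpretation(g) for g in instances) / len(instances)

# Hypothetical concept: subgraphs reduced to their sets of atom labels.
instances = [{"C", "O", "H"}, {"C", "H"}, {"O", "H"}, {"C", "O", "H"}]

# Interpretation: "the concept is a hydroxyl group (contains O and H)".
has_hydroxyl = lambda atoms: {"O", "H"} <= atoms

score = concept_alignment(instances, has_hydroxyl)  # 3 of 4 instances match
```

Measured over real extracted concepts, this kind of score is what the abstract's completeness and alignment figures summarise.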

  7. Book ; Online: From Charts to Atlas

    Crisostomi, Donato / Cannistraci, Irene / Moschella, Luca / Barbiero, Pietro / Ciccone, Marco / Liò, Pietro / Rodolà, Emanuele

    Merging Latent Spaces into One

    2023  

    Abstract Models trained on semantically related datasets and tasks exhibit comparable inter-sample relations within their latent spaces. In this study, we investigate the aggregation of such latent spaces to create a unified space encompassing the combined information. To this end, we introduce Relative Latent Space Aggregation, a two-step approach that first renders the spaces comparable using relative representations, and then aggregates them via a simple mean. We carefully divide a classification problem into a series of learning tasks under three different settings: sharing samples, classes, or neither. We then train a model on each task and aggregate the resulting latent spaces. We compare the aggregated space with that derived from an end-to-end model trained over all tasks and show that the two spaces are similar. We then observe that the aggregated space is better suited for classification, and empirically demonstrate that this is due to the unique imprints left by task-specific embedders within the representations. We finally test our framework in scenarios where no shared region exists and show that it can still be used to merge the spaces, albeit with diminished benefits over naive merging.

    Comment: To appear in the NeurReps workshop @ NeurIPS 2023
    Keywords Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-11-11
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

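The two-step recipe in this abstract, first making the spaces comparable via relative representations and then merging them by a simple mean, can be sketched as follows. The anchor choice and the synthetic embeddings are placeholders, not the paper's experimental setup:

```python
import numpy as np

def relative_representation(Z, anchor_idx):
    """Re-express each embedding by its cosine similarity to shared anchors."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    An = Zn[anchor_idx]                  # normalised anchor embeddings
    return Zn @ An.T                     # shape: (n_samples, n_anchors)

def aggregate(spaces, anchor_idx):
    """Render each task-specific latent space comparable, then average."""
    return np.mean([relative_representation(Z, anchor_idx) for Z in spaces],
                   axis=0)

# Two hypothetical task-specific latent spaces over the same 10 samples.
rng = np.random.default_rng(0)
Z1 = rng.normal(size=(10, 4))
Z2 = rng.normal(size=(10, 4))
merged = aggregate([Z1, Z2], anchor_idx=[0, 1, 2])  # unified (10, 3) space
```

The relative step is what makes the mean meaningful: raw latent coordinates from independently trained models are not aligned, but similarities to a shared set of anchor samples are.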

  8. Book ; Online: Categorical Foundations of Explainable AI

    Barbiero, Pietro / Fioravanti, Stefano / Giannini, Francesco / Tonda, Alberto / Lio, Pietro / Di Lavore, Elena

    2023  

    Abstract Explainable AI (XAI) aims to address the human need for safe and reliable AI systems. However, numerous surveys emphasize the absence of a sound mathematical formalization of key XAI notions, remarkably including the term "explanation", which still lacks a precise definition. To bridge this gap, this paper presents the first mathematically rigorous definitions of key XAI notions and processes, using the well-founded formalism of category theory. We show that our categorical framework allows us to: (i) model existing learning schemes and architectures, (ii) formally define the term "explanation", (iii) establish a theoretical basis for XAI taxonomies, and (iv) analyze commonly overlooked aspects of explanation methods. As a consequence, our categorical framework promotes the ethical and secure deployment of AI technologies, as it represents a significant step towards a sound theoretical foundation of explainable AI.
    Keywords Computer Science - Artificial Intelligence ; Computer Science - Machine Learning ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2023-04-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  9. Book ; Online: SHARCS

    Dominici, Gabriele / Barbiero, Pietro / Magister, Lucie Charlotte / Liò, Pietro / Simidjievski, Nikola

    Shared Concept Space for Explainable Multimodal Learning

    2023  

    Abstract Multimodal learning is an essential paradigm for addressing complex real-world problems, where individual data modalities are typically insufficient to accurately solve a given modelling task. While various deep learning approaches have successfully addressed these challenges, their reasoning process is often opaque, limiting the capabilities for a principled explainable cross-modal analysis and any domain-expert intervention. In this paper, we introduce SHARCS (SHARed Concept Space), a novel concept-based approach for explainable multimodal learning. SHARCS learns and maps interpretable concepts from different heterogeneous modalities into a single unified concept-manifold, which leads to an intuitive projection of semantically similar cross-modal concepts. We demonstrate that such an approach can lead to inherently explainable task predictions while also improving downstream predictive performance. Moreover, we show that SHARCS can operate in, and significantly outperform other approaches in, practically significant scenarios, such as retrieval of missing modalities and cross-modal explanations. Our approach is model-agnostic and easily applicable to different types (and numbers) of modalities, thus advancing the development of effective, interpretable, and trustworthy multimodal approaches.
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence
    Subject code 004
    Publishing date 2023-07-01
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  10. Book ; Online: Everybody Needs a Little HELP

    Jürß, Jonas / Magister, Lucie Charlotte / Barbiero, Pietro / Liò, Pietro / Simidjievski, Nikola

    Explaining Graphs via Hierarchical Concepts

    2023  

    Abstract Graph neural networks (GNNs) have led to major breakthroughs in a variety of domains such as drug discovery, social network analysis, and travel time estimation. However, they lack interpretability, which hinders human trust and thereby deployment to settings with high-stakes decisions. A line of interpretable methods approaches this by discovering a small set of relevant concepts as subgraphs in the last GNN layer that together explain the prediction. This can yield oversimplified explanations, failing to explain the interaction between GNN layers. To address this oversight, we provide HELP (Hierarchical Explainable Latent Pooling), a novel, inherently interpretable graph pooling approach that reveals how concepts from different GNN layers compose to new ones in later steps. HELP is more than 1-WL expressive and is the first non-spectral, end-to-end-learnable, hierarchical graph pooling method that can learn to pool a variable number of arbitrary connected components. We empirically demonstrate that it performs on par with standard GCNs and popular pooling methods in terms of accuracy while yielding explanations that are aligned with expert knowledge in the domains of chemistry and social networks. In addition to a qualitative analysis, we employ concept completeness scores as well as concept conformity, a novel metric to measure the noise in discovered concepts, quantitatively verifying that the discovered concepts are significantly easier to fully understand than those from previous work. Our work represents a first step towards an understanding of graph neural networks that goes beyond a set of concepts from the final layer and instead explains the complex interplay of concepts on different levels.

    Comment: 33 pages, 16 figures, accepted at the NeurIPS 2023 GLFrontiers Workshop
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence
    Subject code 006
    Publishing date 2023-11-25
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

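The hierarchical pooling idea in this abstract, merging groups of nodes (discovered concepts) into super-nodes so that later layers operate on a coarser graph, can be sketched with a hard assignment. HELP learns its assignments end to end; the fixed assignment here is only a placeholder:

```python
import numpy as np

def pool_groups(A, assignment):
    """Coarsen a graph: collapse each node group into one super-node,
    keeping an edge between super-nodes iff any cross-group edge exists."""
    n, k = A.shape[0], assignment.max() + 1
    S = np.zeros((n, k))
    S[np.arange(n), assignment] = 1.0    # hard cluster-assignment matrix
    A_coarse = S.T @ A @ S               # counts edges between groups
    np.fill_diagonal(A_coarse, 0.0)      # drop within-group connections
    return (A_coarse > 0).astype(float)

# Path graph 0-1-2-3; pool {0, 1} and {2, 3} into two super-nodes.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
A2 = pool_groups(A, np.array([0, 0, 1, 1]))  # 2x2 coarsened adjacency
```

Inspecting which nodes each super-node absorbed at each level is what gives a hierarchical method of this kind its layer-by-layer explanations.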
