LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 10 of 404

  1. Book: Handbook of Statistical Bioinformatics

    Lu, Henry Horng-Shing / Zhao, Hongyu / Wells, Martin T. / Schölkopf, Bernhard

    (Springer Handbooks of Computational Statistics)

    2022  

    Author's details Henry Horng-Shing Lu is a Professor at National Chiao Tung University (NCTU), which has merged into National Yang Ming Chiao Tung University (NYCU). He has served as Vice President for Academic Affairs and Dean of the College of Science at NCTU. He is an Elected Member of the International Statistical Institute and a Principal Fellow of the Higher Education Academy. His research interests include statistics, data science, machine learning, image science, biomedical studies and industrial applications. Bernhard Schölkopf is a member of the Max Planck Society and Director of the Department of Empirical Inference at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He is also an Honorary Professor of Machine Learning at the Technical University of Berlin. His scientific interests lie in machine learning and inference from empirical data, in particular machine learning methods for extracting statistical and causal regularities. Martin T. Wells is the C
    Series title Springer Handbooks of Computational Statistics
    Keywords Computational Statistics ; Computational Biology ; Biostatistics ; Statistical Modeling ; Massive Data Sets ; Single-cell Analysis ; Network Analysis ; Systems Biology ; Probabilistic Modeling ; Statistical Methods ; Statistical Bioinformatics ; Bioinformatics
    Language English
    Size 420 p.
    Edition 2
    Publisher Springer Verlag
    Document type Book
    Note PDA Manuell_17
    Format 160 x 241 x 29 mm
    ISBN 9783662659014 ; 3662659018
    Database PDA

  2. Book ; Online: Borges and AI

    Bottou, Léon / Schölkopf, Bernhard

    2023  

    Abstract Many believe that Large Language Models (LLMs) open the era of Artificial Intelligence (AI). Some see opportunities while others see dangers. Yet both proponents and opponents grasp AI through the imagery popularised by science fiction. Will the machine become sentient and rebel against its creators? Will we experience a paperclip apocalypse? Before answering such questions, we should first ask whether this mental imagery provides a good description of the phenomenon at hand. Understanding weather patterns through the moods of the gods only goes so far. The present paper instead advocates understanding LLMs and their connection to AI through the imagery of Jorge Luis Borges, a master of 20th century literature, forerunner of magical realism, and precursor to postmodern literature. This exercise leads to a new perspective that illuminates the relation between language modelling and artificial intelligence.
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject code 501
    Publishing date 2023-09-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  3. Book ; Online: Causality for Machine Learning

    Schölkopf, Bernhard

    2019  

    Abstract Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning. This article discusses where links have been and should be established, introducing key concepts along the way. It argues that the hard open problems of machine learning and AI are intrinsically related to causality, and explains how the field is beginning to understand them.
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Statistics - Machine Learning ; I.2 ; I.5 ; K.4
    Publishing date 2019-11-24
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
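
    A toy structural causal model makes the contrast between observation and intervention, which the abstract above alludes to, concrete. The Python sketch below is my own illustration (the variables, coefficients, and linear-Gaussian form are all assumptions, not taken from the article): conditioning on the effect Y is informative about its cause X, while intervening on Y with the do-operator is not.

```python
# Toy structural causal model X -> Y (my own illustration, not from the article):
#   X := N_X        Y := 2*X + N_Y
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# Observation: conditioning on the effect Y is informative about the cause X.
near_one = np.abs(y - 1.0) < 0.1
print("E[X | Y ~ 1]      ~", round(x[near_one].mean(), 2))   # about 0.4

# Intervention: do(Y := 1) cuts the arrow X -> Y, so X keeps its distribution.
# Formally p(X | do(Y := 1)) = p(X); no re-weighting of X takes place.
print("E[X | do(Y := 1)] ~", round(x.mean(), 2))              # about 0.0
```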

  4. Book ; Online: RAVEN

    Ghosh, Partha / Sanyal, Soubhik / Schmid, Cordelia / Schölkopf, Bernhard

    Rethinking Adversarial Video Generation with Efficient Tri-plane Networks

    2024  

    Abstract We present a novel unconditional video generative model designed to address long-term spatial and temporal dependencies. To capture these dependencies, our approach incorporates a hybrid explicit-implicit tri-plane representation inspired by 3D-aware generative frameworks developed for three-dimensional object representation and employs a singular latent code to model an entire video sequence. Individual video frames are then synthesized from an intermediate tri-plane representation, which itself is derived from the primary latent code. This novel strategy reduces computational complexity by a factor of $2$ as measured in FLOPs. Consequently, our approach facilitates the efficient and temporally coherent generation of videos. Moreover, our joint frame modeling approach, in contrast to autoregressive methods, mitigates the generation of visual artifacts. We further enhance the model's capabilities by integrating an optical flow-based module within our Generative Adversarial Network (GAN) based generator architecture, thereby compensating for the constraints imposed by a smaller generator size. As a result, our model is capable of synthesizing high-fidelity video clips at a resolution of $256\times256$ pixels, with durations extending to more than $5$ seconds at a frame rate of 30 fps. The efficacy and versatility of our approach are empirically validated through qualitative and quantitative assessments across three different datasets comprising both synthetic and real video clips.
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning
    Subject code 004
    Publishing date 2024-01-11
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
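
    The hybrid explicit-implicit tri-plane representation mentioned in the abstract can be pictured as three 2D feature planes spanning the x-y, x-t and y-t axes of a video, queried per space-time point. The sketch below is a minimal stand-in for that lookup; the plane sizes, channel count, bilinear sampling and summation are my assumptions, not the paper's exact architecture.

```python
# Illustrative tri-plane lookup for video (assumed design, not the paper's
# exact architecture): features at (x, y, t) are gathered by bilinearly
# sampling three 2D feature planes (x-y, x-t, y-t) and summing the results.
import numpy as np

def bilinear(plane, u, v):
    """Bilinearly sample plane[H, W, C] at continuous coordinates (u, v)."""
    H, W, _ = plane.shape
    u, v = np.clip(u, 0, H - 1), np.clip(v, 0, W - 1)
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, H - 1), min(v0 + 1, W - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * plane[u0, v0] + du * (1 - dv) * plane[u1, v0]
            + (1 - du) * dv * plane[u0, v1] + du * dv * plane[u1, v1])

H = W = 64   # spatial resolution of the planes
T = 30       # temporal resolution
C = 16       # feature channels

rng = np.random.default_rng(0)
plane_xy = rng.normal(size=(H, W, C))
plane_xt = rng.normal(size=(H, T, C))
plane_yt = rng.normal(size=(W, T, C))

def triplane_feature(x, y, t):
    return bilinear(plane_xy, x, y) + bilinear(plane_xt, x, t) + bilinear(plane_yt, y, t)

feat = triplane_feature(12.3, 40.7, 5.5)
print(feat.shape)   # (16,) -- one feature vector per queried space-time point
```

    In the paper's setting the planes themselves would be produced by a generator from a single latent code and the per-point features decoded into pixels; here random arrays stand in for both.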

  5. Book ; Online: From Statistical to Causal Learning

    Schölkopf, Bernhard / von Kügelgen, Julius

    2022  

    Abstract We describe basic ideas underlying research to build and understand artificially intelligent systems: from symbolic approaches via statistical learning to interventional models relying on concepts of causality. Some of the hard open problems of machine learning and AI are intrinsically related to causality, and progress may require advances in our understanding of how to model and infer causality from data.

    Comment: To appear in the Proceedings of the International Congress of Mathematicians 2022, EMS Press. Both authors contributed equally to this work; names are listed in alphabetical order. 34 pages (28 content pages + references), 12 figures, 2 tables. arXiv admin note: text overlap with arXiv:1911.10500
    Keywords Computer Science - Artificial Intelligence ; Computer Science - Machine Learning ; Statistics - Machine Learning
    Publishing date 2022-04-01
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  6. Article ; Online: Artificial intelligence: Learning to see and act.

    Schölkopf, Bernhard

    Nature

    2015  Volume 518, Issue 7540, Page(s) 486–487

    MeSH term(s) Artificial Intelligence ; Humans ; Reinforcement (Psychology) ; Video Games
    Language English
    Publishing date 2015-02-26
    Publishing country England
    Document type Comment ; Journal Article
    ZDB-ID 120714-3
    ISSN (online) 1476-4687
    ISSN (print) 0028-0836
    DOI 10.1038/518486a
    Database MEDical Literature Analysis and Retrieval System OnLINE

  7. Book ; Online: On the Interventional Kullback-Leibler Divergence

    Wildberger, Jonas / Guo, Siyuan / Bhattacharyya, Arnab / Schölkopf, Bernhard

    2023  

    Abstract Modern machine learning approaches excel in static settings where a large amount of i.i.d. training data are available for a given task. In a dynamic environment, though, an intelligent agent needs to be able to transfer knowledge and re-use learned components across domains. It has been argued that this may be possible through causal models, aiming to mirror the modularity of the real world in terms of independent causal mechanisms. However, the true causal structure underlying a given set of data is generally not identifiable, so it is desirable to have means to quantify differences between models (e.g., between the ground truth and an estimate), on both the observational and interventional level. In the present work, we introduce the Interventional Kullback-Leibler (IKL) divergence to quantify both structural and distributional differences between models based on a finite set of multi-environment distributions generated by interventions from the ground truth. Since we generally cannot quantify all differences between causal models for every finite set of interventional distributions, we propose a sufficient condition on the intervention targets to identify subsets of observed variables on which the models provably agree or disagree.
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence
    Subject code 006
    Publishing date 2023-02-10
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
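
    As a rough picture of what an interventional divergence measures, the sketch below compares two linear-Gaussian candidate models of X -> Y by the KL divergence between the distributions of Y they induce under the same interventions on X. This is only the single-intervention building block in a simplified closed form of my own; the paper's IKL divergence and its identifiability conditions are defined differently.

```python
# Simplified illustration (my construction, not the paper's exact definition):
# compare two candidate linear-Gaussian causal models of X -> Y by the KL
# divergence between the distributions of Y they induce under do(X := x0).
import numpy as np

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) for univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Model A (say, the ground truth):  Y := 2.0 * X + N(0, 1.0)
# Model B (an estimate):            Y := 1.5 * X + N(0, 1.2)
def interventional_Y(model, x0):
    slope, noise_var = model
    return slope * x0, noise_var        # mean and variance of Y under do(X := x0)

model_A, model_B = (2.0, 1.0), (1.5, 1.2)

for x0 in [-1.0, 0.0, 2.0]:             # a finite set of interventions on X
    mu_a, va = interventional_Y(model_A, x0)
    mu_b, vb = interventional_Y(model_B, x0)
    print(f"do(X := {x0:+.1f})  KL(A || B) = {kl_gauss(mu_a, va, mu_b, vb):.3f}")
```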

  8. Book ; Online: Moûsai

    Schneider, Flavio / Jin, Zhijing / Schölkopf, Bernhard

    Text-to-Music Generation with Long-Context Latent Diffusion

    2023  

    Abstract The recent surge in popularity of diffusion models for image generation has brought new attention to the potential of these models in other areas of media synthesis. One area that has yet to be fully explored is the application of diffusion models to music generation. Music generation requires handling multiple aspects, including the temporal dimension, long-term structure, multiple layers of overlapping sounds, and nuances that only trained listeners can detect. In our work, we investigate the potential of diffusion models for text-conditional music generation. We develop a cascading latent diffusion approach that can generate multiple minutes of high-quality stereo music at 48kHz from textual descriptions. For each model, we make an effort to maintain reasonable inference speed, targeting real-time on a single consumer GPU. In addition to trained models, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We open-source the following: Music samples for this paper: https://bit.ly/anonymous-mousai; all music samples for all models: https://bit.ly/audio-diffusion; and codes: https://github.com/archinetai/audio-diffusion-pytorch

    Comment: Music samples for this paper: https://bit.ly/anonymous-mousai; all music samples for all models: https://bit.ly/audio-diffusion; and codes: https://github.com/archinetai/audio-diffusion-pytorch
    Keywords Computer Science - Computation and Language ; Computer Science - Machine Learning ; Computer Science - Sound ; Electrical Engineering and Systems Science - Audio and Speech Processing
    Subject code 780
    Publishing date 2023-01-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
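
    The reverse-diffusion mechanism the abstract builds on can be sketched generically. The loop below is a plain DDPM-style ancestral sampler over a latent vector with a placeholder denoiser; the noise schedule, dimensions and conditioning are invented for illustration and do not reflect the Moûsai models or the open-sourced audio-diffusion-pytorch code.

```python
# Generic reverse-diffusion (DDPM-style) sampling over a latent, shown only to
# illustrate the mechanism; the actual cascading latent diffusion models for
# text-to-music differ from this sketch.
import numpy as np

rng = np.random.default_rng(0)
T = 50                                   # diffusion steps (assumed)
betas = np.linspace(1e-4, 0.05, T)       # noise schedule (assumed)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def predict_noise(z, t, text_embedding):
    """Placeholder for a trained text-conditional denoiser network."""
    return np.zeros_like(z)              # a real model predicts the added noise

latent_dim = 256
text_embedding = rng.normal(size=64)     # stands in for an encoded text prompt

z = rng.normal(size=latent_dim)          # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = predict_noise(z, t, text_embedding)
    mean = (z - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    noise = rng.normal(size=latent_dim) if t > 0 else 0.0
    z = mean + np.sqrt(betas[t]) * noise

# In a cascade, z would be decoded and refined by further diffusion stages
# before being rendered as a stereo waveform.
print(z.shape)
```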

  9. Book ; Online: Out-of-Variable Generalization for Discriminative Models

    Guo, Siyuan / Wildberger, Jonas / Schölkopf, Bernhard

    2023  

    Abstract The ability of an agent to do well in new environments is a critical aspect of intelligence. In machine learning, this ability is known as $\textit{strong}$ or $\textit{out-of-distribution}$ generalization. However, merely considering differences in data distributions is inadequate for fully capturing differences between learning environments. In the present paper, we investigate $\textit{out-of-variable}$ generalization, which pertains to an agent's generalization capabilities concerning environments with variables that were never jointly observed before. This skill closely reflects the process of animate learning: we, too, explore Nature by probing, observing, and measuring $\textit{subsets}$ of variables at any given time. Mathematically, $\textit{out-of-variable}$ generalization requires the efficient re-use of past marginal information, i.e., information over subsets of previously observed variables. We study this problem, focusing on prediction tasks across environments that contain overlapping, yet distinct, sets of causes. We show that after fitting a classifier, the residual distribution in one environment reveals the partial derivative of the true generating function with respect to the unobserved causal parent in that environment. We leverage this information and propose a method that exhibits non-trivial out-of-variable generalization performance when facing an overlapping, yet distinct, set of causal predictors.
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2023-04-16
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
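
    The abstract's key observation, that residuals of a predictor fitted in one environment carry information about the unobserved causal parent, can be seen in a toy linear setting. The simulation below is my own construction (linear mechanism, Gaussian variables, known noise level) and only mirrors the flavour of the paper's derivative-based result.

```python
# Toy illustration (linear case only, my construction): after regressing Y on
# the observed cause X1, the residuals carry the effect of the unobserved
# cause X2 -- here their excess variance recovers |coefficient of X2|.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a, b, noise_var = 1.5, -0.8, 0.1         # Y := a*X1 + b*X2 + N(0, noise_var)

x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                  # never observed in this environment
y = a * x1 + b * x2 + rng.normal(scale=np.sqrt(noise_var), size=n)

# Fit Y from X1 alone (least squares), as if X2 did not exist.
a_hat = np.dot(x1, y) / np.dot(x1, x1)
residual = y - a_hat * x1

# Var(residual) = b**2 * Var(X2) + noise_var, so the residual distribution
# reveals |b| = |dY/dX2| once the noise level is known.
b_abs_est = np.sqrt(max(residual.var() - noise_var, 0.0))
print(f"|b| true = {abs(b):.2f}, recovered from residuals = {b_abs_est:.2f}")
```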

  10. Book ; Online: The Hessian perspective into the Nature of Convolutional Neural Networks

    Singh, Sidak Pal / Hofmann, Thomas / Schölkopf, Bernhard

    2023  

    Abstract While Convolutional Neural Networks (CNNs) have long been investigated and applied, as well as theorized, we aim to provide a slightly different perspective into their nature -- through the perspective of their Hessian maps. The reason is that the loss Hessian captures the pairwise interaction of parameters and therefore forms a natural ground to probe how the architectural aspects of CNN get manifested in its structure and properties. We develop a framework relying on Toeplitz representation of CNNs, and then utilize it to reveal the Hessian structure and, in particular, its rank. We prove tight upper bounds (with linear activations), which closely follow the empirical trend of the Hessian rank and hold in practice in more general settings. Overall, our work generalizes and establishes the key insight that, even in CNNs, the Hessian rank grows as the square root of the number of parameters.

    Comment: ICML 2023 conference proceedings
    Keywords Computer Science - Machine Learning ; Statistics - Machine Learning
    Publishing date 2023-05-15
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
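
    One way to get a feel for the quantity the abstract studies is to compute a loss Hessian numerically for a very small model. The sketch below builds a two-layer linear 1D CNN, estimates the Hessian by central finite differences, and reports its rank against the parameter count; it is a probe of my own, not the paper's Toeplitz-based analysis, and it does not reproduce the stated bounds.

```python
# Numerical probe (my own, not the paper's Toeplitz analysis): compute the loss
# Hessian of a tiny two-layer linear 1D CNN by finite differences and compare
# its rank with the number of parameters.
import numpy as np

rng = np.random.default_rng(0)
k = 3                                     # kernel size of both conv layers
n_samples, n_in = 4, 16
X = rng.normal(size=(n_samples, n_in))
Y = rng.normal(size=(n_samples, n_in - 2 * (k - 1)))   # valid convs shrink length

def loss(params):
    w1, w2 = params[:k], params[k:]
    total = 0.0
    for x, y in zip(X, Y):
        h = np.convolve(x, w1, mode="valid")            # conv layer 1 (linear)
        out = np.convolve(h, w2, mode="valid")          # conv layer 2 (linear)
        total += np.mean((out - y) ** 2)
    return total / n_samples

p = rng.normal(size=2 * k)                # all parameters, flattened
P, h = len(p), 1e-4
H = np.zeros((P, P))
for i in range(P):                        # central finite differences
    for j in range(P):
        e_i, e_j = np.eye(P)[i] * h, np.eye(P)[j] * h
        H[i, j] = (loss(p + e_i + e_j) - loss(p + e_i - e_j)
                   - loss(p - e_i + e_j) + loss(p - e_i - e_j)) / (4 * h * h)

print("parameters:", P, " Hessian rank:", np.linalg.matrix_rank(H, tol=1e-6))
```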
