LIVIVO - The Search Portal for Life Sciences


Search results

Results 1–10 of 35


  1. Article ; Online: Weighing the evidence in sharp-wave ripples.

    Linderman, Scott W

    Neuron

    2022  Volume 110, Issue 4, Page(s) 568–570

    Abstract In this issue of Neuron, Krause and Drugowitsch (2022) present a novel approach to classifying sharp-wave ripples and find that far more encode spatial trajectories than previously thought. Their method compares a host of state-space models using what Bayesian statisticians call the model evidence.
    MeSH term(s) Action Potentials/physiology ; Bayes Theorem ; Neurons/physiology
    Language English
    Publishing date 2022-02-18
    Publishing country United States
    Document type Journal Article ; Research Support, N.I.H., Extramural ; Research Support, Non-U.S. Gov't ; Comment
    ZDB-ID 808167-0
    ISSN (online) 1097-4199
    ISSN 0896-6273
    DOI 10.1016/j.neuron.2022.01.036
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
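The method described in the abstract hinges on the model evidence, i.e., the marginal likelihood of the data under each candidate model. As a minimal illustration of evidence-based model comparison (not the paper's state-space models), the sketch below compares a stationary versus a linear-trajectory account of a toy decoded-position signal, using the closed-form evidence of conjugate Gaussian linear regression; all data and parameters are hypothetical.

```python
# Minimal sketch of Bayesian model comparison via the log model evidence.
# For conjugate Gaussian-linear models the evidence is available in closed form:
#   p(y | M) = N(y; 0, sigma2*I + (1/alpha) * Phi @ Phi.T)
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)                           # time within one ripple event
y = 0.3 + 1.5 * t + 0.1 * rng.standard_normal(t.size)   # toy decoded position

def log_evidence(Phi, y, sigma2=0.01, alpha=1.0):
    """Log marginal likelihood of y under y = Phi @ w + noise, w ~ N(0, I/alpha)."""
    cov = sigma2 * np.eye(len(y)) + (1.0 / alpha) * Phi @ Phi.T
    return multivariate_normal(mean=np.zeros(len(y)), cov=cov).logpdf(y)

Phi_stationary = np.ones((t.size, 1))                    # model 1: constant position
Phi_trajectory = np.column_stack([np.ones_like(t), t])   # model 2: linear trajectory

lz1 = log_evidence(Phi_stationary, y)
lz2 = log_evidence(Phi_trajectory, y)
print(f"log evidence (stationary) = {lz1:.1f}")
print(f"log evidence (trajectory) = {lz2:.1f}")
print(f"log Bayes factor (trajectory vs stationary) = {lz2 - lz1:.1f}")
```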


  2. Book ; Online: Revisiting Structured Variational Autoencoders

    Zhao, Yixiu / Linderman, Scott W.

    2023  

    Abstract Structured variational autoencoders (SVAEs) combine probabilistic graphical model priors on latent variables, deep neural networks to link latent variables to observed data, and structure-exploiting algorithms for approximate posterior inference. These models are particularly appealing for sequential data, where the prior can capture temporal dependencies. However, despite their conceptual elegance, SVAEs have proven difficult to implement, and more general approaches have been favored in practice. Here, we revisit SVAEs using modern machine learning tools and demonstrate their advantages over more general alternatives in terms of both accuracy and efficiency. First, we develop a modern implementation for hardware acceleration, parallelization, and automatic differentiation of the message passing algorithms at the core of the SVAE. Second, we show that by exploiting structure in the prior, the SVAE learns more accurate models and posterior distributions, which translate into improved performance on prediction tasks. Third, we show how the SVAE can naturally handle missing data, and we leverage this ability to develop a novel, self-supervised training approach. Altogether, these results show that the time is ripe to revisit structured variational autoencoders.
    Keywords Statistics - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-05-25
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
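The abstract highlights message passing algorithms at the core of the SVAE. As a minimal sketch of that core for a linear-Gaussian latent prior, the code below runs a plain Kalman filter in NumPy; in a full SVAE the observation potentials would come from a neural recognition network, and the paper's hardware-accelerated, parallelized implementation is not reproduced here. All shapes and parameters are illustrative.

```python
# Kalman filtering: the forward message passing pass for a linear-Gaussian SSM.
import numpy as np

def kalman_filter(A, Q, C, R, ys, mu0, Sigma0):
    """Filtered means/covariances for x_{t+1} = A x_t + w, y_t = C x_t + v."""
    mu, Sigma = mu0, Sigma0
    means, covs = [], []
    for y in ys:
        # Condition on the observation (in an SVAE, a NN-produced potential).
        S = C @ Sigma @ C.T + R                  # innovation covariance
        K = Sigma @ C.T @ np.linalg.inv(S)       # Kalman gain
        mu = mu + K @ (y - C @ mu)
        Sigma = Sigma - K @ C @ Sigma
        means.append(mu); covs.append(Sigma)
        # Predict the next latent state under the structured prior.
        mu = A @ mu
        Sigma = A @ Sigma @ A.T + Q
    return np.array(means), np.array(covs)

# Toy 2-D latent rotation observed through a random linear map.
rng = np.random.default_rng(1)
th = 0.1
A = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)
C = rng.standard_normal((2, 2))
xs = [np.array([1.0, 0.0])]
for _ in range(99):
    xs.append(A @ xs[-1] + rng.multivariate_normal(np.zeros(2), Q))
ys = [C @ x + rng.multivariate_normal(np.zeros(2), R) for x in xs]
means, covs = kalman_filter(A, Q, C, R, ys, np.zeros(2), np.eye(2))
print("filtered means shape:", means.shape)
```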


  3. Article ; Online: Statistical neuroscience in the single trial limit.

    Williams, Alex H / Linderman, Scott W

    Current opinion in neurobiology

    2021  Volume 70, Page(s) 193–205

    Abstract Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic 'noise' and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure - including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions - and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
    MeSH term(s) Animals ; Behavior, Animal ; Neurons/physiology ; Neurosciences
    Language English
    Publishing date 2021-11-30
    Publishing country England
    Document type Journal Article ; Research Support, N.I.H., Extramural ; Research Support, Non-U.S. Gov't ; Review
    ZDB-ID 1078046-4
    ISSN (online) 1873-6882
    ISSN 0959-4388
    DOI 10.1016/j.conb.2021.10.008
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
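One form of simplifying structure named in the abstract is temporal smoothness in neural firing rates. A minimal sketch, assuming toy Poisson spikes and a hypothetical 50 ms bandwidth: smooth a single trial's binned spike train with a Gaussian kernel to estimate a rate without trial averaging.

```python
# Single-trial firing rate estimation via Gaussian kernel smoothing.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)
dt = 0.001                                          # 1 ms bins
t = np.arange(0.0, 2.0, dt)
true_rate = 20.0 + 15.0 * np.sin(2 * np.pi * t)     # Hz, slowly varying
spikes = rng.random(t.size) < true_rate * dt        # one trial of Poisson spiking

sigma_ms = 50.0                                     # smoothing bandwidth in ms (= samples here)
rate_hat = gaussian_filter1d(spikes.astype(float), sigma=sigma_ms) / dt

print(f"mean true rate: {true_rate.mean():.1f} Hz, "
      f"mean estimate: {rate_hat.mean():.1f} Hz")
```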


  4. Article: Periodic hypothalamic attractor-like dynamics during the estrus cycle.

    Liu, Mengyu / Nair, Aditya / Linderman, Scott W / Anderson, David J

    bioRxiv : the preprint server for biology

    2023  

    Abstract Cyclic changes in hormonal state are well-known to regulate mating behavior during the female reproductive cycle, but whether and how these changes affect the dynamics of neural activity in the female brain is largely unknown. The ventromedial hypothalamus, ventro-lateral subdivision (VMHvl) contains a subpopulation of VMHvl …
    Language English
    Publishing date 2023-05-22
    Publishing country United States
    Document type Preprint
    DOI 10.1101/2023.05.22.541741
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  5. Book ; Online: Statistical Neuroscience in the Single Trial Limit

    Williams, Alex H. / Linderman, Scott W.

    2021  

    Abstract Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic "noise" and systematic changes in the animal's cognitive and behavioral state. In addition to investigating how noise and state changes impact neural computation, statistical models of trial-to-trial variability are becoming increasingly important as experimentalists aspire to study naturalistic animal behaviors, which never repeat themselves exactly and may rarely do so even approximately. Estimating the basic features of neural response distributions may seem impossible in this trial-limited regime. Fortunately, by identifying and leveraging simplifying structure in neural data -- e.g. shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions -- statistical estimation often remains tractable in practice. We review recent advances in statistical neuroscience that illustrate this trend and have enabled novel insights into the trial-by-trial operation of neural circuits.

    Comment: 25 pages, 3 figures
    Keywords Quantitative Biology - Neurons and Cognition ; Statistics - Applications
    Subject code 310 ; 612
    Publishing date 2021-03-08
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
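Another form of simplifying structure from the abstract is a shared gain modulation: single-trial responses modeled as a per-trial scalar gain times a fixed population template. The sketch below generates such data and recovers the gains with a rank-1 SVD fit; sizes, noise level, and the rank-1 estimator are illustrative assumptions, not the reviewed papers' methods.

```python
# Shared gain model: rates[k, n, t] ~ g[k] * f[n, t], fit by rank-1 SVD.
import numpy as np

rng = np.random.default_rng(3)
K, N, T = 40, 30, 100                       # trials, neurons, time bins
f = np.abs(rng.standard_normal((N, T)))     # shared response template
g = 0.5 + rng.random(K)                     # per-trial gains in [0.5, 1.5)
X = g[:, None, None] * f[None] + 0.1 * rng.standard_normal((K, N, T))

# Rank-1 least squares: flatten (neuron, time), take the top singular pair.
U, s, Vt = np.linalg.svd(X.reshape(K, N * T), full_matrices=False)
g_hat = U[:, 0] * s[0] * np.sign(Vt[0].sum())   # fix sign so gains are positive
g_hat *= g.mean() / g_hat.mean()                # resolve the scale ambiguity
print(f"correlation(true gain, recovered gain) = {np.corrcoef(g, g_hat)[0, 1]:.3f}")
```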


  6. Article: Generalized Shape Metrics on Neural Representations.

    Williams, Alex H / Kunz, Erin / Kornblith, Simon / Linderman, Scott W

    Advances in neural information processing systems

    2022  Volume 34, Page(s) 4738–4750

    Abstract Understanding the operation of biological and artificial networks remains a difficult and important challenge. To identify general principles, researchers are increasingly interested in surveying large collections of networks that are trained on, or biologically adapted to, similar tasks. A standardized set of analysis tools is now needed to identify how network-level covariates, such as architecture, anatomical brain region, and model organism, impact neural representations (hidden layer activations). Here, we provide a rigorous foundation for these analyses by defining a broad family of metric spaces that quantify representational dissimilarity. Using this framework, we modify existing representational similarity measures based on canonical correlation analysis and centered kernel alignment to satisfy the triangle inequality, formulate a novel metric that respects the inductive biases in convolutional layers, and identify approximate Euclidean embeddings that enable network representations to be incorporated into essentially any off-the-shelf machine learning method. We demonstrate these methods on large-scale datasets from biology (Allen Institute Brain Observatory) and deep learning (NAS-Bench-101). In doing so, we identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
    Language English
    Publishing date 2022-04-09
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1012320-9
    ISSN 1049-5258
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
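One member of the metric family discussed above is the orthogonal Procrustes distance between centered, norm-scaled representation matrices, which satisfies the triangle inequality. The sketch below computes it for random stand-in activations; the paper's CCA/CKA-based variants and convolution-aware metric are omitted.

```python
# Orthogonal Procrustes shape metric between two representation matrices.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def procrustes_distance(X, Y):
    """Rotation-invariant distance between (stimuli x neurons) matrices."""
    X = X - X.mean(axis=0); X = X / np.linalg.norm(X)   # center, unit Frobenius norm
    Y = Y - Y.mean(axis=0); Y = Y / np.linalg.norm(Y)
    Q, _ = orthogonal_procrustes(Y, X)                  # best rotation of Y onto X
    return np.linalg.norm(X - Y @ Q)

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 20))        # e.g., 100 stimuli x 20 neurons
Y = X @ rng.standard_normal((20, 20))     # a linearly transformed copy
Z = rng.standard_normal((100, 20))        # an unrelated representation
print(f"d(X, transformed X) = {procrustes_distance(X, Y):.3f}")
print(f"d(X, unrelated Z)   = {procrustes_distance(X, Z):.3f}")
```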


  7. Article: Point process models for sequence detection in high-dimensional neural spike trains.

    Williams, Alex H / Degleris, Anthony / Wang, Yixin / Linderman, Scott W

    Advances in neural information processing systems

    2022  Volume 33, Page(s) 14350–14361

    Abstract Sparse sequences of neural spikes are posited to underlie aspects of working memory [1], motor production [2], and learning [3, 4]. Discovering these sequences in an unsupervised manner is a longstanding problem in statistical neuroscience [5-7]. Promising recent work [4, 8] utilized a convolutive nonnegative matrix factorization model [9] to tackle this challenge. However, this model requires spike times to be discretized, utilizes a sub-optimal least-squares criterion, and does not provide uncertainty estimates for model predictions or estimated parameters. We address each of these shortcomings by developing a point process model that characterizes fine-scale sequences at the level of individual spikes and represents sequence occurrences as a small number of marked events in continuous time. This ultra-sparse representation of sequence events opens new possibilities for spike train modeling. For example, we introduce learnable time warping parameters to model sequences of varying duration, which have been experimentally observed in neural circuits [10]. We demonstrate these advantages on experimental recordings from songbird higher vocal center and rodent hippocampus.
    Language English
    Publishing date 2022-01-10
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1012320-9
    ISSN 1049-5258
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
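As a rough generative sketch of the model class in the abstract, the code below represents sequence occurrences as a handful of marked events in continuous time, each evoking a jittered, time-warped cascade of spikes across neurons, plus background spikes. Offsets, participation probability, and warp range are hypothetical choices, not the paper's fitted parameters.

```python
# Toy generative model: sparse marked events triggering spike sequences.
import numpy as np

rng = np.random.default_rng(5)
N, T = 20, 60.0                                   # neurons, seconds
offsets = np.linspace(0.0, 0.5, N)                # neuron n fires ~offset after onset

# Marked events: (onset time, warp factor stretching the sequence duration).
events = [(rng.uniform(0, T), rng.uniform(0.8, 1.25)) for _ in range(8)]

spikes = []                                       # (time, neuron) pairs
for onset, warp in events:
    for n in range(N):
        if rng.random() < 0.9:                    # each neuron participates w.p. 0.9
            t_spike = onset + warp * offsets[n] + 0.01 * rng.standard_normal()
            spikes.append((t_spike, n))
spikes += [(rng.uniform(0, T), int(rng.integers(N))) for _ in range(100)]  # background
spikes.sort()
print(f"{len(events)} sequence events -> {len(spikes)} spikes")
```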


  8. Article ; Online: Mice exhibit stochastic and efficient action switching during probabilistic decision making.

    Beron, Celia C / Neufeld, Shay Q / Linderman, Scott W / Sabatini, Bernardo L

    Proceedings of the National Academy of Sciences of the United States of America

    2022  Volume 119, Issue 15, Page(s) e2113961119

    Abstract In probabilistic and nonstationary environments, individuals must use internal and external cues to flexibly make decisions that lead to desirable outcomes. To gain insight into the process by which animals choose between actions, we trained mice in a task with time-varying reward probabilities. In our implementation of such a two-armed bandit task, thirsty mice use information about recent action and action–outcome histories to choose between two ports that deliver water probabilistically. Here we comprehensively modeled choice behavior in this task, including the trial-to-trial changes in port selection, i.e., action switching behavior. We find that mouse behavior is, at times, deterministic and, at others, apparently stochastic. The behavior deviates from that of a theoretically optimal agent performing Bayesian inference in a hidden Markov model (HMM). We formulate a set of models based on logistic regression, reinforcement learning, and sticky Bayesian inference that we demonstrate are mathematically equivalent and that accurately describe mouse behavior. The switching behavior of mice in the task is captured in each model by a stochastic action policy, a history-dependent representation of action value, and a tendency to repeat actions despite incoming evidence. The models parsimoniously capture behavior across different environmental conditions by varying the stickiness parameter, and like the mice, they achieve nearly maximal reward rates. These results indicate that mouse behavior reaches near-maximal performance with reduced action switching and can be described by a set of equivalent models with a small number of relatively fixed parameters.
    MeSH term(s) Animals ; Choice Behavior ; Decision Making ; Mice/psychology ; Reward ; Uncertainty
    Language English
    Publishing date 2022-04-06
    Publishing country United States
    Document type Journal Article
    ZDB-ID 209104-5
    ISSN (online) 1091-6490
    ISSN 0027-8424
    DOI 10.1073/pnas.2113961119
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
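To make the "sticky Bayesian inference" idea concrete, the sketch below simulates a hypothetical agent on a two-armed bandit with block switches: it tracks the hidden high-reward port with Bayesian updates but only switches actions when the posterior clears a stickiness threshold. Task parameters and the policy details are illustrative, not the paper's fitted model.

```python
# Sticky Bayesian agent on a two-armed bandit with block reversals.
import numpy as np

rng = np.random.default_rng(6)
p_high, p_low, hazard = 0.8, 0.2, 0.02      # reward probs and block-switch rate
stickiness = 0.35                           # extra evidence required to switch

state, action, belief = 0, 0, 0.5           # belief = P(port 0 is high-reward)
rewards = []
for trial in range(1000):
    # Environment: occasionally swap which port is high-reward.
    if rng.random() < hazard:
        state = 1 - state
    r = rng.random() < (p_high if action == state else p_low)
    rewards.append(r)

    # Bayesian update of the belief given (action, reward).
    lik0 = p_high if action == 0 else p_low     # P(reward | port 0 is high)
    lik1 = p_high if action == 1 else p_low     # P(reward | port 1 is high)
    if not r:
        lik0, lik1 = 1 - lik0, 1 - lik1
    belief = belief * lik0 / (belief * lik0 + (1 - belief) * lik1)
    belief = belief * (1 - hazard) + (1 - belief) * hazard  # hidden-state drift

    # Sticky policy: only switch when the posterior strongly favors the other port.
    preferred = 0 if belief > 0.5 else 1
    if preferred != action and abs(belief - 0.5) > stickiness:
        action = preferred
print(f"reward rate: {np.mean(rewards):.3f}")
```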


  9. Book ; Online: Switching Autoregressive Low-rank Tensor Models

    Lee, Hyun Dong / Warrington, Andrew / Glaser, Joshua I. / Linderman, Scott W.

    2023  

    Abstract An important problem in time-series analysis is modeling systems with time-varying dynamics. Probabilistic models with joint continuous and discrete latent states offer interpretable, efficient, and experimentally useful descriptions of such data. Commonly used models include autoregressive hidden Markov models (ARHMMs) and switching linear dynamical systems (SLDSs), each with its own advantages and disadvantages. ARHMMs permit exact inference and easy parameter estimation, but are parameter intensive when modeling long dependencies, and hence are prone to overfitting. In contrast, SLDSs can capture long-range dependencies in a parameter efficient way through Markovian latent dynamics, but present an intractable likelihood and a challenging parameter estimation task. In this paper, we propose switching autoregressive low-rank tensor (SALT) models, which retain the advantages of both approaches while ameliorating the weaknesses. SALT parameterizes the tensor of an ARHMM with a low-rank factorization to control the number of parameters and allow longer range dependencies without overfitting. We prove theoretical and discuss practical connections between SALT, linear dynamical systems, and SLDSs. We empirically demonstrate quantitative advantages of SALT models on a range of simulated and real prediction tasks, including behavioral and neural datasets. Furthermore, the learned low-rank tensor provides novel insights into temporal dependencies within each discrete state.
    Keywords Computer Science - Machine Learning ; Statistics - Methodology ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2023-06-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
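The core of SALT is replacing the full autoregressive tensor of an ARHMM with a low-rank factorization. The sketch below uses a rank-R sum of separable terms per discrete state to contrast parameter counts and form a one-step prediction; this particular factorization is an illustrative assumption, and the paper's exact tensor decomposition may differ.

```python
# Low-rank parameterization of an ARHMM's autoregressive tensor.
import numpy as np

rng = np.random.default_rng(7)
K, N, L, R = 3, 50, 10, 4                   # states, dims, lags, rank

# Full ARHMM tensor: K * N * N * L parameters.
# Low-rank version: per state, A[k, :, :, l] = sum_r U[k,:,r] V[k,:,r]^T w[k,l,r].
U = rng.standard_normal((K, N, R)) * 0.1
V = rng.standard_normal((K, N, R)) * 0.1
w = rng.standard_normal((K, L, R)) * 0.1

def predict(k, x_hist):
    """One-step AR prediction in state k from the last L observations."""
    # x_hist: (L, N), most recent last. Einsum forms sum_l A[k,:,:,l] @ x_{t-l}.
    return np.einsum('nr,mr,lr,lm->n', U[k], V[k], w[k], x_hist[::-1])

full_params = K * N * N * L
lowrank_params = K * R * (2 * N + L)
print(f"full AR tensor: {full_params} params; low-rank: {lowrank_params}")
x_hist = rng.standard_normal((L, N))
print("prediction shape:", predict(0, x_hist).shape)
```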


  10. Book ; Online: Simplified State Space Layers for Sequence Modeling

    Smith, Jimmy T. H. / Warrington, Andrew / Linderman, Scott W.

    2022  

    Abstract Models using structured state space sequence (S4) layers have achieved state-of-the-art performance on long-range sequence modeling tasks. An S4 layer combines linear state space models (SSMs), the HiPPO framework, and deep learning to achieve high performance. We build on the design of the S4 layer and introduce a new state space layer, the S5 layer. Whereas an S4 layer uses many independent single-input, single-output SSMs, the S5 layer uses one multi-input, multi-output SSM. We establish a connection between S5 and S4, and use this to develop the initialization and parameterization used by the S5 model. The result is a state space layer that can leverage efficient and widely implemented parallel scans, allowing S5 to match the computational efficiency of S4, while also achieving state-of-the-art performance on several long-range sequence modeling tasks. S5 averages 87.4% on the long range arena benchmark, and 98.5% on the most difficult Path-X task.
    Keywords Computer Science - Machine Learning
    Subject code 006
    Publishing date 2022-08-09
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
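The S5 layer's central object is a single multi-input, multi-output linear SSM whose recurrence composes associatively and can therefore run as a parallel scan. The sketch below writes that recurrence sequentially in NumPy with hypothetical sizes; an actual S5 implementation would use a parallel scan (e.g., jax.lax.associative_scan) plus the paper's initialization and parameterization.

```python
# Sequential form of the linear SSM recurrence an S5-style layer applies:
#   x_k = A x_{k-1} + B u_k,  y_k = C x_k.
# Each step is the element (A, B @ u_k) under the associative composition
#   (A2, b2) o (A1, b1) = (A2 @ A1, A2 @ b1 + b2),
# which is what makes an O(log T)-depth parallel scan possible.
import numpy as np

rng = np.random.default_rng(8)
T, P, H = 128, 8, 4                       # sequence length, state dim, channels
A = 0.9 * np.eye(P) + 0.05 * rng.standard_normal((P, P))
B = rng.standard_normal((P, H)) * 0.1
C = rng.standard_normal((H, P)) * 0.1
u = rng.standard_normal((T, H))           # input sequence

x = np.zeros(P)
ys = np.empty((T, H))
for k in range(T):
    x = A @ x + B @ u[k]
    ys[k] = C @ x
print("output shape:", ys.shape)
```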

