LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-6 of 6

  1. Article ; Online: Active Learning for Discrete Latent Variable Models.

    Jha, Aditi / Ashwood, Zoe C / Pillow, Jonathan W

    Neural computation

    2024  Volume 36, Issue 3, Page(s) 437–474

    Abstract Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models. We first apply our method to a class of models known as mixtures of linear regressions (MLR). While it is well known that active learning confers no advantage for linear-gaussian regression models, we use Fisher information to show analytically that active learning can nevertheless achieve large gains for mixtures of such models, and we validate this improvement using both simulations and real-world data. We then consider a powerful class of temporally structured latent variable models given by a hidden Markov model (HMM) with generalized linear model (GLM) observations, which has recently been used to identify discrete states from animal decision-making data. We show that our method substantially reduces the amount of data needed to fit GLM-HMMs and outperforms a variety of approximate methods based on variational and amortized inference. Infomax learning for latent variable models thus offers a powerful approach for characterizing temporally structured latent states, with a wide variety of applications in neuroscience and beyond.
    Language English
    Publishing date 2024-02-16
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1025692-1
    ISSN (online) 1530-888X
    ISSN (print) 0899-7667
    DOI 10.1162/neco_a_01646
    Database MEDical Literature Analysis and Retrieval System OnLINE
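    The input-selection idea summarized in this abstract can be sketched for a two-component mixture of linear regressions, using discretized predictive entropy as a stand-in for the paper's mutual-information objective (all weights, sizes, and settings below are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 2-component mixture of linear regressions (MLR):
    # y = w_z . x + Gaussian noise, latent component z ~ Categorical(pi).
    true_w = np.array([[2.0, -1.0],
                       [-1.5, 0.5]])   # per-component weights (illustrative)
    pi = np.array([0.5, 0.5])          # mixture weights
    noise_sd = 0.1

    def predictive_entropy(x, w_est, pi_est, sd, grid):
        """Entropy of the marginal predictive p(y | x) under the current MLR
        estimate, computed on a discretized y-grid. High entropy flags inputs
        where the components disagree, a simple proxy for information gain."""
        means = w_est @ x                              # (K,) component means
        comp = np.exp(-0.5 * ((grid[:, None] - means) / sd) ** 2)
        p = (comp / (sd * np.sqrt(2 * np.pi))) @ pi_est
        p /= p.sum()                                   # normalize over the grid
        return -np.sum(p * np.log(p + 1e-12))

    # Score a candidate pool and pick the most informative input.
    candidates = rng.normal(size=(50, 2))
    grid = np.linspace(-6.0, 6.0, 401)
    scores = [predictive_entropy(x, true_w, pi, noise_sd, grid) for x in candidates]
    best = candidates[int(np.argmax(scores))]
    ```

    Inputs where the component predictions diverge score highest; a single linear-Gaussian model has no such divergence, which is one way to see the abstract's point that active learning helps mixtures but not plain linear regression.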

  2. Article: Inferring learning rules from animal decision-making.

    Ashwood, Zoe C / Roy, Nicholas A / Bak, Ji Hyun / Pillow, Jonathan W

    Advances in neural information processing systems

    2022  Volume 33, Page(s) 3442–3453

    Abstract How do animals learn? This remains an elusive question in neuroscience. Whereas reinforcement learning often focuses on the design of algorithms that enable artificial agents to efficiently learn new tasks, here we develop a modeling framework to directly infer the empirical learning rules that animals use to acquire new behaviors. Our method efficiently infers the trial-to-trial changes in an animal's policy, and decomposes those changes into a learning component and a noise component. Specifically, this allows us to: (i) compare different learning rules and objective functions that an animal may be using to update its policy; (ii) estimate distinct learning rates for different parameters of an animal's policy; (iii) identify variations in learning across cohorts of animals; and (iv) uncover trial-to-trial changes that are not captured by normative learning rules. After validating our framework on simulated choice data, we applied our model to data from rats and mice learning perceptual decision-making tasks. We found that certain learning rules were far more capable of explaining trial-to-trial changes in an animal's policy. Whereas the average contribution of the conventional REINFORCE learning rule to the policy update for mice learning the International Brain Laboratory's task was just 30%, we found that adding baseline parameters allowed the learning rule to explain 92% of the animals' policy updates under our model. Intriguingly, the best-fitting learning rates and baseline values indicate that an animal's policy update, at each trial, does not occur in the direction that maximizes expected reward. Understanding how an animal transitions from chance-level to high-accuracy performance when learning a new task not only provides neuroscientists with insight into their animals, but also provides concrete examples of biological learning algorithms to the machine learning community.
    Language English
    Publishing date 2022-01-05
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1012320-9
    ISSN 1049-5258
    Database MEDical Literature Analysis and Retrieval System OnLINE
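    The trial-to-trial policy updates this abstract compares can be sketched with a REINFORCE-style learning rule with a baseline, applied to a logistic choice policy (parameter names, values, and the task below are illustrative, not the paper's fitted quantities):

    ```python
    import numpy as np

    def policy(w, x):
        """Probability of choosing 'right' under a logistic choice policy."""
        return 1.0 / (1.0 + np.exp(-(w @ x)))

    def reinforce_update(w, x, choice, reward, alpha, baseline):
        """One trial of a REINFORCE-style policy-gradient update with a baseline.
        For a logistic policy, grad log pi(choice | x; w) = (choice - p) * x."""
        p = policy(w, x)
        grad_logp = (choice - p) * x
        return w + alpha * (reward - baseline) * grad_logp

    # Simulate learning a toy perceptual task: 'right' is correct iff stimulus > 0.
    rng = np.random.default_rng(1)
    w = np.zeros(2)                                 # [stimulus weight, bias]
    for _ in range(2000):
        x = np.array([rng.normal(), 1.0])           # stimulus + constant bias term
        choice = float(rng.random() < policy(w, x)) # sample a choice from the policy
        reward = float((choice == 1.0) == (x[0] > 0))
        w = reinforce_update(w, x, choice, reward, alpha=0.2, baseline=0.5)
    ```

    After training, the stimulus weight is positive, i.e. the simulated agent has acquired the task; the paper's framework instead runs this logic in reverse, inferring which update rule and learning rates best explain an animal's observed policy changes.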

  3. Book ; Online: Bayesian Active Learning for Discrete Latent Variable Models

    Jha, Aditi / Ashwood, Zoe C. / Pillow, Jonathan W.

    2022  

    Abstract Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing a novel framework for maximum-mutual-information input selection for discrete latent variable regression models. We first apply our method to a class of models known as "mixtures of linear regressions" (MLR). While it is well known that active learning confers no advantage for linear-Gaussian regression models, we use Fisher information to show analytically that active learning can nevertheless achieve large gains for mixtures of such models, and we validate this improvement using both simulations and real-world data. We then consider a powerful class of temporally structured latent variable models given by a Hidden Markov Model (HMM) with generalized linear model (GLM) observations, which has recently been used to identify discrete states from animal decision-making data. We show that our method substantially reduces the amount of data needed to fit GLM-HMMs and outperforms a variety of approximate methods based on variational and amortized inference. Infomax learning for latent variable models thus offers a powerful approach for characterizing temporally structured latent states, with a wide variety of applications in neuroscience and beyond.

    Comment: 38 pages (including references and an appendix), 7 figures in main text
    Keywords Computer Science - Machine Learning ; Quantitative Biology - Neurons and Cognition ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2022-02-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  4. Article ; Online: Mice alternate between discrete strategies during perceptual decision-making.

    Ashwood, Zoe C / Roy, Nicholas A / Stone, Iris R / Urai, Anne E / Churchland, Anne K / Pouget, Alexandre / Pillow, Jonathan W

    Nature neuroscience

    2022  Volume 25, Issue 2, Page(s) 201–212

    Abstract Classical models of perceptual decision-making assume that subjects use a single, consistent strategy to form decisions, or that decision-making strategies evolve slowly over time. Here we present new analyses suggesting that this common view is incorrect. We analyzed data from mouse and human decision-making experiments and found that choice behavior relies on an interplay among multiple interleaved strategies. These strategies, characterized by states in a hidden Markov model, persist for tens to hundreds of trials before switching, and often switch multiple times within a session. The identified decision-making strategies were highly consistent across mice and comprised a single 'engaged' state, in which decisions relied heavily on the sensory stimulus, and several biased states in which errors frequently occurred. These results provide a powerful alternate explanation for 'lapses' often observed in rodent behavioral experiments, and suggest that standard measures of performance mask the presence of major changes in strategy across trials.
    MeSH term(s) Animals ; Choice Behavior ; Decision Making ; Humans ; Mice
    Language English
    Publishing date 2022-02-07
    Publishing country United States
    Document type Journal Article ; Research Support, N.I.H., Extramural ; Research Support, Non-U.S. Gov't
    ZDB-ID 1420596-8
    ISSN (online) 1546-1726
    ISSN (print) 1097-6256
    DOI 10.1038/s41593-021-01007-z
    Database MEDical Literature Analysis and Retrieval System OnLINE
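    The hidden Markov model with per-state choice behavior described in this abstract can be illustrated with a minimal forward filter that tracks the posterior over discrete strategies trial by trial (the weights, transition matrix, and state labels below are made up for illustration; the paper fits such GLM-HMMs to real choice data):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def glm_hmm_filter(X, y, W, P, pi0):
        """Forward filtering for a GLM-HMM with Bernoulli (logistic) emissions.
        Returns p(z_t | y_1..t, X) for each trial. W: (K, D) per-state GLM
        weights; P: (K, K) state-transition matrix; pi0: (K,) initial dist."""
        T, K = len(y), len(pi0)
        alpha = np.zeros((T, K))
        for t in range(T):
            p_right = 1.0 / (1.0 + np.exp(-(W @ X[t])))  # per-state P(y=1 | x)
            lik = np.where(y[t] == 1, p_right, 1.0 - p_right)
            prior = pi0 if t == 0 else alpha[t - 1] @ P
            alpha[t] = prior * lik
            alpha[t] /= alpha[t].sum()                   # normalize each trial
        return alpha

    # Two hypothetical strategies: 'engaged' (driven by the stimulus) and
    # 'biased' (driven by the bias/offset term, ignoring the stimulus).
    W = np.array([[5.0, 0.0],      # engaged: strong stimulus weight
                  [0.0, 3.0]])     # biased: rightward bias
    P = np.array([[0.98, 0.02],
                  [0.02, 0.98]])   # sticky transitions -> long state dwell times
    pi0 = np.array([0.5, 0.5])

    # Simulate 200 trials generated entirely by the engaged state.
    X = np.column_stack([rng.normal(size=200), np.ones(200)])
    y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ W[0]))).astype(int)

    posterior = glm_hmm_filter(X, y, W, P, pi0)
    ```

    On data generated by the engaged state, the filtered posterior concentrates on that state; applied to real sessions, the same machinery is what reveals the tens-to-hundreds-of-trials state dwell times reported in the article.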

  5. Book ; Online: The false promise of simple information disclosure

    Ho, Daniel E / Ashwood, Zoe C / Handan-Nader, Cassandra

    new evidence on restaurant hygiene grading

    (Working paper / Stanford Institute for Economic Policy Research (SIEPR) ; no. 17, 043)

    2017  

    Author's details Daniel E. Ho, Zoe C. Ashwood, Cassandra Handan-Nader
    Series title Working paper / Stanford Institute for Economic Policy Research (SIEPR) ; no. 17, 043
    Language English
    Size 1 online resource (circa 80 pages), illustrations
    Publisher SIEPR, Stanford Institute for Economic Policy Research
    Publishing place Stanford, CA
    Document type Book ; Online
    Database ECONomics Information System

  6. Article ; Online: Opponent control of behavior by dorsomedial striatal pathways depends on task demands and internal state.

    Bolkan, Scott S / Stone, Iris R / Pinto, Lucas / Ashwood, Zoe C / Iravedra Garcia, Jorge M / Herman, Alison L / Singh, Priyanka / Bandi, Akhil / Cox, Julia / Zimmerman, Christopher A / Cho, Jounhong Ryan / Engelhard, Ben / Pillow, Jonathan W / Witten, Ilana B

    Nature neuroscience

    2022  Volume 25, Issue 3, Page(s) 345–357

    Abstract A classic view of the striatum holds that activity in direct and indirect pathways oppositely modulates motor output. Whether this involves direct control of movement, or reflects a cognitive process underlying movement, remains unresolved. Here we find that strong, opponent control of behavior by the two pathways of the dorsomedial striatum depends on the cognitive requirements of a task. Furthermore, a latent state model (a hidden Markov model with generalized linear model observations) reveals that-even within a single task-the contribution of the two pathways to behavior is state dependent. Specifically, the two pathways have large contributions in one of two states associated with a strategy of evidence accumulation, compared to a state associated with a strategy of repeating previous choices. Thus, both the demands imposed by a task, as well as the internal state of mice when performing a task, determine whether dorsomedial striatum pathways provide strong and opponent control of behavior.
    MeSH term(s) Animals ; Behavior, Animal ; Choice Behavior ; Corpus Striatum/metabolism ; Mice ; Movement ; Neostriatum
    Language English
    Publishing date 2022-03-07
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't ; Research Support, U.S. Gov't, Non-P.H.S. ; Research Support, N.I.H., Extramural
    ZDB-ID 1420596-8
    ISSN (online) 1546-1726
    ISSN (print) 1097-6256
    DOI 10.1038/s41593-022-01021-9
    Database MEDical Literature Analysis and Retrieval System OnLINE
