LIVIVO - The Search Portal for Life Sciences


Your last searches

  1. AU="Lajoie, Guillaume"

Search results

Results 1 - 10 of 54


  1. Article ; Online: Personalized inference for neurostimulation with meta-learning: a case study of vagus nerve stimulation.

    Mao, Ximeng / Chang, Yao-Chuan / Zanos, Stavros / Lajoie, Guillaume

    Journal of neural engineering

    2024  Volume 21, Issue 1

    Abstract Objective
    MeSH term(s) Humans ; Vagus Nerve Stimulation/methods ; Bayes Theorem ; Vagus Nerve/physiology ; Action Potentials ; Evoked Potentials
    Language English
    Publishing date 2024-01-12
    Publishing country England
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 2170901-4
    ISSN (online) 1741-2552
    ISSN 1741-2560
    DOI 10.1088/1741-2552/ad17f4
    Database MEDical Literature Analysis and Retrieval System OnLINE


  2. Article: Assistive sensory-motor perturbations influence learned neural representations.

    Rajeswaran, Pavithra / Payeur, Alexandre / Lajoie, Guillaume / Orsborn, Amy L

    bioRxiv : the preprint server for biology

    2024  

    Abstract Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
    Language English
    Publishing date 2024-03-20
    Publishing country United States
    Document type Preprint
    DOI 10.1101/2024.03.20.585972
    Database MEDical Literature Analysis and Retrieval System OnLINE


  3. Article ; Online: Gaussian-process-based Bayesian optimization for neurostimulation interventions in rats.

    Choinière, Léo / Guay-Hottin, Rose / Picard, Rémi / Lajoie, Guillaume / Bonizzato, Marco / Dancause, Numa

    STAR protocols

    2024  Volume 5, Issue 1, Page(s) 102885

    Abstract Effective neural stimulation requires adequate parametrization. Gaussian-process (GP)-based Bayesian optimization (BO) offers a framework to discover optimal stimulation parameters in real time. Here, we first provide a general protocol to deploy this framework in neurostimulation interventions and follow by exemplifying its use in detail. Specifically, we describe the steps to implant rats with multi-channel electrode arrays in the hindlimb motor cortex. We then detail how to utilize the GP-BO algorithm to maximize evoked target movements, measured as electromyographic responses. For complete details on the use and execution of this protocol, please refer to Bonizzato and colleagues (2023).
    MeSH term(s) Animals ; Rats ; Bayes Theorem ; Algorithms
    Language English
    Publishing date 2024-02-14
    Publishing country United States
    Document type Journal Article
    ISSN (online) 2666-1667
    DOI 10.1016/j.xpro.2024.102885
    Database MEDical Literature Analysis and Retrieval System OnLINE

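The GP-BO loop this protocol describes can be sketched in a few lines. The following is a minimal illustration, not the protocol's actual implementation: the 32-channel array, the toy evoked-response curve, the Matérn kernel, and the UCB acquisition constant are all illustrative assumptions, with scikit-learn standing in for the authors' algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Hypothetical setup: 32 cortical stimulation channels; the "true" evoked
# EMG response peaks at channel 20 (unknown to the optimizer).
channels = np.arange(32).reshape(-1, 1)
true_response = np.exp(-0.5 * ((channels.ravel() - 20) / 4.0) ** 2)

def measure(ch):
    """Simulate one noisy EMG measurement for a stimulation channel."""
    return true_response[ch] + rng.normal(0, 0.05)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.05 ** 2,
                              normalize_y=True)

# Seed with a few random queries, then iterate: fit the GP, stimulate the
# channel maximizing an upper-confidence-bound (UCB) acquisition, repeat.
X = [[int(ch)] for ch in rng.choice(32, size=3, replace=False)]
y = [measure(x[0]) for x in X]
for _ in range(30):
    gp.fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(channels, return_std=True)
    ch = int(np.argmax(mu + 2.0 * sigma))
    X.append([ch])
    y.append(measure(ch))

gp.fit(np.array(X), np.array(y))
best = int(np.argmax(gp.predict(channels)))
print("best channel:", best)
```

In the real protocol the `measure` step is an actual stimulation-and-EMG-recording trial; everything else in the loop is standard GP-BO.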

  4. Article ; Online: How connectivity structure shapes rich and lazy learning in neural circuits.

    Liu, Yuhan Helena / Baratin, Aristide / Cornford, Jonathan / Mihalas, Stefan / Shea-Brown, Eric / Lajoie, Guillaume

    ArXiv

    2024  

    Abstract In theoretical neuroscience, recent work leverages deep learning tools to explore how certain network attributes critically influence learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, where significant (resp. minor) changes to network states and representations are observed over the course of learning. However, in biology, neural circuit connectivity could exhibit a low-rank structure and therefore differs markedly from the random initializations generally used for these studies. As such, here we investigate how the structure of the initial weights, in particular their effective rank, influences the network learning regime. Through both empirical and theoretical analyses, we discover that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally-driven initial connectivity in recurrent neural networks. Conversely, low-rank initializations bias learning towards richer regimes. Importantly, however, as an exception to this rule, we find lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structures in shaping learning regimes, with implications for metabolic costs of plasticity and risks of catastrophic forgetting.
    Language English
    Publishing date 2024-02-19
    Publishing country United States
    Document type Preprint
    ISSN (online) 2331-8422
    Database MEDical Literature Analysis and Retrieval System OnLINE


  5. Book ; Online: Clarifying MCMC-based training of modern EBMs

    Gagnon, Léo / Lajoie, Guillaume

    Contrastive Divergence versus Maximum Likelihood

    2022  

    Abstract The Energy-Based Model (EBM) framework is a very general approach to generative modeling that tries to learn and exploit probability distributions defined only through unnormalized scores. It has risen in popularity recently thanks to the impressive results obtained in image generation by parameterizing the distribution with Convolutional Neural Networks (CNN). However, the motivation and theoretical foundations behind modern EBMs are often absent from recent papers, and this sometimes results in confusion. In particular, the theoretical justifications behind the popular MCMC-based learning algorithm Contrastive Divergence (CD) are often glossed over, and we find that this leads to theoretical errors in recent influential papers (Du & Mordatch, 2019; Du et al., 2020). After offering a first-principles introduction to MCMC-based training, we argue that the learning algorithm they use can in fact not be described as CD, and we reinterpret their methods in light of a new interpretation. Finally, we discuss the implications of our new interpretation and provide some illustrative experiments.

    Comment: This work was done as a final project in the class IFT 6269 (Probabilistic Graphical Models) given by Simon Lacoste-Julien in Fall 2021 at Mila
    Keywords Computer Science - Machine Learning
    Subject code 006
    Publishing date 2022-02-24
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

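To make the CD-versus-maximum-likelihood distinction concrete, here is a toy sketch of MCMC-based EBM training (my own illustration, not the paper's experiments): a one-parameter Gaussian energy model trained with short Langevin chains initialized at the data, which is the defining feature of Contrastive Divergence.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy EBM: E(x; theta) = 0.5 * theta * x**2, i.e. p(x) = N(0, 1/theta).
# Log-likelihood gradient w.r.t. theta: 0.5 * (E_model[x^2] - E_data[x^2]).
data = rng.normal(0.0, 0.5, size=2000)   # true precision theta = 1/0.25 = 4

def langevin_step(x, theta, eps=0.1):
    """One unadjusted Langevin step on the energy E(x; theta)."""
    return x - 0.5 * eps ** 2 * theta * x + eps * rng.normal(size=x.shape)

theta, lr = 1.0, 1.0
for _ in range(800):
    # CD-k: negative chains start AT the data and run only k short MCMC
    # steps (maximum likelihood would instead require well-mixed chains).
    neg = data.copy()
    for _ in range(10):
        neg = langevin_step(neg, theta)
    grad = 0.5 * (np.mean(neg ** 2) - np.mean(data ** 2))
    theta += lr * grad                   # gradient ascent on log-likelihood

print("estimated theta:", round(theta, 2))  # roughly recovers theta ~ 4
```

Because the chains are short and start at the data, this gradient is biased relative to true maximum likelihood, which is precisely the subtlety the paper argues recent works gloss over.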

  6. Article ; Online: Performance-gated deliberation: A context-adapted strategy in which urgency is opportunity cost.

    Puelma Touzel, Maximilian / Cisek, Paul / Lajoie, Guillaume

    PLoS computational biology

    2022  Volume 18, Issue 5, Page(s) e1010080

    Abstract Finding the right amount of deliberation, between insufficient and excessive, is a hard decision making problem that depends on the value we place on our time. Average-reward, putatively encoded by tonic dopamine, serves in existing reinforcement learning theory as the opportunity cost of time, including deliberation time. Importantly, this cost can itself vary with the environmental context and is not trivial to estimate. Here, we propose how the opportunity cost of deliberation can be estimated adaptively on multiple timescales to account for non-stationary contextual factors. We use it in a simple decision-making heuristic based on average-reward reinforcement learning (AR-RL) that we call Performance-Gated Deliberation (PGD). We propose PGD as a strategy used by animals wherein deliberation cost is implemented directly as urgency, a previously characterized neural signal effectively controlling the speed of the decision-making process. We show PGD outperforms AR-RL solutions in explaining behaviour and urgency of non-human primates in a context-varying random walk prediction task and is consistent with relative performance and urgency in a context-varying random dot motion task. We make readily testable predictions for both neural activity and behaviour.
    MeSH term(s) Animals ; Decision Making ; Dopamine ; Reinforcement, Psychology ; Reward ; Time Factors
    Chemical Substances Dopamine (VTD58H1Z2X)
    Language English
    Publishing date 2022-05-26
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 2193340-6
    ISSN (online) 1553-7358
    ISSN 1553-734X
    DOI 10.1371/journal.pcbi.1010080
    Database MEDical Literature Analysis and Retrieval System OnLINE

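The paper's core idea, that time spent deliberating is charged at the context's average reward rate, can be sketched with a deliberately simple model (the accuracy curve and reward rates below are hypothetical, not the paper's task or fitted parameters):

```python
import numpy as np

# Deliberating longer improves choice accuracy with diminishing returns,
# but every unit of time costs rho, the context's average reward rate
# (the opportunity cost of time).
t = np.linspace(0.01, 5.0, 500)
accuracy = 1.0 - 0.5 * np.exp(-1.5 * t)   # hypothetical accuracy curve

def best_deliberation(rho):
    """Deliberation time maximizing reward minus time's opportunity cost."""
    net_value = accuracy - rho * t
    return t[np.argmax(net_value)]

t_poor = best_deliberation(rho=0.05)   # poor context: time is cheap
t_rich = best_deliberation(rho=0.50)   # rich context: time is expensive
print(f"deliberate {t_poor:.2f}s in a poor context, {t_rich:.2f}s in a rich one")
```

A higher average reward rate makes deliberation more expensive, so the optimal policy commits earlier; PGD's contribution is estimating that cost adaptively across timescales and implementing it directly as urgency.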

  7. Book ; Online: Flexible Phase Dynamics for Bio-Plausible Contrastive Learning

    Williams, Ezekiel / Bredenberg, Colin / Lajoie, Guillaume

    2023  

    Abstract Many learning algorithms used as normative models in neuroscience or as candidate approaches for learning on neuromorphic chips learn by contrasting one set of network states with another. These Contrastive Learning (CL) algorithms are traditionally implemented with rigid, temporally non-local, and periodic learning dynamics that could limit the range of physical systems capable of harnessing CL. In this study, we build on recent work exploring how CL might be implemented by biological or neuromorphic systems and show that this form of learning can be made temporally local, and can still function even if many of the dynamical requirements of standard training procedures are relaxed. Thanks to a set of general theorems corroborated by numerical experiments across several CL models, our results provide theoretical foundations for the study and development of CL methods for biological and neuromorphic neural networks.

    Comment: 23 pages, 4 figures. Paper accepted to ICML and update includes changes made based on reviewer feedback
    Keywords Computer Science - Machine Learning ; Computer Science - Neural and Evolutionary Computing
    Subject code 006
    Publishing date 2023-02-23
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  8. Article ; Online: Connectome-based reservoir computing with the conn2res toolbox.

    Suárez, Laura E / Mihalik, Agoston / Milisav, Filip / Marshall, Kenji / Li, Mingze / Vértes, Petra E / Lajoie, Guillaume / Misic, Bratislav

    Nature communications

    2024  Volume 15, Issue 1, Page(s) 656

    Abstract The connection patterns of neural circuits form a complex network. How signaling in these circuits manifests as complex cognition and adaptive behaviour remains the central question in neuroscience. Concomitant advances in connectomics and artificial intelligence open fundamentally new opportunities to understand how connection patterns shape computational capacity in biological brain networks. Reservoir computing is a versatile paradigm that uses high-dimensional, nonlinear dynamical systems to perform computations and approximate cognitive functions. Here we present conn2res: an open-source Python toolbox for implementing biological neural networks as artificial neural networks. conn2res is modular, allowing arbitrary network architecture and dynamics to be imposed. The toolbox allows researchers to input connectomes reconstructed using multiple techniques, from tract tracing to noninvasive diffusion imaging, and to impose multiple dynamical systems, from spiking neurons to memristive dynamics. The versatility of the conn2res toolbox allows us to ask new questions at the confluence of neuroscience and artificial intelligence. By reconceptualizing function as computation, conn2res sets the stage for a more mechanistic understanding of structure-function relationships in brain networks.
    MeSH term(s) Artificial Intelligence ; Connectome ; Adaptation, Psychological ; Brain/diagnostic imaging ; Cognition
    Language English
    Publishing date 2024-01-22
    Publishing country England
    Document type Journal Article
    ZDB-ID 2553671-0
    ISSN (online) 2041-1723
    DOI 10.1038/s41467-024-44900-4
    Database MEDical Literature Analysis and Retrieval System OnLINE

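The reservoir-computing paradigm the toolbox builds on can be illustrated independently of conn2res itself. Below is a generic echo state network in plain NumPy, not the conn2res API; the network size, spectral radius, and delayed-recall task are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# A fixed random recurrent network (stand-in for a connectome) acts as a
# nonlinear reservoir; only a linear readout is trained.
N = 200
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius 0.9
w_in = rng.normal(0.0, 1.0, N)

u = rng.uniform(-1.0, 1.0, 1000)    # random input stream
target = np.roll(u, 5)              # task: recall the input from 5 steps ago

x = np.zeros(N)
states = []
for t in range(len(u)):
    x = np.tanh(W @ x + w_in * u[t])
    states.append(x.copy())
S = np.array(states)[100:]          # drop the initial transient
y = target[100:]

# Train the readout with ridge regression; the reservoir itself stays fixed.
ridge = 1e-6
w_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)
pred = S @ w_out
r2 = 1.0 - np.sum((pred - y) ** 2) / np.sum((y - y.mean()) ** 2)
print("delayed-recall R^2:", round(r2, 3))
```

conn2res follows the same recipe but, per the abstract, lets the recurrent weights come from empirical connectomes (tract tracing, diffusion imaging) and supports other local dynamics, from spiking neurons to memristive units.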

  9. Article ; Online: Sources of richness and ineffability for phenomenally conscious states.

    Ji, Xu / Elmoznino, Eric / Deane, George / Constant, Axel / Dumas, Guillaume / Lajoie, Guillaume / Simon, Jonathan / Bengio, Yoshua

    Neuroscience of consciousness

    2024  Volume 2024, Issue 1, Page(s) niae001

    Abstract Conscious states, states such that there is something it is like to be in them, seem both rich, or full of detail, and ineffable, or hard to fully describe or recall. The problem of ineffability, in particular, is a longstanding issue in philosophy that partly motivates the explanatory gap: the belief that consciousness cannot be reduced to underlying physical processes. Here, we provide an information-theoretic dynamical systems perspective on the richness and ineffability of consciousness. In our framework, the richness of conscious experience corresponds to the amount of information in a conscious state and ineffability corresponds to the amount of information lost at different stages of processing. We describe how attractor dynamics in working memory would induce impoverished recollections of our original experiences, how the discrete symbolic nature of language is insufficient for describing the rich and high-dimensional structure of experiences, and how similarity in the cognitive function of two individuals relates to improved communicability of their experiences to each other. While our model may not settle all questions relating to the explanatory gap, it makes progress toward a fully physicalist explanation of the richness and ineffability of conscious experience, two important aspects that seem to be part of what makes qualitative character so puzzling.
    Language English
    Publishing date 2024-03-01
    Publishing country England
    Document type Journal Article
    ZDB-ID 2815642-0
    ISSN (online) 2057-2107
    DOI 10.1093/nc/niae001
    Database MEDical Literature Analysis and Retrieval System OnLINE


  10. Book ; Online: Lazy vs hasty

    George, Thomas / Lajoie, Guillaume / Baratin, Aristide

    linearization in deep networks impacts learning schedule based on example difficulty

    2022  

    Abstract Among attempts at giving a theoretical account of the success of deep neural networks, a recent line of work has identified a so-called lazy training regime in which the network can be well approximated by its linearization around initialization. Here we investigate the comparative effect of the lazy (linear) and feature learning (non-linear) regimes on subgroups of examples based on their difficulty. Specifically, we show that easier examples are given more weight in feature learning mode, resulting in faster training compared to more difficult ones. In other words, the non-linear dynamics tends to sequentialize the learning of examples of increasing difficulty. We illustrate this phenomenon across different ways to quantify example difficulty, including c-score, label noise, and in the presence of easy-to-learn spurious correlations. Our results reveal a new understanding of how deep networks prioritize resources across example difficulty.

    Comment: 25 pages, 14 figures
    Keywords Computer Science - Machine Learning ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2022-09-19
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

