LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 10 of 32

  1. Book ; Online: Emergent neural computation from the interaction of different forms of plasticity

    Zenke, Friedemann / Gilson, Matthieu / Savin, Cristina

    2016  

    Abstract From the propagation of neural activity through synapses, to the integration of signals in the dendritic arbor, and the processes determining action potential generation, virtually all aspects of neural processing are plastic. This plasticity underlies the remarkable versatility and robustness of cortical circuits: it enables the brain to learn regularities in its sensory inputs, to remember the past, and to recover function after injury. While much of the research into learning and memory has focused on forms of Hebbian plasticity at excitatory synapses (LTD/LTP, STDP), several other plasticity mechanisms have been characterized experimentally, including the plasticity of inhibitory circuits (Kullmann, 2012), synaptic scaling (Turrigiano, 2011) and intrinsic plasticity (Zhang and Linden, 2003). However, our current understanding of the computational roles of these plasticity mechanisms remains rudimentary at best. While traditionally they are assumed to serve a homeostatic purpose, counterbalancing the destabilizing effects of Hebbian learning, recent work suggests that they can have a profound impact on circuit function (Savin 2010, Vogels 2011, Keck 2012). Hence, theoretical investigation into the functional implications of these mechanisms may shed new light on the computational principles at work in neural circuits. This Research Topic of Frontiers in Computational Neuroscience aims to bring together recent advances in theoretical modeling of different plasticity mechanisms and of their contributions to circuit function. Topics of interest include the computational roles of plasticity of inhibitory circuitry, metaplasticity, synaptic scaling, intrinsic plasticity, plasticity within the dendritic arbor and in particular studies on the interplay between homeostatic and Hebbian plasticity, and their joint contribution to network function.
    Keywords Neurosciences. Biological psychiatry. Neuropsychiatry ; Science (General)
    Size 1 electronic resource (193 p.)
    Publisher Frontiers Media SA
    Document type Book ; Online
    Note English ; Open Access
    HBZ-ID HT020091211
    ISBN 9782889197880 ; 2889197883
    Database ZB MED Catalogue: Medicine, Health, Nutrition, Environment, Agriculture


  2. Article ; Online: The combination of Hebbian and predictive plasticity learns invariant object representations in deep sensory networks.

    Halvagal, Manu Srinath / Zenke, Friedemann

    Nature neuroscience

    2023  Volume 26, Issue 11, Page(s) 1906–1915

    Abstract Recognition of objects from sensory stimuli is essential for survival. To that end, sensory networks in the brain must form object representations invariant to stimulus changes, such as size, orientation and context. Although Hebbian plasticity is known to shape sensory networks, it fails to create invariant object representations in computational models, raising the question of how the brain achieves such processing. In the present study, we show that combining Hebbian plasticity with a predictive form of plasticity leads to invariant representations in deep neural network models. We derive a local learning rule that generalizes to spiking neural networks and naturally accounts for several experimentally observed properties of synaptic plasticity, including metaplasticity and spike-timing-dependent plasticity. Finally, our model accurately captures neuronal selectivity changes observed in the primate inferotemporal cortex in response to altered visual experience. Thus, we provide a plausible normative theory emphasizing the importance of predictive plasticity mechanisms for successful representational learning.
    MeSH term(s) Animals ; Learning/physiology ; Primates ; Brain/physiology ; Neural Networks, Computer ; Neurons/physiology ; Neuronal Plasticity/physiology ; Models, Neurological
    Language English
    Publishing date 2023-10-12
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1420596-8
    ISSN (online) 1546-1726
    ISSN (print) 1097-6256
    DOI 10.1038/s41593-023-01460-y
    Database MEDical Literature Analysis and Retrieval System OnLINE
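
    As a rough illustration of the kind of local rule discussed in this abstract, the Python sketch below combines a predictive term (pulling a neuron's response to the current stimulus view toward its response to the previous, temporally contiguous view) with a Hebbian term that keeps responses from collapsing. The functional form, constants, and variable names are illustrative assumptions, not the rule derived in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out = 20, 5
    W = rng.normal(scale=0.1, size=(n_out, n_in))

    eta = 1e-3     # learning rate (illustrative)
    alpha = 1.0    # weight of the Hebbian term (illustrative)

    def update(W, x_prev, x_curr):
        """One toy update: a predictive term makes the current response resemble
        the previous one (as for consecutive views of the same object), while a
        Hebbian term keeps responses from collapsing to zero. This is a cartoon of
        the general idea, not the published learning rule."""
        y_prev = W @ x_prev
        y_curr = W @ x_curr
        pred_err = y_prev - y_curr                    # mismatch between consecutive views
        dW = eta * np.outer(pred_err + alpha * y_curr, x_curr)
        return W + dW

    # two noisy "views" of the same input pattern
    x = rng.normal(size=n_in)
    W = update(W, x + 0.05 * rng.normal(size=n_in), x + 0.05 * rng.normal(size=n_in))
    ```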


  3. Book ; Online: Dis-inhibitory neuronal circuits can control the sign of synaptic plasticity

    Rossbroich, Julian / Zenke, Friedemann

    2023  

    Abstract How neuronal circuits achieve credit assignment remains a central unsolved question in systems neuroscience. Various studies have suggested plausible solutions for back-propagating error signals through multi-layer networks. These purely functionally motivated models assume distinct neuronal compartments to represent local error signals that determine the sign of synaptic plasticity. However, this explicit error modulation is inconsistent with phenomenological plasticity models in which the sign depends primarily on postsynaptic activity. Here we show how a plausible microcircuit model and Hebbian learning rule derived within an adaptive control theory framework can resolve this discrepancy. Assuming errors are encoded in top-down dis-inhibitory synaptic afferents, we show that error-modulated learning emerges naturally at the circuit level when recurrent inhibition explicitly influences Hebbian plasticity. The same learning rule accounts for experimentally observed plasticity in the absence of inhibition and performs comparably to back-propagation of error (BP) on several non-linearly separable benchmarks. Our findings bridge the gap between functional and experimentally observed plasticity rules and make concrete predictions on inhibitory modulation of excitatory plasticity.

    Comment: Accepted at NeurIPS 2023; fixed error in Figure S2
    Keywords Quantitative Biology - Neurons and Cognition ; Computer Science - Machine Learning ; Computer Science - Neural and Evolutionary Computing
    Subject code 501
    Publishing date 2023-10-30
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
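
    The toy Python sketch below illustrates only the circuit-level intuition from the abstract: a Hebbian update whose sign is gated by how strongly the postsynaptic cell is inhibited relative to a baseline, so that top-down dis-inhibition acts like a positive error signal. It is a cartoon of the idea, not the learning rule derived in the paper.

    ```python
    import numpy as np

    def gated_hebbian_update(pre, post, inhibition, baseline_inh, lr=1e-3):
        """Toy inhibition-gated Hebbian rule: for a coactive pre/post pair the sign
        of the update flips with the recurrent inhibition the postsynaptic cell
        receives (dis-inhibition -> potentiation, strong inhibition -> depression).
        Illustrative only; not the rule derived within the adaptive control framework."""
        error_like = baseline_inh - inhibition          # dis-inhibition acts like a positive error
        return lr * np.outer(error_like * post, pre)

    rng = np.random.default_rng(2)
    pre, post = rng.random(10), rng.random(4)
    dW_pot = gated_hebbian_update(pre, post, inhibition=np.full(4, 0.2), baseline_inh=0.5)
    dW_dep = gated_hebbian_update(pre, post, inhibition=np.full(4, 0.9), baseline_inh=0.5)
    print(dW_pot.mean() > 0, dW_dep.mean() < 0)         # potentiation vs. depression
    ```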


  4. Article ; Online: Nonlinear transient amplification in recurrent neural networks with short-term plasticity.

    Wu, Yue Kris / Zenke, Friedemann

    eLife

    2021  Volume 10

    Abstract To rapidly process information, neural circuits have to amplify specific activity patterns transiently. How the brain performs this nonlinear operation remains elusive. Hebbian assemblies are one possibility whereby strong recurrent excitatory connections boost neuronal activity. However, such Hebbian amplification is often associated with dynamical slowing of network dynamics, non-transient attractor states, and pathological run-away activity. Feedback inhibition can alleviate these effects but typically linearizes responses and reduces amplification gain. Here, we study nonlinear transient amplification (NTA), a plausible alternative mechanism that reconciles strong recurrent excitation with rapid amplification while avoiding the above issues. NTA has two distinct temporal phases. Initially, positive feedback excitation selectively amplifies inputs that exceed a critical threshold. Subsequently, short-term plasticity quenches the run-away dynamics into an inhibition-stabilized network state. By characterizing NTA in supralinear network models, we establish that the resulting onset transients are stimulus selective and well-suited for speedy information processing. Further, we find that excitatory-inhibitory co-tuning widens the parameter regime in which NTA is possible in the absence of persistent activity. In summary, NTA provides a parsimonious explanation for how excitatory-inhibitory co-tuning and short-term plasticity collaborate in recurrent networks to achieve transient amplification.
    MeSH term(s) Action Potentials ; Computer Simulation ; Humans ; Models, Neurological ; Nerve Net/physiology ; Neuronal Plasticity ; Neurons/physiology ; Synapses/physiology
    Language English
    Publishing date 2021-12-13
    Publishing country England
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 2687154-3
    ISSN (online) 2050-084X
    DOI 10.7554/eLife.71263
    Database MEDical Literature Analysis and Retrieval System OnLINE
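
    A toy rate-model sketch of the two phases described in the abstract: supralinear amplification of a suprathreshold input, followed by quenching through short-term depression. All parameter values, the power-law gain, and the rate cap are illustrative choices for a readable simulation, not the paper's model settings.

    ```python
    import numpy as np

    def f(x, k=0.3):
        """Supralinear (rectified power-law) transfer function."""
        return k * np.maximum(x, 0.0) ** 2

    dt = 1e-3
    tau_e, tau_i = 20e-3, 10e-3          # rate time constants (s)
    tau_d, U = 200e-3, 0.2               # depression recovery time constant and release fraction
    J_ee, J_ei, J_ie, J_ii = 1.6, 1.2, 1.0, 0.8

    r_e, r_i, x = 0.0, 0.0, 1.0          # E/I rates and available synaptic resources
    trace = []
    for step in range(1000):             # 1 s of simulated time
        stim = 3.0 if 0.2 <= step * dt < 0.5 else 0.0      # transient suprathreshold input
        r_e += dt / tau_e * (-r_e + f(J_ee * x * r_e - J_ei * r_i + stim))
        r_i += dt / tau_i * (-r_i + f(J_ie * r_e - J_ii * r_i + stim))
        x   += dt * ((1.0 - x) / tau_d - U * x * r_e)       # resources depleted by E firing
        r_e, r_i = min(r_e, 500.0), min(r_i, 500.0)         # crude cap to keep the toy Euler scheme bounded
        trace.append(r_e)

    print(max(trace), trace[-1])         # large onset transient, later quenched by depression
    ```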


  5. Article ; Online: The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks.

    Zenke, Friedemann / Vogels, Tim P

    Neural computation

    2021  Volume 33, Issue 4, Page(s) 899–925

    Abstract Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform. Yet how network connectivity relates to function is poorly understood, and the functional capabilities of models of spiking networks are still rudimentary. The lack of both theoretical insight and practical algorithms to find the necessary connectivity poses a major impediment to both studying information processing in the brain and building efficient neuromorphic hardware systems. The training algorithms that solve this problem for artificial neural networks typically rely on gradient descent. But doing so in spiking networks has remained challenging due to the nondifferentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients affect learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative's scale can substantially affect learning performance. When we combine surrogate gradients with suitable activity regularization techniques, spiking networks perform robust information processing at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.
    MeSH term(s) Algorithms ; Brain ; Learning ; Neural Networks, Computer ; Neurons
    Language English
    Publishing date 2021-01-29
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 1025692-1
    ISSN (online) 1530-888X
    ISSN (print) 0899-7667
    DOI 10.1162/neco_a_01367
    Database MEDical Literature Analysis and Retrieval System OnLINE
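
    A minimal PyTorch sketch of the surrogate-gradient idea studied here: a step-function spike nonlinearity in the forward pass paired with a smooth surrogate derivative in the backward pass. The fast-sigmoid shape and its scale parameter beta are example choices among the design parameters the paper compares, not prescribed settings.

    ```python
    import torch

    class SurrGradSpike(torch.autograd.Function):
        """Heaviside spike nonlinearity with a fast-sigmoid surrogate derivative
        on the backward pass; beta sets the surrogate's scale."""
        beta = 10.0

        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0).float()                    # spikes: non-differentiable step

        @staticmethod
        def backward(ctx, grad_output):
            (v,) = ctx.saved_tensors
            surrogate = 1.0 / (SurrGradSpike.beta * v.abs() + 1.0) ** 2
            return grad_output * surrogate

    spike_fn = SurrGradSpike.apply

    # usage: membrane potentials -> spikes; gradients flow through the surrogate
    v = torch.randn(4, requires_grad=True)
    spikes = spike_fn(v)
    spikes.sum().backward()
    print(v.grad)
    ```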


  6. Article ; Online: The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks.

    Cramer, Benjamin / Stradmann, Yannik / Schemmel, Johannes / Zenke, Friedemann

    IEEE transactions on neural networks and learning systems

    2022  Volume 33, Issue 7, Page(s) 2744–2757

    Abstract Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in-silico. These methods hold the promise to build more efficient non-von-Neumann computing hardware and will offer new vistas in the quest of unraveling brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification data sets, broadly applicable to benchmark both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. Furthermore, we applied this conversion to an existing and a novel speech data set. The latter is the free, high-fidelity, and word-level aligned Heidelberg digit data set that we created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these data sets is essential for good classification accuracy. These results serve as the first reference for future performance comparisons of spiking neural networks.
    MeSH term(s) Brain/physiology ; Computers ; Neural Networks, Computer ; Software
    Language English
    Publishing date 2022-07-06
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2020.3044364
    Database MEDical Literature Analysis and Retrieval System OnLINE
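
    A small Python sketch of a typical preprocessing step for spike-based classification data sets such as those introduced here: binning an event list of spike times and unit indices into a dense time-by-channel array before feeding it to a spiking or conventional classifier. The field names and the 700-channel figure follow the Heidelberg digits description, but the exact file format is an assumption to be checked against the data set documentation.

    ```python
    import numpy as np

    def bin_spikes(times, units, n_units, t_max=1.0, n_bins=100):
        """Convert an event list (spike times in seconds, firing unit indices) into a
        dense [n_bins, n_units] spike-count array."""
        dense = np.zeros((n_bins, n_units), dtype=np.float32)
        bins = np.clip((np.asarray(times) / t_max * n_bins).astype(int), 0, n_bins - 1)
        np.add.at(dense, (bins, np.asarray(units)), 1.0)
        return dense

    # toy example: three spikes from two of 700 input channels
    x = bin_spikes(times=[0.01, 0.50, 0.99], units=[0, 1, 0], n_units=700)
    print(x.shape, x.sum())
    ```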


  7. Book ; Online: Holomorphic Equilibrium Propagation Computes Exact Gradients Through Finite Size Oscillations

    Laborieux, Axel / Zenke, Friedemann

    2022  

    Abstract Equilibrium propagation (EP) is an alternative to backpropagation (BP) that allows the training of deep neural networks with local learning rules. It thus provides a compelling framework for training neuromorphic systems and understanding learning in neurobiology. However, EP requires infinitesimal teaching signals, thereby limiting its applicability in noisy physical systems. Moreover, the algorithm requires separate temporal phases and has not been applied to large-scale problems. Here we address these issues by extending EP to holomorphic networks. We show analytically that this extension naturally leads to exact gradients even for finite-amplitude teaching signals. Importantly, the gradient can be computed as the first Fourier coefficient from finite neuronal activity oscillations in continuous time without requiring separate phases. Further, we demonstrate in numerical simulations that our approach permits robust estimation of gradients in the presence of noise and that deeper models benefit from the finite teaching signals. Finally, we establish the first benchmark for EP on the ImageNet 32x32 dataset and show that it matches the performance of an equivalent network trained with BP. Our work provides analytical insights that enable scaling EP to large-scale problems and establishes a formal framework for how oscillations could support learning in biological and neuromorphic systems.

    Comment: 24 pages, 7 figures
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Neural and Evolutionary Computing
    Subject code 006
    Publishing date 2022-09-01
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
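
    A short numerical sketch of the Fourier step mentioned in the abstract: estimating the first Fourier coefficient of an oscillating signal over an integer number of driving periods. It shows only the coefficient extraction, not how holomorphic EP turns such coefficients computed from neuronal oscillations into weight gradients.

    ```python
    import numpy as np

    def first_fourier_coefficient(signal, dt, omega):
        """Estimate (2/T) * integral of s(t) * exp(-i*omega*t) dt over the recording,
        assumed to span an integer number of periods of the drive at frequency omega."""
        t = np.arange(len(signal)) * dt
        T = t[-1] + dt
        return 2.0 / T * np.sum(signal * np.exp(-1j * omega * t)) * dt

    # check: a pure oscillation a*cos(omega*t + phi) is recovered as a*exp(i*phi)
    omega = 2 * np.pi * 5.0
    dt = 1e-4
    t = np.arange(0, 1.0, dt)
    s = 0.3 * np.cos(omega * t + 0.7)
    print(first_fourier_coefficient(s, dt, omega))   # approx 0.3 * exp(0.7j)
    ```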


  8. Article ; Online: SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

    Zenke, Friedemann / Ganguli, Surya

    Neural computation

    2018  Volume 30, Issue 6, Page(s) 1514–1541

    Abstract A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
    Language English
    Publishing date 2018-04-13
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't ; Research Support, U.S. Gov't, Non-P.H.S.
    ZDB-ID 1025692-1
    ISSN (online) 1530-888X
    ISSN (print) 0899-7667
    DOI 10.1162/neco_a_01086
    Database MEDical Literature Analysis and Retrieval System OnLINE
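
    A toy Python sketch in the spirit of the three-factor rule described above: a surrogate derivative of the postsynaptic voltage multiplied by filtered presynaptic activity forms an eligibility trace, which an error signal then gates. Filter shapes, constants, and variable names are simplified assumptions rather than the paper's exact formulation.

    ```python
    import numpy as np

    def surrogate_deriv(u, beta=1.0):
        """Fast-sigmoid-style surrogate derivative of the spike nonlinearity."""
        return 1.0 / (1.0 + beta * np.abs(u)) ** 2

    def three_factor_update(pre_trace, post_u, error, dt=1e-3, tau=10e-3, lr=1e-3):
        """Toy SuperSpike-like update for a single synapse: a Hebbian factor
        (surrogate derivative of the postsynaptic voltage x filtered presynaptic
        activity) is low-pass filtered into an eligibility trace and gated by the
        error signal fed back from the output."""
        elig, dw = 0.0, 0.0
        for t in range(len(error)):
            hebbian = surrogate_deriv(post_u[t]) * pre_trace[t]   # local Hebbian factor
            elig += dt / tau * (-elig + hebbian)                  # eligibility trace
            dw += lr * error[t] * elig * dt                       # gated by the third (error) factor
        return dw

    rng = np.random.default_rng(1)
    T = 500
    dw = three_factor_update(pre_trace=rng.random(T), post_u=rng.normal(size=T),
                             error=rng.normal(size=T))
    print(dw)
    ```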


  9. Book ; Online: Implicit variance regularization in non-contrastive SSL

    Halvagal, Manu Srinath / Laborieux, Axel / Zenke, Friedemann

    2022  

    Abstract Non-contrastive SSL methods like BYOL and SimSiam rely on asymmetric predictor networks to avoid representational collapse without negative samples. Yet, how predictor networks facilitate stable learning is not fully understood. While previous theoretical analyses assumed Euclidean losses, most practical implementations rely on cosine similarity. To gain further theoretical insight into non-contrastive SSL, we analytically study learning dynamics in conjunction with Euclidean and cosine similarity in the eigenspace of closed-form linear predictor networks. We show that both avoid collapse through implicit variance regularization albeit through different dynamical mechanisms. Moreover, we find that the eigenvalues act as effective learning rate multipliers and propose a family of isotropic loss functions (IsoLoss) that equalize convergence rates across eigenmodes. Empirically, IsoLoss speeds up the initial learning dynamics and increases robustness, thereby allowing us to dispense with the EMA target network typically used with non-contrastive methods. Our analysis sheds light on the variance regularization mechanisms of non-contrastive SSL and lays the theoretical grounds for crafting novel loss functions that shape the learning dynamics of the predictor's spectrum.

    Comment: Accepted at NeurIPS 2023
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Neural and Evolutionary Computing
    Subject code 006
    Publishing date 2022-12-09
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
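
    A minimal PyTorch sketch of the non-contrastive setup analyzed here: two augmented views, a predictor network, stop-gradient on the target branch, and the cosine-similarity loss most practical implementations use. The linear encoder and predictor and all dimensions are illustrative simplifications; this is not the paper's IsoLoss.

    ```python
    import torch
    import torch.nn.functional as F

    d = 16
    encoder = torch.nn.Linear(32, d)                 # stand-in for a real backbone
    predictor = torch.nn.Linear(d, d, bias=False)    # asymmetric predictor network

    x = torch.randn(8, 32)
    x1 = x + 0.1 * torch.randn_like(x)               # two augmented views of the same inputs
    x2 = x + 0.1 * torch.randn_like(x)

    z1, z2 = encoder(x1), encoder(x2)
    p1, p2 = predictor(z1), predictor(z2)

    def neg_cosine(p, z):
        # stop-gradient on the target branch, as in SimSiam-style objectives
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

    loss = 0.5 * (neg_cosine(p1, z2) + neg_cosine(p2, z1))
    loss.backward()
    ```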


  10. Book ; Online: Fluctuation-driven initialization for spiking neural network training

    Rossbroich, Julian / Gygax, Julia / Zenke, Friedemann

    2022  

    Abstract Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain and could constitute a power-efficient alternative to conventional deep neural networks when implemented on suitable neuromorphic hardware accelerators. However, instantiating SNNs that solve complex computational tasks in-silico remains a significant challenge. Surrogate gradient (SG) techniques have emerged as a standard solution for training SNNs end-to-end. Still, their success depends on synaptic weight initialization, similar to conventional artificial neural networks (ANNs). Yet, unlike in the case of ANNs, it remains elusive what constitutes a good initial state for an SNN. Here, we develop a general initialization strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain. Specifically, we derive practical solutions for data-dependent weight initialization that ensure fluctuation-driven firing in the widely used leaky integrate-and-fire (LIF) neurons. We empirically show that SNNs initialized following our strategy exhibit superior learning performance when trained with SGs. These findings generalize across several datasets and SNN architectures, including fully connected, deep convolutional, recurrent, and more biologically plausible SNNs obeying Dale's law. Thus fluctuation-driven initialization provides a practical, versatile, and easy-to-implement strategy for improving SNN training performance on diverse tasks in neuromorphic engineering and computational neuroscience.

    Comment: 30 pages, 7 figures, plus supplementary material
    Keywords Computer Science - Neural and Evolutionary Computing ; Quantitative Biology - Neurons and Cognition
    Subject code 006
    Publishing date 2022-06-21
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
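
    A toy Python sketch of a data-dependent, fluctuation-driven weight initialization: choose the weight scale so that an estimate of the LIF membrane-potential standard deviation is a fixed fraction of the firing threshold. The shot-noise variance estimate and all parameter values are simplifying assumptions, not the paper's derived expressions.

    ```python
    import numpy as np

    def fluct_driven_init(n_in, n_out, nu_in, theta=1.0, sigma_target=0.5,
                          tau_mem=10e-3, tau_syn=5e-3, epsilon=1.0, rng=None):
        """Pick the weight scale so that the predicted membrane-potential standard
        deviation is sigma_target * theta, keeping LIF units in a fluctuation-driven
        regime. Uses a crude shot-noise approximation
        var(U) ~ n_in * epsilon * nu_in * w^2 * tau_eff."""
        rng = rng or np.random.default_rng()
        tau_eff = tau_mem * tau_syn / (tau_mem + tau_syn)   # effective integration time
        var_per_unit_w = n_in * epsilon * nu_in * tau_eff   # variance contributed per w^2
        w_scale = sigma_target * theta / np.sqrt(var_per_unit_w)
        return rng.normal(scale=w_scale, size=(n_out, n_in))

    # e.g. 700 input channels firing at ~10 Hz on average, 128 hidden LIF units
    W = fluct_driven_init(n_in=700, n_out=128, nu_in=10.0)
    print(W.std())
    ```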

