LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-10 of 71

  1. Article: Closing the loop: High-speed robotics with accelerated neuromorphic hardware.

    Stradmann, Yannik / Schemmel, Johannes

    Frontiers in neuroscience

    2024  Volume 18, Page(s) 1360122

    Abstract The BrainScaleS-2 system is an established analog neuromorphic platform with versatile applications in the diverse fields of computational neuroscience and spike-based machine learning. In this work, we extend the system with a configurable real-time event interface that enables a tight coupling of its distinct analog network core to external sensors and actuators. The 1,000-fold acceleration of the emulated nerve cells allows us to target high-speed robotic applications that require precise timing on a microsecond scale. As a showcase, we present a closed-loop setup for commutating brushless DC motors: we utilize PyTorch to train a spiking neural network emulated on the analog substrate to control an electric motor from a sensory event stream. The presented system enables research in the area of event-driven controllers for high-speed robotics, including self-supervised and biologically inspired online learning for such applications.
    Language English
    Publishing date 2024-03-26
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2411902-7
    ISSN (online) 1662-453X
    ISSN (print) 1662-4548
    DOI 10.3389/fnins.2024.1360122
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

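    Code sketch: the record above describes training a spiking network in PyTorch to drive a motor from a sensory event stream. The following is a minimal, hypothetical sketch of such a closed training loop in plain PyTorch; the small dense controller, the toy first-order motor model, and all parameter values are illustrative stand-ins, not the authors' BrainScaleS-2 setup.

      import torch

      # Hypothetical stand-ins: a small dense network replaces the SNN emulated on
      # BrainScaleS-2, and a first-order motor model replaces the physical plant.
      controller = torch.nn.Sequential(
          torch.nn.Linear(3, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
      optimizer = torch.optim.Adam(controller.parameters(), lr=1e-2)

      dt, target = 0.05, 1.0                  # coarse control step, target velocity
      for epoch in range(200):
          velocity = torch.zeros(1)
          loss = torch.zeros(())
          for step in range(100):
              # "sensor input": a feature vector derived from the current state
              sensor = torch.cat([velocity, target - velocity, torch.ones(1)])
              command = controller(sensor)
              # toy plant: the motor velocity relaxes toward the commanded value
              velocity = velocity + dt * (command - velocity)
              loss = loss + (velocity - target).pow(2).sum()
          optimizer.zero_grad()
          loss.backward()                     # backpropagation through the closed loop
          optimizer.step()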

  2. Article ; Online: The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks.

    Cramer, Benjamin / Stradmann, Yannik / Schemmel, Johannes / Zenke, Friedemann

    IEEE transactions on neural networks and learning systems

    2022  Volume 33, Issue 7, Page(s) 2744–2757

    Abstract Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in-silico. These methods hold the promise to build more efficient non-von-Neumann computing hardware and will offer new vistas in the quest of unraveling brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification data sets, broadly applicable to benchmark both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. Furthermore, we applied this conversion to an existing and a novel speech data set. The latter is the free, high-fidelity, and word-level aligned Heidelberg digit data set that we created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these data sets is essential for good classification accuracy. These results serve as the first reference for future performance comparisons of spiking neural networks.
    MeSH term(s) Brain/physiology ; Computers ; Neural Networks, Computer ; Software
    Language English
    Publishing date 2022-07-06
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2020.3044364
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

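    Code sketch: the data sets described above are distributed as HDF5 files containing per-sample spike times, input-channel indices, and labels. The snippet below is a minimal reading and rasterization example; the file name, the 700 input channels, and the 1 s time window are illustrative assumptions, not guaranteed properties of the published files.

      import h5py
      import numpy as np

      with h5py.File("shd_train.h5", "r") as f:        # file name is an assumption
          times = f["spikes"]["times"][:]   # per-sample arrays of spike times [s]
          units = f["spikes"]["units"][:]   # per-sample arrays of input channel ids
          labels = f["labels"][:]           # spoken-digit class per sample

      # Rasterize sample 0 into a dense (time_bins, channels) array, the kind of
      # tensor both conventional and spiking classifiers can consume.
      n_bins, n_channels, t_max = 100, 700, 1.0
      raster = np.zeros((n_bins, n_channels), dtype=np.float32)
      bin_idx = np.clip((times[0] / t_max * n_bins).astype(int), 0, n_bins - 1)
      np.add.at(raster, (bin_idx, units[0]), 1.0)       # accumulate spike counts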

  3. Book ; Online: jaxsnn

    Müller, Eric / Althaus, Moritz / Arnold, Elias / Spilger, Philipp / Pehle, Christian / Schemmel, Johannes

    Event-driven Gradient Estimation for Analog Neuromorphic Hardware

    2024  

    Abstract Traditional neuromorphic hardware architectures rely on event-driven computation, where the asynchronous transmission of events, such as spikes, triggers local computations within synapses and neurons. While machine learning frameworks are commonly used for gradient-based training, their emphasis on dense data structures poses challenges for processing asynchronous data such as spike trains. This problem is particularly pronounced for typical tensor data structures. In this context, we present a novel library (jaxsnn) built on top of JAX, that departs from conventional machine learning frameworks by providing flexibility in the data structures used and the handling of time, while maintaining Autograd functionality and composability. Our library facilitates the simulation of spiking neural networks and gradient estimation, with a focus on compatibility with time-continuous neuromorphic backends, such as the BrainScaleS-2 system, during the forward pass. This approach opens avenues for more efficient and flexible training of spiking neural networks, bridging the gap between traditional neuromorphic architectures and contemporary machine learning frameworks.
    Keywords Computer Science - Neural and Evolutionary Computing
    Subject code 006
    Publishing date 2024-01-30
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

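    Code sketch: jaxsnn itself provides event-driven gradient estimation, and the toy below is not its API. It only illustrates the enabler named in the abstract, namely that JAX composes a neuron simulation (here a dense, time-stepped leaky integrate-and-fire membrane via lax.scan) with Autograd, so that gradients of a loss with respect to the input weights fall out automatically.

      import jax
      import jax.numpy as jnp

      dt, tau_mem, tau_syn = 1e-4, 1e-2, 5e-3   # time step and neuron time constants

      def simulate(weights, input_spikes):
          """Membrane trace of one LIF neuron driven by binary input spikes (T, n_in)."""
          def step(state, spikes_t):
              v, i = state
              i = i + dt * (-i / tau_syn) + weights @ spikes_t   # synaptic current
              v = v + dt * (i - v) / tau_mem                     # membrane voltage
              return (v, i), v
          _, v_trace = jax.lax.scan(step, (jnp.zeros(()), jnp.zeros(())), input_spikes)
          return v_trace

      def loss(weights, input_spikes):
          return -jnp.max(simulate(weights, input_spikes))       # push the peak voltage up

      k_w, k_s = jax.random.split(jax.random.PRNGKey(0))
      weights = 0.1 * jax.random.normal(k_w, (5,))
      spikes = (jax.random.uniform(k_s, (200, 5)) < 0.05).astype(jnp.float32)
      grad_w = jax.grad(loss)(weights, spikes)                   # Autograd through the scan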

  4. Book ; Online: Event-based Backpropagation for Analog Neuromorphic Hardware

    Pehle, Christian / Blessing, Luca / Arnold, Elias / Müller, Eric / Schemmel, Johannes

    2023  

    Abstract Neuromorphic computing aims to incorporate lessons from studying biological nervous systems in the design of computer architectures. While existing approaches have successfully implemented aspects of those computational principles, such as sparse spike-based computation, event-based scalable learning has remained an elusive goal in large-scale systems. However, only then can the potential energy-efficiency advantages of neuromorphic systems relative to other hardware architectures be realized during learning. We present our progress implementing the EventProp algorithm using the example of the BrainScaleS-2 analog neuromorphic hardware. Previous gradient-based approaches to learning used "surrogate gradients" and dense sampling of observables or were limited by assumptions on the underlying dynamics and loss functions. In contrast, our approach only needs spike time observations from the system while being able to incorporate other system observables, such as membrane voltage measurements, in a principled way. This leads to a one-order-of-magnitude improvement in the information efficiency of the gradient estimate, which would directly translate to corresponding energy efficiency improvements in an optimized hardware implementation. We present the theoretical framework for estimating gradients and results verifying the correctness of the estimation, as well as results on a low-dimensional classification task using the BrainScaleS-2 system. Building on this work has the potential to enable scalable gradient estimation in large-scale neuromorphic hardware, as continuous measurement of the system state would be prohibitive and energy-inefficient in such instances. It also suggests the feasibility of a full on-device implementation of the algorithm that would enable scalable, energy-efficient, event-based learning in large-scale analog neuromorphic hardware.
    Keywords Quantitative Biology - Neurons and Cognition ; Computer Science - Neural and Evolutionary Computing
    Subject code 006
    Publishing date 2023-02-13
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

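    The gradient principle sketched in the abstract can be stated schematically (this is the generic chain rule over spike times, not the exact EventProp adjoint equations): if a loss L depends on a weight w only through the emitted spike times t_k, then, in LaTeX notation,

      \frac{\mathrm{d}L}{\mathrm{d}w} \;=\; \sum_{k} \frac{\partial L}{\partial t_k}\,\frac{\mathrm{d}t_k}{\mathrm{d}w},

    so that, in principle, only the spike times need to be observed from the hardware; EventProp obtains the factors dt_k/dw by integrating adjoint variables backward in time and sampling them at the recorded spike times, which is what allows the sparse, event-based observation scheme described above.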

  5. Book ; Online: Simulation-based Inference for Model Parameterization on Analog Neuromorphic Hardware

    Kaiser, Jakob / Stock, Raphael / Müller, Eric / Schemmel, Johannes / Schmitt, Sebastian

    2023  

    Abstract The BrainScaleS-2 (BSS-2) system implements physical models of neurons as well as synapses and aims for an energy-efficient and fast emulation of biological neurons. When replicating neuroscientific experiment results, a major challenge is finding suitable model parameters. This study investigates the suitability of the sequential neural posterior estimation (SNPE) algorithm for parameterizing a multi-compartmental neuron model emulated on the BSS-2 analog neuromorphic hardware system. In contrast to other optimization methods such as genetic algorithms or stochastic searches, the SNPE algorithm belongs to the class of approximate Bayesian computation (ABC) methods and estimates the posterior distribution of the model parameters; access to the posterior allows classifying the confidence in parameter estimations and unveiling correlations between model parameters. In previous applications, the SNPE algorithm showed a higher computational efficiency than traditional ABC methods. For our multi-compartmental model, we show that the approximated posterior is in agreement with experimental observations and that the identified correlations between parameters are in agreement with theoretical expectations. Furthermore, we show that the algorithm can deal with high-dimensional observations and parameter spaces. These results suggest that the SNPE algorithm is a promising approach for automating the parameterization of complex models, especially when dealing with characteristic properties of analog neuromorphic substrates, such as trial-to-trial variations or limited parameter ranges.
    Keywords Computer Science - Neural and Evolutionary Computing
    Subject code 006
    Publishing date 2023-03-28
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

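    Code sketch: the record names the SNPE algorithm but no software package. The snippet below uses the open-source sbi toolbox, which implements SNPE, purely as an illustration; the two-parameter prior and the toy simulator standing in for a BrainScaleS-2 emulation run are invented for this example.

      import torch
      from sbi.inference import SNPE
      from sbi.utils import BoxUniform

      # Toy stand-in for the hardware emulation: maps model parameters to summary statistics.
      def simulator(theta):
          return theta + 0.05 * torch.randn_like(theta)

      prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))
      theta = prior.sample((1000,))
      x = simulator(theta)

      inference = SNPE(prior=prior)
      density_estimator = inference.append_simulations(theta, x).train()
      posterior = inference.build_posterior(density_estimator)

      x_o = torch.tensor([0.6, 0.3])              # "observed" summary statistics
      samples = posterior.sample((500,), x=x_o)   # samples from the approximate posterior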

  6. Book ; Online ; Thesis: Accelerated neuromorphic cybernetics

    Schreiber, Korbinian [Author] / Schemmel, Johannes [Academic Advisor]

    2021  

    Author's details Korbinian Schreiber ; Advisor: Johannes Schemmel
    Keywords Natural Sciences ; Science
    Subject code sg500
    Language English
    Publisher Universitätsbibliothek Heidelberg
    Publishing place Heidelberg
    Document type Book ; Online ; Thesis
    Database Digital theses on the web

  7. Book ; Online: Towards Large-scale Network Emulation on Analog Neuromorphic Hardware

    Arnold, Elias / Spilger, Philipp / Straub, Jan V. / Müller, Eric / Dold, Dominik / Meoni, Gabriele / Schemmel, Johannes

    2024  

    Abstract We present a novel software feature for the BrainScaleS-2 accelerated neuromorphic platform that facilitates the emulation of partitioned large-scale spiking neural networks. This approach is well suited for many deep spiking neural networks, where the constraint of the largest recurrent subnetwork fitting on the substrate or the limited fan-in of neurons is often not a limitation in practice. We demonstrate the training of two deep spiking neural network models, using the MNIST and EuroSAT datasets, that exceed the physical size constraints of a single-chip BrainScaleS-2 system. The ability to emulate and train networks larger than the substrate provides a pathway for accurate performance evaluation in planned or scaled systems, ultimately advancing the development and understanding of large-scale models and neuromorphic computing architectures.
    Keywords Computer Science - Neural and Evolutionary Computing
    Publishing date 2024-01-30
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

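    Code sketch: the partitioning idea described above can be illustrated with a deliberately simplified toy that groups consecutive layers of a feed-forward network into chip-sized chunks. The per-chip neuron budget is a made-up number, and this is not the actual BrainScaleS-2 placement logic.

      # Greedily group consecutive layers so that each group fits a (hypothetical)
      # per-chip neuron budget; the groups would then be emulated one after another,
      # feeding the recorded spikes of one run into the next.
      def partition_layers(layer_sizes, neurons_per_chip=512):
          partitions, current, used = [], [], 0
          for size in layer_sizes:
              if size > neurons_per_chip:
                  raise ValueError("a single layer exceeds the chip capacity")
              if used + size > neurons_per_chip:
                  partitions.append(current)
                  current, used = [], 0
              current.append(size)
              used += size
          if current:
              partitions.append(current)
          return partitions

      # Example: a deep network whose layers together exceed one (hypothetical) chip.
      print(partition_layers([256, 256, 128, 128, 64, 10]))
      # -> [[256, 256], [128, 128, 64, 10]]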

  8. Article ; Online: Emulating Dendritic Computing Paradigms on Analog Neuromorphic Hardware.

    Kaiser, Jakob / Billaudelle, Sebastian / Müller, Eric / Tetzlaff, Christian / Schemmel, Johannes / Schmitt, Sebastian

    Neuroscience

    2021  Volume 489, Page(s) 290–300

    Abstract BrainScaleS-2 is an accelerated and highly configurable neuromorphic system with physical models of neurons and synapses. Beyond networks of spiking point neurons, it allows for the implementation of user-defined neuron morphologies. Passive propagation of electric signals between compartments can be emulated, as can dendritic spikes and plateau potentials. In this paper, three multi-compartment neuron morphologies are chosen to demonstrate passive propagation of postsynaptic potentials, spatio-temporal coincidence detection of synaptic inputs in a dendritic branch, and the replication of the BAC burst firing mechanism found in layer 5 pyramidal neurons of the neocortex.
    MeSH term(s) Action Potentials/physiology ; Dendrites/physiology ; Models, Neurological ; Neurons/physiology ; Pyramidal Cells ; Synapses
    Language English
    Publishing date 2021-08-21
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 196739-3
    ISSN (online) 1873-7544
    ISSN (print) 0306-4522
    DOI 10.1016/j.neuroscience.2021.08.013
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

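    Code sketch: a plain software toy (not hardware code) of two passively coupled leaky compartments, illustrating the coincidence-detection effect mentioned in the abstract: two dendritic input pulses depolarize the soma more strongly when they arrive together than when they are well separated. All parameter values are arbitrary.

      def somatic_peak(delta_steps, n_steps=400, dt=0.1, tau=10.0, g_couple=0.05):
          """Peak somatic voltage for two dendritic pulses separated by delta_steps."""
          v_dend, v_soma, peak = 0.0, 0.0, 0.0
          for t in range(n_steps):
              # two unit current pulses injected into the dendritic compartment
              i_dend = (1.0 if t == 50 else 0.0) + (1.0 if t == 50 + delta_steps else 0.0)
              v_dend += dt * (-v_dend / tau + g_couple * (v_soma - v_dend) + i_dend)
              v_soma += dt * (-v_soma / tau + g_couple * (v_dend - v_soma))
              peak = max(peak, v_soma)
          return peak

      print(somatic_peak(0))     # coincident inputs -> larger somatic depolarization
      print(somatic_peak(200))   # well-separated inputs -> smaller somatic depolarization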

  9. Article: The BrainScaleS-2 Accelerated Neuromorphic System With Hybrid Plasticity.

    Pehle, Christian / Billaudelle, Sebastian / Cramer, Benjamin / Kaiser, Jakob / Schreiber, Korbinian / Stradmann, Yannik / Weis, Johannes / Leibfried, Aron / Müller, Eric / Schemmel, Johannes

    Frontiers in neuroscience

    2022  Volume 16, Page(s) 795876

    Abstract Since the beginning of information processing by electronic components, the nervous system has served as a metaphor for the organization of computational primitives. Brain-inspired computing today encompasses a class of approaches ranging from using novel nano-devices for computation to research into large-scale neuromorphic architectures, such as TrueNorth, SpiNNaker, BrainScaleS, Tianjic, and Loihi. While implementation details differ, spiking neural networks, sometimes referred to as the third generation of neural networks, are the common abstraction used to model computation with such systems. Here we describe the second generation of the BrainScaleS neuromorphic architecture, emphasizing applications enabled by this architecture. It combines a custom analog accelerator core supporting the accelerated physical emulation of bio-inspired spiking neural network primitives with a tightly coupled digital processor and a digital event-routing network.
    Language English
    Publishing date 2022-02-24
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2411902-7
    ISSN (online) 1662-453X
    ISSN (print) 1662-4548
    DOI 10.3389/fnins.2022.795876
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

  10. Book ; Online: hxtorch.snn

    Spilger, Philipp / Arnold, Elias / Blessing, Luca / Mauch, Christian / Pehle, Christian / Müller, Eric / Schemmel, Johannes

    Machine-learning-inspired Spiking Neural Network Modeling on BrainScaleS-2

    2022  

    Abstract Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables the hardware-in-the-loop training of spiking neural networks within PyTorch, including support for auto differentiation in a fully-automated hardware experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.
    Keywords Computer Science - Neural and Evolutionary Computing
    Subject code 006
    Publishing date 2022-12-23
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

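    Code sketch: the gradient-based approach with surrogate gradients mentioned above is commonly realized in PyTorch with a custom autograd function that emits hard spikes in the forward pass and uses a smooth surrogate derivative in the backward pass. The snippet below shows this generic pattern with a fast-sigmoid-style surrogate; it is not the hxtorch.snn API.

      import torch

      class SurrogateSpike(torch.autograd.Function):
          """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass."""

          @staticmethod
          def forward(ctx, membrane, threshold, beta):
              ctx.save_for_backward(membrane)
              ctx.threshold, ctx.beta = threshold, beta
              return (membrane >= threshold).float()     # hard, non-differentiable spike

          @staticmethod
          def backward(ctx, grad_output):
              (membrane,) = ctx.saved_tensors
              # smooth surrogate replaces the zero/undefined derivative of the Heaviside
              surrogate = 1.0 / (1.0 + ctx.beta * (membrane - ctx.threshold).abs()) ** 2
              return grad_output * surrogate, None, None

      def spike_fn(membrane, threshold=1.0, beta=10.0):
          return SurrogateSpike.apply(membrane, threshold, beta)

      # Usage: gradients of a spike-count loss flow back into the membrane traces.
      v = torch.randn(8, 100, requires_grad=True)        # batch x time membrane samples
      spikes = spike_fn(v)
      spikes.sum().backward()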
