LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 273

  1. Article ; Online: For love of neuroscience: The Neuromatch movement.

    Kording, Konrad Paul

    Neuron

    2021  Volume 109, Issue 19, Page(s) 3034–3035

    Abstract In this meeting report, I applaud the Neuromatch community, which runs virtual summer schools and conferences in response to the pandemic. Its members love science, aim to advance our understanding of the brain, and work extremely hard to include everyone.
    MeSH term(s) COVID-19 ; Neurosciences/education ; Neurosciences/trends ; Pandemics ; Teaching ; Videoconferencing
    Language English
    Publishing date 2021-09-23
    Publishing country United States
    Document type Congress
    ZDB-ID 808167-0
    ISSN 1097-4199 ; 0896-6273
    ISSN (online) 1097-4199
    ISSN 0896-6273
    DOI 10.1016/j.neuron.2021.07.021
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Article ; Online: A role for cortical interneurons as adversarial discriminators.

    Benjamin, Ari S / Kording, Konrad P

    PLoS computational biology

    2023  Volume 19, Issue 9, Page(s) e1011484

    Abstract The brain learns representations of sensory information from experience, but the algorithms by which it does so remain unknown. One popular theory formalizes representations as inferred factors in a generative model of sensory stimuli, meaning that learning must improve this generative model and inference procedure. This framework underlies many classic computational theories of sensory learning, such as Boltzmann machines, the Wake/Sleep algorithm, and a more recent proposal that the brain learns with an adversarial algorithm that compares waking and dreaming activity. However, in order for such theories to provide insights into the cellular mechanisms of sensory learning, they must be first linked to the cell types in the brain that mediate them. In this study, we examine whether a subtype of cortical interneurons might mediate sensory learning by serving as discriminators, a crucial component in an adversarial algorithm for representation learning. We describe how such interneurons would be characterized by a plasticity rule that switches from Hebbian plasticity during waking states to anti-Hebbian plasticity in dreaming states. Evaluating the computational advantages and disadvantages of this algorithm, we find that it excels at learning representations in networks with recurrent connections but scales poorly with network size. This limitation can be partially addressed if the network also oscillates between evoked activity and generative samples on faster timescales. Consequently, we propose that an adversarial algorithm with interneurons as discriminators is a plausible and testable strategy for sensory learning in biological systems.
    MeSH term(s) Learning/physiology ; Interneurons ; Brain ; Algorithms ; Sleep
    Language English
    Publishing date 2023-09-28
    Publishing country United States
    Document type Journal Article ; Research Support, N.I.H., Extramural
    ZDB-ID 2193340-6
    ISSN 1553-7358 ; 1553-734X
    ISSN (online) 1553-7358
    ISSN 1553-734X
    DOI 10.1371/journal.pcbi.1011484
    Database MEDical Literature Analysis and Retrieval System OnLINE
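
As a concrete illustration of the rule this abstract describes, here is a minimal sketch of an interneuron acting as a discriminator: the same local update is applied with positive sign during "wake" and negative sign during "dream". The Gaussian activity distributions, logistic response, learning rate, and all names below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for "waking" (sensory-evoked) and "dreamed" (generated)
# activity patterns arriving at a hypothetical discriminator interneuron.
def wake_sample():
    return rng.normal(+0.5, 1.0, size=8)

def dream_sample():
    return rng.normal(-0.5, 1.0, size=8)

w = np.zeros(8)   # input weights of the discriminator unit
b = 0.0           # bias / excitability
eta = 0.05

def response(x):
    # Firing probability through a logistic nonlinearity.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

for _ in range(3000):
    # Wake state: Hebbian-signed update while the unit under-responds
    # to real (evoked) input.
    x = wake_sample()
    err = 1.0 - response(x)
    w += eta * err * x
    b += eta * err
    # Dream state: the same local rule with its sign flipped
    # (anti-Hebbian), depressing inputs driven by generated activity.
    x = dream_sample()
    err = response(x)
    w -= eta * err * x
    b -= eta * err

wake_resp = np.mean([response(wake_sample()) for _ in range(500)])
dream_resp = np.mean([response(dream_sample()) for _ in range(500)])
print(wake_resp > dream_resp)   # the unit now discriminates wake from dream
```

The sign flip between states is exactly what makes the unit a discriminator: it is rewarded for responding to real activity and penalized for responding to generated activity.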

  3. Article ; Online: Neural spiking for causal inference and learning.

    Lansdell, Benjamin James / Kording, Konrad Paul

    PLoS computational biology

    2023  Volume 19, Issue 4, Page(s) e1011005

    Abstract When a neuron is driven beyond its threshold, it spikes. The fact that it does not communicate its continuous membrane potential is usually seen as a computational liability. Here we show that this spiking mechanism allows neurons to produce an unbiased estimate of their causal influence, and a way of approximating gradient descent-based learning. Importantly, neither activity of upstream neurons, which act as confounders, nor downstream non-linearities bias the results. We show how spiking enables neurons to solve causal estimation problems and that local plasticity can approximate gradient descent using spike discontinuity learning.
    MeSH term(s) Learning/physiology ; Neurons/physiology ; Membrane Potentials/physiology ; Action Potentials/physiology ; Models, Neurological
    Language English
    Publishing date 2023-04-04
    Publishing country United States
    Document type Journal Article ; Research Support, N.I.H., Extramural
    ZDB-ID 2193340-6
    ISSN 1553-7358 ; 1553-734X
    ISSN (online) 1553-7358
    ISSN 1553-734X
    DOI 10.1371/journal.pcbi.1011005
    Database MEDical Literature Analysis and Retrieval System OnLINE
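
The spike-discontinuity idea can be sketched in a few lines: a confounded naive estimate versus a comparison of trials that land just above and just below the spiking threshold. The linear reward model and all numbers below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200_000
theta = 1.0     # spiking threshold
beta = 0.5      # true causal effect of a spike on "reward"
gamma = 2.0     # confounding: upstream drive also affects reward directly

c = rng.normal(0.0, 1.0, n)                  # confounder (upstream activity)
u = c + rng.normal(0.0, 0.5, n)              # the neuron's membrane drive
s = (u > theta).astype(float)                # spike iff drive crosses threshold
r = beta * s + gamma * c + rng.normal(0.0, 0.1, n)

# Naive (biased) estimate: reward difference between spike and no-spike trials.
naive = r[s == 1].mean() - r[s == 0].mean()

# Spike-discontinuity estimate: compare only trials marginally above vs.
# marginally below threshold; the confounder is nearly identical on both
# sides of the cut, so the jump isolates the spike's causal effect.
h = 0.05
above = r[(u > theta) & (u < theta + h)].mean()
below = r[(u <= theta) & (u > theta - h)].mean()
rdd = above - below

print(f"naive: {naive:.2f}  discontinuity: {rdd:.2f}  true effect: {beta}")
```

The naive estimate absorbs the confounder and lands far from the true effect, while the discontinuity estimate recovers it, which is the unbiasedness the abstract refers to.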

  4. Article ; Online: Overfitting to 'predict' suicidal ideation.

    Verstynen, Timothy / Kording, Konrad Paul

    Nature human behaviour

    2023  Volume 7, Issue 5, Page(s) 680–681

    MeSH term(s) Humans ; Suicidal Ideation ; Suicide ; Risk Factors
    Language English
    Publishing date 2023-04-06
    Publishing country England
    Document type Letter ; Comment
    ISSN 2397-3374
    ISSN (online) 2397-3374
    DOI 10.1038/s41562-023-01560-6
    Database MEDical Literature Analysis and Retrieval System OnLINE

  5. Article ; Online: Why the simplest explanation isn't always the best.

    Dyer, Eva L / Kording, Konrad

    Proceedings of the National Academy of Sciences of the United States of America

    2023  Volume 120, Issue 52, Page(s) e2319169120

    Language English
    Publishing date 2023-12-20
    Publishing country United States
    Document type Journal Article ; Comment
    ZDB-ID 209104-5
    ISSN 1091-6490 ; 0027-8424
    ISSN (online) 1091-6490
    ISSN 0027-8424
    DOI 10.1073/pnas.2319169120
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Article ; Online: The study of plasticity has always been about gradients.

    Richards, Blake Aaron / Kording, Konrad Paul

    The Journal of physiology

    2023  Volume 601, Issue 15, Page(s) 3141–3149

    Abstract The experimental study of learning and plasticity has always been driven by an implicit question: how can physiological changes be adaptive and improve performance? For example, in Hebbian plasticity only synapses from presynaptic neurons that were active are changed, avoiding useless changes. Similarly, in dopamine-gated learning synapse changes depend on reward or lack thereof and do not change when everything is predictable. Within machine learning we can make the question of which changes are adaptive concrete: performance improves when changes correlate with the gradient of an objective function quantifying performance. This result is general for any system that improves through small changes. As such, physiology has always implicitly been seeking mechanisms that allow the brain to approximate gradients. Coming from this perspective we review the existing literature on plasticity-related mechanisms, and we show how these mechanisms relate to gradient estimation. We argue that gradients are a unifying idea to explain the many facets of neuronal plasticity.
    MeSH term(s) Neuronal Plasticity/physiology ; Neurons/physiology ; Dopamine ; Synapses/physiology ; Brain
    Chemical Substances Dopamine (VTD58H1Z2X)
    Language English
    Publishing date 2023-05-01
    Publishing country England
    Document type Review ; Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 3115-x
    ISSN 1469-7793 ; 0022-3751
    ISSN (online) 1469-7793
    ISSN 0022-3751
    DOI 10.1113/JP282747
    Database MEDical Literature Analysis and Retrieval System OnLINE
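
The review's central claim, that small changes improve performance exactly when they correlate with the gradient of an objective, can be checked numerically. The quadratic objective, dimensionality, and perturbation scale below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# A stand-in objective: squared distance to a target parameter vector.
w_star = rng.normal(size=10)
w = rng.normal(size=10)

def loss(w):
    return 0.5 * np.sum((w - w_star) ** 2)

grad = w - w_star   # gradient of the loss at the current parameters
eps = 1e-3

# Draw many small random parameter changes and check whether improvement
# is predicted by correlation (inner product) with the negative gradient.
agree = 0
trials = 1000
for _ in range(trials):
    dw = eps * rng.normal(size=10)
    aligned = dw @ (-grad) > 0          # change correlates with -gradient?
    improved = loss(w + dw) < loss(w)   # change actually helps?
    agree += (aligned == improved)

print(agree / trials)   # close to 1.0: for small changes, correlation with
                        # the gradient is what determines improvement
```

To first order the loss change is the inner product of the change with the gradient, so for small steps the two criteria almost always agree, which is the sense in which any small adaptive change is implicitly a gradient estimate.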

  7. Book: Sensory cue integration

    Trommershäuser, Julia / Körding, Konrad P. / Landy, Michael S.

    (Computational neuroscience)

    2011  

    Author's details ed. by Julia Trommershäuser ; Konrad P. Körding ; Michael S. Landy
    Series title Computational neuroscience
    Keywords Sensation / physiology ; Cues ; Neural Networks (Computer) ; Computer Simulation ; Models, Neurological ; Perception ; Physiology ; Computer simulation ; Stimulus ; Senses
    Subject The five senses ; Sense ; Stimulus ; Stimulation ; Perception ; Sensory process ; Sensory modality ; Sense perception ; Sensory perception ; Aisthesis ; Aisthetics ; Perceptual process ; Sensation ; Simulation ; Computer ; Simulation technique ; System simulation ; Digital simulation ; Computer model ; Computer simulation ; Human physiology ; Human ; Bodily function
    Language English
    Size XIII, 446 p. : ill., diagrams
    Publisher Oxford Univ. Press
    Publishing place Oxford [et al.]
    Publishing country Great Britain
    Document type Book
    HBZ-ID HT017029253
    ISBN 978-0-19-538724-7 ; 0-19-538724-4
    Database Catalogue ZB MED Medicine, Health

  8. Article ; Online: Might a Single Neuron Solve Interesting Machine Learning Problems Through Successive Computations on Its Dendritic Tree?

    Jones, Ilenna Simone / Kording, Konrad Paul

    Neural computation

    2021  Volume 33, Issue 6, Page(s) 1554–1571

    Abstract Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how aspects of a dendritic tree, such as its branched morphology or its repetition of presynaptic inputs, determine neural computation beyond this apparent nonlinearity. Here we use a simple model where the dendrite is implemented as a sequence of thresholded linear units. We manipulate the architecture of this model to investigate the impacts of binary branching constraints and repetition of synaptic inputs on neural computation. We find that models with such manipulations can perform well on machine learning tasks, such as Fashion MNIST or Extended MNIST. We find that model performance on these tasks is limited by binary tree branching and dendritic asymmetry and is improved by the repetition of synaptic inputs to different dendritic branches. These computational experiments further neuroscience theory on how different dendritic properties might determine neural computation of clearly defined tasks.
    MeSH term(s) Dendrites ; Machine Learning ; Models, Neurological ; Neurons ; Synapses
    Language English
    Publishing date 2021-09-08
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1025692-1
    ISSN 1530-888X ; 0899-7667
    ISSN (online) 1530-888X
    ISSN 0899-7667
    DOI 10.1162/neco_a_01390
    Database MEDical Literature Analysis and Retrieval System OnLINE
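
A minimal version of the abstract's model, a dendrite built from thresholded linear units with synaptic inputs repeated across branches, can be written directly. The weights, thresholds, and the XOR task below are hand-picked illustrative assumptions, not the paper's trained models.

```python
import numpy as np

def relu(x):
    return max(0.0, x)

# A dendritic branch as a thresholded linear unit: weighted sum of its
# inputs minus a threshold, passed through a rectifying nonlinearity.
def branch(inputs, weights, threshold):
    return relu(np.dot(weights, inputs) - threshold)

# Hand-built two-branch "dendrite" computing XOR of two synaptic inputs.
# Note the repetition of both inputs onto both branches, mirroring the
# abstract's point that input repetition expands what the tree computes.
def dendritic_xor(a, b):
    u1 = branch([a, b], [1.0, 1.0], 0.5)   # fires if at least one input on
    u2 = branch([a, b], [1.0, 1.0], 1.5)   # fires only if both inputs on
    soma = u1 - 3.0 * u2 - 0.25            # branch outputs summed at the soma
    return int(soma > 0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, dendritic_xor(a, b))   # reproduces XOR: 0,1,1,0
```

Even this two-branch tree computes a function no single thresholded unit can, which is the intuition behind treating the dendritic tree as a small multi-layer network.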

  9. Article ; Online: Do Biological Constraints Impair Dendritic Computation?

    Jones, Ilenna Simone / Kording, Konrad Paul

    Neuroscience

    2021  Volume 489, Page(s) 262–274

    Abstract Computations on the dendritic trees of neurons have important constraints. Voltage-dependent conductances in dendrites are not similar to arbitrary direct-current generation: they are the basis for dendritic nonlinearities, and they do not allow converting positive currents into negative currents. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these biological constraints. Here we simulate models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by these constraints but may benefit from them. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks.
    MeSH term(s) Action Potentials/physiology ; Dendrites/physiology ; Models, Neurological ; Neural Networks, Computer ; Neurons/physiology ; Synapses/physiology
    Language English
    Publishing date 2021-08-06
    Publishing country United States
    Document type Journal Article ; Research Support, N.I.H., Extramural ; Research Support, Non-U.S. Gov't ; Research Support, U.S. Gov't, Non-P.H.S.
    ZDB-ID 196739-3
    ISSN 1873-7544 ; 0306-4522
    ISSN (online) 1873-7544
    ISSN 0306-4522
    DOI 10.1016/j.neuroscience.2021.07.036
    Database MEDical Literature Analysis and Retrieval System OnLINE
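
One of the constraints the abstract discusses, that positive currents cannot be converted into negative currents, can be sketched as projected gradient descent with weights clipped to the non-negative orthant. The single-unit model, toy task, and all parameters below are illustrative assumptions, not the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy monotone task: output 1 when at least 2 of 3 inputs are active.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
y = (X.sum(axis=1) >= 2).astype(float)

w = rng.uniform(0.0, 0.1, size=3)   # excitatory synaptic weights
theta = 0.0                         # firing threshold
eta = 0.5

for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w - theta)))   # unit's response
    err = p - y
    w -= eta * (X.T @ err) / len(y)              # gradient step on the weights
    theta += eta * err.mean()                    # gradient step on the threshold
    # Biological constraint: synapses cannot flip sign, so weights are
    # projected back onto the non-negative orthant after every update.
    w = np.maximum(w, 0.0)

pred = (X @ w - theta > 0).astype(float)
print((pred == y).all())   # the sign-constrained unit still learns the task
```

For monotone tasks like this one the constraint costs nothing, which parallels the abstract's finding that performance is not hurt by the biological restrictions.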

  10. Article ; Online: Downstream network transformations dissociate neural activity from causal functional contributions.

    Fakhar, Kayson / Dixit, Shrey / Hadaeghi, Fatemeh / Kording, Konrad P / Hilgetag, Claus C

    Scientific reports

    2024  Volume 14, Issue 1, Page(s) 2103

    Abstract Neuroscientists rely on distributed spatio-temporal patterns of neural activity to understand how neural units contribute to cognitive functions and behavior. However, the extent to which neural activity reliably indicates a unit's causal contribution to the behavior is not well understood. To address this issue, we provide a systematic multi-site perturbation framework that captures time-varying causal contributions of elements to a collectively produced outcome. Applying our framework to intuitive toy examples and artificial neural networks revealed that recorded activity patterns of neural elements may not be generally informative of their causal contribution due to activity transformations within a network. Overall, our findings emphasize the limitations of inferring causal mechanisms from neural activities and offer a rigorous lesioning framework for elucidating causal neural contributions.
    MeSH term(s) Cognition ; Neurons ; Causality ; Intuition ; Neural Networks, Computer
    Language English
    Publishing date 2024-01-24
    Publishing country England
    Document type Journal Article
    ZDB-ID 2615211-3
    ISSN 2045-2322 ; 2045-2322
    ISSN (online) 2045-2322
    ISSN 2045-2322
    DOI 10.1038/s41598-024-52423-7
    Database MEDical Literature Analysis and Retrieval System OnLINE
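
The dissociation between activity and causal contribution can be shown with a deliberately tiny example in the spirit of the abstract's lesioning framework: a unit's contribution is measured as its average marginal effect on the outcome across multi-site lesion sets (a Shapley-style average). The three-unit network and its numbers are invented for illustration.

```python
from itertools import combinations

# Tiny "network": three units feed one output. Unit A is highly active,
# but its downstream weight is zero, so it contributes nothing causally.
activity = {"A": 5.0, "B": 1.0, "C": 1.0}
w_out = {"A": 0.0, "B": 2.0, "C": 1.0}

def outcome(lesioned):
    # Network output with a given set of units silenced
    # (a multi-site perturbation).
    return sum(0.0 if u in lesioned else activity[u] * w_out[u]
               for u in activity)

units = list(activity)

def contribution(u):
    # Average marginal effect of keeping u intact, over all lesion
    # configurations of the remaining units.
    others = [v for v in units if v != u]
    total, count = 0.0, 0
    for r in range(len(others) + 1):
        for intact in combinations(others, r):
            lesioned = set(units) - set(intact) - {u}
            total += outcome(lesioned) - outcome(lesioned | {u})
            count += 1
    return total / count

for u in units:
    print(u, activity[u], contribution(u))
# A has the highest activity but zero causal contribution;
# B and C carry the outcome.
```

A downstream transformation (here, just a zero weight) fully decouples a unit's recorded activity from its causal role, which is the abstract's cautionary point.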
