LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 37

  1. Book ; Online: Testing GLOM's ability to infer wholes from ambiguous parts

    Culp, Laura / Sabour, Sara / Hinton, Geoffrey E.

    2022  

    Abstract The GLOM architecture proposed by Hinton [2021] is a recurrent neural network for parsing an image into a hierarchy of wholes and parts. When a part is ambiguous, GLOM assumes that the ambiguity can be resolved by allowing the part to make multi-modal predictions for the pose and identity of the whole to which it belongs and then using attention to similar predictions coming from other possibly ambiguous parts to settle on a common mode that is predicted by several different parts. In this study, we describe a highly simplified version of GLOM that allows us to assess the effectiveness of this way of dealing with ambiguity. Our results show that, with supervised training, GLOM is able to successfully form islands of very similar embedding vectors for all of the locations occupied by the same object and it is also robust to strong noise injections in the input and to out-of-distribution input transformations.
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2022-11-29
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

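The island-forming mechanism the abstract describes (locations attending to similar embeddings at other locations until same-object locations share a vector) can be sketched as a toy in NumPy. This is a hypothetical illustration of the attention step only, not the paper's model: the group structure, dimensions, and temperature `tau` are all invented for the demo.

```python
import numpy as np

def attention_step(E, tau=0.1):
    """One GLOM-style attention step: each location's embedding moves toward a
    weighted average of all locations' embeddings, with the weights
    concentrated on embeddings similar to its own."""
    logits = E @ E.T / tau                         # pairwise similarities
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)              # softmax over locations
    E = w @ E                                      # attention-weighted average
    return E / np.linalg.norm(E, axis=1, keepdims=True)

rng = np.random.default_rng(0)
# Ten locations and two underlying objects: locations 0-4 start near one
# direction, locations 5-9 near an orthogonal one.
E = np.vstack([[1.0, 0.0] + 0.1 * rng.normal(size=(5, 2)),
               [0.0, 1.0] + 0.1 * rng.normal(size=(5, 2))])
E /= np.linalg.norm(E, axis=1, keepdims=True)
for _ in range(20):
    E = attention_step(E)

# Locations on the same object settle on near-identical vectors (an "island"),
# while the two islands stay distinct.
within = (E[:5] @ E[:5].T).min()
across = (E[:5] @ E[5:].T).max()
```

Because attention favours similar predictions, each group averages almost exclusively over itself, so within-group similarity approaches 1 while the two islands remain separate.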

  2. Book ; Online: The Forward-Forward Algorithm: Some Preliminary Investigations

    Hinton, Geoffrey

    2022  

    Abstract The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth further investigation. The Forward-Forward algorithm replaces the forward and backward passes of backpropagation by two forward passes, one with positive (i.e. real) data and the other with negative data which could be generated by the network itself. Each layer has its own objective function which is simply to have high goodness for positive data and low goodness for negative data. The sum of the squared activities in a layer can be used as the goodness but there are many other possibilities, including minus the sum of the squared activities. If the positive and negative passes could be separated in time, the negative passes could be done offline, which would make the learning much simpler in the positive pass and allow video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.
    Keywords Computer Science - Machine Learning
    Subject code 006
    Publishing date 2022-12-26
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

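The layer-local objective from the abstract (high goodness on positive data, low goodness on negative data, with goodness the sum of squared activities) can be sketched for a single layer. This is a toy sketch, not the paper's implementation: the threshold `theta`, the logistic loss form, and the synthetic positive/negative data are assumptions for illustration.

```python
import numpy as np

def goodness(h):
    """Goodness of a layer: the sum of squared activities."""
    return (h ** 2).sum(axis=1)

def ff_layer_step(W, x_pos, x_neg, theta=2.0, lr=0.05):
    """One gradient step on a single layer's local objective: raise goodness
    above the threshold theta for positive data and lower it below theta for
    negative data, via the loss -log(sigmoid(sign * (g - theta)))."""
    def grad(x, sign):
        h = np.maximum(x @ W, 0.0)                        # ReLU activities
        p = 1.0 / (1.0 + np.exp(-sign * (goodness(h) - theta)))
        dh = 2.0 * h * (-sign * (1.0 - p))[:, None]       # chain rule: dg/dh = 2h
        return x.T @ dh / len(x)
    return W - lr * (grad(x_pos, +1.0) + grad(x_neg, -1.0))

rng = np.random.default_rng(1)
v = np.array([0.5, 0.5, 0.5, 0.5])        # direction shared by "real" data
u = np.array([0.5, -0.5, 0.5, -0.5])      # orthogonal direction for negative data
x_pos = v + 0.05 * rng.normal(size=(64, 4))
x_neg = u + 0.05 * rng.normal(size=(64, 4))
W = 0.1 * rng.normal(size=(4, 8))
for _ in range(300):
    W = ff_layer_step(W, x_pos, x_neg)
g_pos = goodness(np.maximum(x_pos @ W, 0.0)).mean()
g_neg = goodness(np.maximum(x_neg @ W, 0.0)).mean()
```

After training, the layer responds strongly to the positive-data direction and weakly to the negative one, so goodness separates the two passes with no backward pass through the layer.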

  3. Article ; Online: Machine learning for neuroscience.

    Hinton, Geoffrey E

    Neural systems & circuits

    2011  Volume 1, Issue 1, Page(s) 12

    Language English
    Publishing date 2011-08-15
    Publishing country England
    Document type Journal Article
    ZDB-ID 2595516-0
    ISSN (online) 2042-1001
    DOI 10.1186/2042-1001-1-12
    Database MEDical Literature Analysis and Retrieval System OnLINE


  4. Article ; Online: Adaptive Mixtures of Local Experts.

    Jacobs, Robert A / Jordan, Michael I / Nowlan, Steven J / Hinton, Geoffrey E

    Neural computation

    1991  Volume 3, Issue 1, Page(s) 79–87

    Abstract We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network.
    Language English
    Publishing date 2019-05-29
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1025692-1
    ISSN (online) 1530-888X
    ISSN 0899-7667
    DOI 10.1162/neco.1991.3.1.79
    Database MEDical Literature Analysis and Retrieval System OnLINE

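The division of labour described in the abstract, separate expert networks combined by a gating network, can be shown with a forward pass on a toy piecewise-linear task. All weights here are hand-set hypothetical values (scalar linear experts, a linear gating network) chosen so the gate routes each half of the input range to its own expert; the paper trains these weights instead.

```python
import numpy as np

def moe_predict(x, expert_w, gate_w):
    """Mixture of experts, forward pass: each expert makes a prediction and a
    softmax gating network decides how much each contributes per case."""
    preds = np.outer(x, expert_w)                  # (n, n_experts) expert outputs
    logits = np.outer(x, gate_w)                   # gating depends on the input
    g = np.exp(logits - logits.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)              # gates sum to 1 per case
    return (g * preds).sum(axis=1), g

# Piecewise-linear task: y = 2x for x > 0 and y = -x for x < 0.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.where(x > 0, 2 * x, -x)
# Hypothetical weights: expert 0 handles x > 0, expert 1 handles x < 0.
pred, gates = moe_predict(x, expert_w=np.array([2.0, -1.0]),
                          gate_w=np.array([10.0, -10.0]))
err = np.abs(pred - y).max()
```

Away from the boundary the gate saturates, so each case is effectively handled by a single simple expert, which is the modular decomposition the paper demonstrates on vowel discrimination.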

  5. Article ; Online: Learning to represent visual input.

    Hinton, Geoffrey E

    Philosophical transactions of the Royal Society of London. Series B, Biological sciences

    2009  Volume 365, Issue 1537, Page(s) 177–184

    Abstract One of the central problems in computational neuroscience is to understand how the object-recognition pathway of the cortex learns a deep hierarchy of nonlinear feature detectors. Recent progress in machine learning shows that it is possible to learn deep hierarchies without requiring any labelled data. The feature detectors are learned one layer at a time and the goal of the learning procedure is to form a good generative model of images, not to predict the class of each image. The learning procedure only requires the pairwise correlations between the activations of neuron-like processing units in adjacent layers. The original version of the learning procedure is derived from a quadratic 'energy' function but it can be extended to allow third-order, multiplicative interactions in which neurons gate the pairwise interactions between other neurons. A technique for factoring the third-order interactions leads to a learning module that again has a simple learning rule based on pairwise correlations. This module looks remarkably like modules that have been proposed by both biologists trying to explain the responses of neurons and engineers trying to create systems that can recognize objects.
    MeSH term(s) Computer Simulation ; Humans ; Learning/physiology ; Models, Neurological ; Neural Networks, Computer ; Visual Pathways/physiology
    Language English
    Publishing date 2009-12-10
    Publishing country England
    Document type Journal Article ; Review
    ZDB-ID 208382-6
    ISSN (online) 1471-2970
    ISSN 0080-4622 ; 0264-3839 ; 0962-8436
    DOI 10.1098/rstb.2009.0200
    Database MEDical Literature Analysis and Retrieval System OnLINE

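A learning rule that "only requires the pairwise correlations between the activations of units in adjacent layers", as the abstract puts it, can be sketched as a contrastive-divergence update for a restricted Boltzmann machine. This is a simplified mean-field variant (no sampling, biases omitted) on invented toy data, not the procedure from the paper itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(W, v0, lr=0.05):
    """One contrastive-divergence-style step (mean-field, biases omitted).
    The update uses only pairwise correlations between adjacent layers:
    data correlations v0'h0 minus reconstruction correlations v1'h1."""
    h0 = sigmoid(v0 @ W)          # hidden activations given the data
    v1 = sigmoid(h0 @ W.T)        # reconstruction of the visible layer
    h1 = sigmoid(v1 @ W)          # hidden activations given the reconstruction
    return W + lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

rng = np.random.default_rng(2)
v0 = np.tile([1.0, 1.0, 0.0, 0.0], (32, 1))   # toy data: first two units on
W = 0.01 * rng.normal(size=(4, 3))
err0 = np.mean((sigmoid(sigmoid(v0 @ W) @ W.T) - v0) ** 2)
for _ in range(500):
    W = cd1_step(W, v0)
err1 = np.mean((sigmoid(sigmoid(v0 @ W) @ W.T) - v0) ** 2)
```

Reconstruction error falls as the weights come to model the data, even though each update only correlates activities in adjacent layers.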

  6. Article: Learning multiple layers of representation.

    Hinton, Geoffrey E

    Trends in cognitive sciences

    2007  Volume 11, Issue 10, Page(s) 428–434

    Abstract To achieve its impressive performance in tasks such as speech perception or object recognition, the brain extracts multiple levels of representation from the sensory input. Backpropagation was the first computationally efficient model of how neural networks could learn multiple layers of representation, but it required labeled training data and it did not work well in deep networks. The limitations of backpropagation learning can now be overcome by using multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it. Learning multilayer generative models might seem difficult, but a recent discovery makes it easy to learn nonlinear distributed representations one layer at a time.
    MeSH term(s) Brain/physiology ; Humans ; Learning/physiology ; Models, Psychological ; Nerve Net/physiology
    Language English
    Publishing date 2007-10
    Publishing country England
    Document type Journal Article ; Research Support, Non-U.S. Gov't ; Review
    ZDB-ID 2010989-1
    ISSN (online) 1879-307X
    ISSN 1364-6613
    DOI 10.1016/j.tics.2007.09.004
    Database MEDical Literature Analysis and Retrieval System OnLINE


  7. Article: To recognize shapes, first learn to generate images.

    Hinton, Geoffrey E

    Progress in brain research

    2007  Volume 165, Page(s) 535–547

    Abstract The uniformity of the cortical architecture and the ability of functions to move to different areas of cortex following early damage strongly suggest that there is a single basic learning algorithm for extracting underlying structure from richly structured, high-dimensional sensory data. There have been many attempts to design such an algorithm, but until recently they all suffered from serious computational weaknesses. This chapter describes several of the proposed algorithms and shows how they can be combined to produce hybrid methods that work efficiently in networks with many layers and millions of adaptive connections.
    MeSH term(s) Algorithms ; Cerebral Cortex/cytology ; Cerebral Cortex/physiology ; Humans ; Learning/physiology ; Models, Neurological ; Neural Networks (Computer) ; Pattern Recognition, Automated
    Language English
    Publishing date 2007
    Publishing country Netherlands
    Document type Journal Article ; Research Support, Non-U.S. Gov't ; Review
    ISSN 0079-6123
    DOI 10.1016/S0079-6123(06)65034-6
    Database MEDical Literature Analysis and Retrieval System OnLINE


  8. Book ; Online: Stacked Capsule Autoencoders

    Kosiorek, Adam R. / Sabour, Sara / Teh, Yee Whye / Hinton, Geoffrey E.

    2019  

    Abstract Objects are composed of a set of geometrically organized parts. We introduce an unsupervised capsule autoencoder (SCAE), which explicitly uses geometric relationships between parts to reason about objects. Since these relationships do not depend on the viewpoint, our model is robust to viewpoint changes. SCAE consists of two stages. In the first stage, the model predicts presences and poses of part templates directly from the image and tries to reconstruct the image by appropriately arranging the templates. In the second stage, SCAE predicts parameters of a few object capsules, which are then used to reconstruct part poses. Inference in this model is amortized and performed by off-the-shelf neural encoders, unlike in previous capsule networks. We find that object capsule presences are highly informative of the object class, which leads to state-of-the-art results for unsupervised classification on SVHN (55%) and MNIST (98.7%). The code is available at https://github.com/google-research/google-research/tree/master/stacked_capsule_autoencoders

    Comment: NeurIPS 2019; 14 pages, 7 figures, 4 tables, code is available at https://github.com/google-research/google-research/tree/master/stacked_capsule_autoencoders
    Keywords Statistics - Machine Learning ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning ; Computer Science - Neural and Evolutionary Computing
    Subject code 005 ; 004
    Publishing date 2019-06-16
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  9. Book ; Online: Unsupervised part representation by Flow Capsules

    Sabour, Sara / Tagliasacchi, Andrea / Yazdani, Soroosh / Hinton, Geoffrey E. / Fleet, David J.

    2020  

    Abstract Capsule networks are designed to parse an image into a hierarchy of objects, parts and relations. While promising, they remain limited by an inability to learn effective low level part descriptions. To address this issue we propose a novel self-supervised method for learning part descriptors of an image. During training, we exploit motion as a powerful perceptual cue for part definition, using an expressive decoder for part generation and layered image formation with occlusion. Experiments demonstrate robust part discovery in the presence of multiple objects, cluttered backgrounds, and significant occlusion. The resulting part descriptors, a.k.a. part capsules, are decoded into shape masks, filling in occluded pixels, along with relative depth on single images. We also report unsupervised object classification using our capsule parts in a stacked capsule autoencoder.
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning
    Publishing date 2020-11-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  10. Article ; Online: Learning to represent spatial transformations with factored higher-order Boltzmann machines.

    Memisevic, Roland / Hinton, Geoffrey E

    Neural computation

    2010  Volume 22, Issue 6, Page(s) 1473–1492

    Abstract To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
    MeSH term(s) Algorithms ; Artificial Intelligence ; Image Processing, Computer-Assisted/methods ; Mathematical Concepts ; Neural Networks (Computer) ; Pattern Recognition, Automated/methods ; Pattern Recognition, Visual/physiology ; Space Perception/physiology
    Language English
    Publishing date 2010-06
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1025692-1
    ISSN (online) 1530-888X
    ISSN 0899-7667
    DOI 10.1162/neco.2010.01-09-953
    Database MEDical Literature Analysis and Retrieval System OnLINE

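The abstract's low-rank approximation, replacing the cubic three-way interaction tensor with a sum of factors that are three-way outer products, can be verified directly in NumPy. Dimensions and weights below are arbitrary toy values.

```python
import numpy as np

def hidden_input_full(x, y, T):
    """Hidden input via the full cubic interaction tensor T[i, j, k]."""
    return np.einsum('i,j,ijk->k', x, y, T)

def hidden_input_factored(x, y, Wx, Wy, Wh):
    """Factored form: each factor f pairs a filter on the first image with a
    filter on the second; their product gates the hidden units."""
    fx = Wx.T @ x          # (F,) responses of the x-image filters
    fy = Wy.T @ y          # (F,) responses of the y-image filters
    return Wh @ (fx * fy)  # (K,) project factor products to the hidden units

rng = np.random.default_rng(3)
I, J, K, F = 6, 6, 4, 3
Wx = rng.normal(size=(I, F))
Wy = rng.normal(size=(J, F))
Wh = rng.normal(size=(K, F))
# The factored model is exactly a sum of F three-way outer products:
T = np.einsum('if,jf,kf->ijk', Wx, Wy, Wh)
x, y = rng.normal(size=I), rng.normal(size=J)
```

The full tensor has I·J·K = 144 parameters here, while the factored form has (I+J+K)·F = 48, which is why the approximation scales to larger image patches; each factor column can also be read as an image filter, matching the learned filter pairs the paper reports.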
