LIVIVO - The Search Portal for Life Sciences

Advanced search

Your recent searches

  1. AU="Sanguineti, M."
  2. AU="Cowie, Bruce"
  3. AU="Vílchez-Acosta, Alba"
  4. AU="Schierbaum, Luca"
  5. AU="Manea, M"
  6. AU=Slimano Florian
  7. AU="Awais, M."
  8. AU="Youn, Jong-Ung"
  9. AU="Song, Min-Gyu"
  10. AU="Sawada, Takashi"
  11. AU="Ferrucci, Francesco"
  12. AU="Agrawal, Karan"

Search results

Hits 1 - 10 of 122 in total

Search options

  1. Article ; Online: Approximation of classifiers by deep perceptron networks.

    Kůrková, Věra / Sanguineti, Marcello

    Neural networks : the official journal of the International Neural Network Society

    2023  Volume 165, Page(s) 654–661

    Abstract We employ properties of high-dimensional geometry to obtain some insights into capabilities of deep perceptron networks to classify large data sets. We derive conditions on network depths, types of activation functions, and numbers of parameters that imply that approximation errors behave almost deterministically. We illustrate general results by concrete cases of popular activation functions: Heaviside, ramp sigmoid, rectified linear, and rectified power. Our probabilistic bounds on approximation errors are derived using concentration of measure type inequalities (method of bounded differences) and concepts from statistical learning theory.
    MeSH term(s) Neural Networks, Computer
    Language English
    Publication date 2023-06-07
    Country of publication United States
    Document type Journal Article
    ZDB-ID 740542-x
    ISSN 1879-2782 ; 0893-6080
    ISSN (online) 1879-2782
    ISSN 0893-6080
    DOI 10.1016/j.neunet.2023.06.004
    Data source MEDical Literature Analysis and Retrieval System OnLINE
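
    The "method of bounded differences" cited in the abstract above is McDiarmid's concentration inequality; in its standard textbook form (not the paper's own notation or constants), for independent X_1, ..., X_n and a function f whose value changes by at most c_i when only the i-th argument changes,

        \Pr\bigl( \lvert f(X_1,\dots,X_n) - \mathbb{E}[f(X_1,\dots,X_n)] \rvert \ge t \bigr) \;\le\; 2\exp\!\left( \frac{-2t^2}{\sum_{i=1}^{n} c_i^2} \right).

    Bounds of this type are what let the approximation errors behave "almost deterministically" in high dimension, as the abstract states.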

  2. Article ; Online: Classification by Sparse Neural Networks.

    Kurkova, Vera / Sanguineti, Marcello

    IEEE transactions on neural networks and learning systems

    2019  Volume 30, Issue 9, Page(s) 2746–2754

    Abstract The choice of dictionaries of computational units suitable for efficient computation of binary classification tasks is investigated. To deal with exponentially growing sets of tasks with increasingly large domains, a probabilistic model is introduced. The relevance of tasks for a given application area is modeled by a product probability distribution on the set of all binary-valued functions. Approximate measures of network sparsity are studied in terms of variational norms tailored to dictionaries of computational units. Bounds on these norms are proven using the Chernoff-Hoeffding bound on sums of independent random variables that need not be identically distributed. Consequences of the probabilistic results for the choice of dictionaries of computational units are derived. It is shown that when a priori knowledge of a type of classification tasks is limited, then the sparsity may be achieved only at the expense of large sizes of dictionaries.
    Language English
    Publication date 2019-01-10
    Country of publication United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ISSN 2162-2388
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2018.2888517
    Data source MEDical Literature Analysis and Retrieval System OnLINE
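
    The Chernoff-Hoeffding bound invoked in the abstract above applies to independent, not necessarily identically distributed random variables X_i with a_i <= X_i <= b_i. In its standard form (textbook statement, not quoted from the paper), with S_n = X_1 + ... + X_n,

        \Pr\bigl( \lvert S_n - \mathbb{E}[S_n] \rvert \ge t \bigr) \;\le\; 2\exp\!\left( \frac{-2t^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right).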

  3. Article ; Online: Probabilistic lower bounds for approximation by shallow perceptron networks.

    Kůrková, Věra / Sanguineti, Marcello

    Neural networks : the official journal of the International Neural Network Society

    2017  Volume 91, Page(s) 34–41

    Abstract Limitations of approximation capabilities of shallow perceptron networks are investigated. Lower bounds on approximation errors are derived for binary-valued functions on finite domains. It is proven that unless the number of network units is sufficiently large (larger than any polynomial of the logarithm of the size of the domain) a good approximation cannot be achieved for almost any uniformly randomly chosen function on a given domain. The results are obtained by combining probabilistic Chernoff-Hoeffding bounds with estimates of the sizes of sets of functions exactly computable by shallow networks with increasing numbers of units.
    Language English
    Publication date 2017-07
    Country of publication United States
    Document type Journal Article
    ZDB-ID 740542-x
    ISSN 1879-2782 ; 0893-6080
    ISSN (online) 1879-2782
    ISSN 0893-6080
    DOI 10.1016/j.neunet.2017.04.003
    Data source MEDical Literature Analysis and Retrieval System OnLINE

  4. Book ; Online: An efficient combined local and global search strategy for optimization of parallel kinematic mechanisms with joint limits and collision constraints

    Durgesh, Haribhau / Michel, Guillaume / Kumar, Shivesh / Sanguineti, Marcello / Chablat, Damien

    2022  

    Abstract The optimization of parallel kinematic manipulators (PKM) involves several constraints that are difficult to formalize, making the optimal synthesis problem highly challenging. The presence of passive joint limits as well as singularities and self-collisions leads to a complicated relation between the input and output parameters. In this article, a novel optimization methodology is proposed that combines a local search, the Nelder-Mead algorithm, with global search methodologies such as low-discrepancy distributions for faster and more efficient exploration of the optimization space. The effect of the dimension of the optimization problem and of the different constraints is discussed to highlight the complexities of closed-loop kinematic chain optimization. The work also presents the approaches used to handle constraints for passive joint boundaries as well as singularities and to avoid internal collisions in such mechanisms. The proposed algorithm can also optimize the length of the prismatic actuators, and the constraints can be added in a modular fashion, allowing one to understand the impact of a given criterion on the final result. The presented approach is applied to optimize two PKMs with different degrees of freedom.
    Keywords Computer Science - Robotics
    Subject/section (code) 629
    Publication date 2022-02-24
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
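
    A minimal Python sketch of the combined strategy the abstract describes: a low-discrepancy (Sobol) sample of start points for global exploration, each refined locally by Nelder-Mead. The objective, bounds, and penalty term below are hypothetical stand-ins for the PKM design cost with joint-limit/collision constraints, not the authors' implementation.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import qmc

        def design_cost(x):
            # Hypothetical smooth cost plus a penalty term standing in for
            # joint-limit / self-collision constraints of the real PKM problem.
            base = np.sum((x - 0.3) ** 2)
            penalty = 1e3 * np.sum(np.maximum(0.0, np.abs(x) - 0.8) ** 2)
            return base + penalty

        dim = 4
        lower, upper = -np.ones(dim), np.ones(dim)

        # Global exploration: low-discrepancy Sobol sample of candidate start points.
        sampler = qmc.Sobol(d=dim, scramble=True, seed=0)
        starts = qmc.scale(sampler.random_base2(m=5), lower, upper)  # 32 start points

        # Local refinement: run Nelder-Mead from each candidate, keep the best result.
        best = min(
            (minimize(design_cost, x0, method="Nelder-Mead", bounds=list(zip(lower, upper)))
             for x0 in starts),
            key=lambda res: res.fun,
        )
        print(best.x, best.fun)

    Adding further constraints as extra penalty terms mirrors the modular way in which, per the abstract, criteria can be added to the optimization.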

  5. Article ; Online: LQG Online Learning.

    Gnecco, Giorgio / Bemporad, Alberto / Gori, Marco / Sanguineti, Marcello

    Neural computation

    2017  Volume 29, Issue 8, Page(s) 2203–2291

    Abstract Optimal control theory and machine learning techniques are combined to formulate and solve in closed form an optimal control formulation of online learning from supervised examples with regularization of the updates. The connections with the classical linear quadratic gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation as it involves random matrices, are investigated. The obtained optimal solutions are compared with the Kalman filter estimate of the parameter vector to be learned. It is shown that the proposed algorithm is less sensitive to outliers with respect to the Kalman estimate (thanks to the presence of the regularization term), thus providing smoother estimates with respect to time. The basic formulation of the proposed online learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Various extensions are investigated, including the infinite learning horizon and, via the so-called kernel trick, the case of nonlinear models.
    Language English
    Publication date 2017-05-31
    Country of publication United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 1025692-1
    ISSN 1530-888X ; 0899-7667
    ISSN (online) 1530-888X
    ISSN 0899-7667
    DOI 10.1162/NECO_a_00976
    Data source MEDical Literature Analysis and Retrieval System OnLINE
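
    For orientation, the Kalman filter estimate that the abstract compares against follows the standard discrete-time recursions (textbook form, notation not taken from the paper) for x_{k+1} = A x_k + w_k, y_k = C x_k + v_k with noise covariances Q and R:

        \hat{x}_{k\mid k-1} = A\,\hat{x}_{k-1\mid k-1}, \qquad P_{k\mid k-1} = A P_{k-1\mid k-1} A^{\top} + Q,

        K_k = P_{k\mid k-1} C^{\top}\bigl(C P_{k\mid k-1} C^{\top} + R\bigr)^{-1},

        \hat{x}_{k\mid k} = \hat{x}_{k\mid k-1} + K_k\bigl(y_k - C\,\hat{x}_{k\mid k-1}\bigr), \qquad P_{k\mid k} = (I - K_k C)\,P_{k\mid k-1}.

    According to the abstract, adding a regularization term to the updates makes the proposed learner less sensitive to outliers than this recursion and yields smoother estimates over time.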

  6. Book ; Online: Towards automatic detection and classification of orca (Orcinus orca) calls using cross-correlation methods

    Palmero, Stefano / Guidi, Carlo / Kulikovskiy, Vladimir / Sanguineti, Matteo / Manghi, Michele / Sommer, Matteo / Pesce, Gaia

    2021  

    Abstract Orca (Orcinus orca) is known for complex vocalisation. Their social structure consists of pods and clans sharing unique dialects due to geographic isolation. Sound type repertoires are fundamental for monitoring orca populations and are typically created visually and aurally. An orca pod occurring in the Ligurian Sea (Pelagos Sanctuary) in December 2019 provided a unique occasion for long-term recordings. The numerous data collected with the bottom recorder were analysed with a traditional human-driven inspection to create a repertoire of this pod and to compare it to catalogues from different orca populations (Icelandic and Antarctic) investigating its origins. Automatic signal detection and cross-correlation methods (R package warbleR) were used for the first time in orca studies. We found the Pearson cross-correlation method to be efficient for most pairwise calculations (> 85%) but with false positives. One sound type from our repertoire presented a high positive match (range 0.62-0.67) with one from the Icelandic catalogue, which was confirmed visually and aurally. Our first attempt to automatically classify orca sound types presented limitations due to background noise and sound complexity of orca communication. We show cross-correlation methods can be a powerful tool for sound type classification in combination with conventional methods.

    Comment: 26 pages, 6 figures
    Keywords Quantitative Biology - Quantitative Methods ; Computer Science - Sound ; Electrical Engineering and Systems Science - Audio and Speech Processing
    Subject/section (code) 600
    Publication date 2021-10-29
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
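
    The pipeline in the abstract uses the R package warbleR; a rough Python analogue of spectrogram cross-correlation (illustrative only, not the warbleR implementation, with synthetic signals standing in for real recordings) could look like this:

        import numpy as np
        from scipy.signal import spectrogram

        def spectro(x, fs, nperseg=512):
            # Log-power spectrogram; a small constant avoids log(0).
            _, _, sxx = spectrogram(x, fs=fs, nperseg=nperseg)
            return np.log(sxx + 1e-12)

        def peak_xcorr(template, recording, fs):
            # Slide the template call's spectrogram along the recording's
            # spectrogram and keep the peak Pearson correlation as the score.
            st = spectro(template, fs)
            sr = spectro(recording, fs)
            nt = st.shape[1]
            scores = [
                np.corrcoef(st.ravel(), sr[:, lag:lag + nt].ravel())[0, 1]
                for lag in range(sr.shape[1] - nt + 1)
            ]
            return max(scores)

        # Toy usage: a synthetic tone embedded in noise stands in for an orca call.
        fs = 48_000
        t = np.arange(0, 0.5, 1 / fs)
        call = np.sin(2 * np.pi * 3000 * t)
        recording = np.random.default_rng(0).normal(size=2 * fs)
        recording[fs:fs + call.size] += 5 * call
        print(peak_xcorr(call, recording, fs))

    Thresholding such scores gives candidate matches, which, as the abstract notes, still need visual and aural confirmation because of background noise and false positives.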

  7. Article ; Online: Learning With Mixed Hard/Soft Pointwise Constraints.

    Gnecco, Giorgio / Gori, Marco / Melacci, Stefano / Sanguineti, Marcello

    IEEE transactions on neural networks and learning systems

    2015  Volume 26, Issue 9, Page(s) 2019–2032

    Abstract A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
    Language English
    Publication date 2015-09
    Country of publication United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ISSN 2162-2388
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2014.2361866
    Data source MEDical Literature Analysis and Retrieval System OnLINE

  8. Article ; Online: Foundations of support constraint machines.

    Gnecco, Giorgio / Gori, Marco / Melacci, Stefano / Sanguineti, Marcello

    Neural computation

    2015  Volume 27, Issue 2, Page(s) 388–480

    Abstract The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
    Language English
    Publication date 2015-02
    Country of publication United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 1025692-1
    ISSN 1530-888X ; 0899-7667
    ISSN (online) 1530-888X
    ISSN 0899-7667
    DOI 10.1162/NECO_a_00686
    Data source MEDical Literature Analysis and Retrieval System OnLINE
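
    The representer theorems mentioned in the abstract generalize the classical kernel-machine result, whose standard form (not the paper's constraint-based extension) states that the regularized empirical risk over an RKHS \mathcal{H}_K,

        \min_{f \in \mathcal{H}_K} \; \sum_{i=1}^{m} L\bigl(f(x_i), y_i\bigr) + \lambda \lVert f \rVert_{\mathcal{H}_K}^{2},

    admits a minimizer of the form f^{*}(x) = \sum_{i=1}^{m} c_i K(x, x_i). Per the abstract, the paper replaces the supervised examples by more general constraints, so the expansion runs over support constraints rather than support vectors.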

  9. Article: Considerations on 131I-metaiodobenzylguanidine therapy of six children with neuroblastoma.

    Sanguineti, M

    Medical and pediatric oncology

    1987  Volume 15, Issue 4, Page(s) 212–215

    Abstract Six children affected by neuroblastoma at stages III and IV were treated with high-specific-activity 131I-metaiodobenzylguanidine (MIBG). After 131I-MIBG treatment three patients died at 12, 10, and 12 weeks, respectively; the other three were still living at 21, 16, and 24 weeks, respectively. Although the assumptions for this therapy were propitious, the results obtained do not correspond to those expected. It is supposed that large tumor volume and previous chemotherapy and/or radiotherapy may impair the effectiveness of 131I-MIBG therapy. Consequently, 131I-MIBG therapy is recommended even if the spread of disease is not proved, but only when the tumor is small.
    MeSH term(s) 3-Iodobenzylguanidine ; Abdominal Neoplasms/radiotherapy ; Child ; Child, Preschool ; Female ; Humans ; Iodine Radioisotopes/therapeutic use ; Iodobenzenes/therapeutic use ; Male ; Neuroblastoma/radiotherapy
    Chemical substances Iodine Radioisotopes ; Iodobenzenes ; 3-Iodobenzylguanidine (35MRW7B4AD)
    Language English
    Publication date 1987
    Country of publication United States
    Document type Journal Article
    ZDB-ID 191189-2
    ISSN 1096-911X ; 0098-1532
    ISSN (online) 1096-911X
    ISSN 0098-1532
    DOI 10.1002/mpo.2950150415
    Data source MEDical Literature Analysis and Retrieval System OnLINE

  10. Article ; Online: Regularization techniques and suboptimal solutions to optimization problems in learning from data.

    Gnecco, Giorgio / Sanguineti, Marcello

    Neural computation

    2010  Volume 22, Issue 3, Page(s) 793–829

    Abstract Various regularization techniques are investigated in supervised learning from data. Theoretical features of the associated optimization problems are studied, and sparse suboptimal solutions are searched for. Rates of approximate optimization are estimated for sequences of suboptimal solutions formed by linear combinations of n-tuples of computational units, and statistical learning bounds are derived. As hypothesis sets, reproducing kernel Hilbert spaces and their subsets are considered.
    MeSH term(s) Algorithms ; Artificial Intelligence ; Linear Models ; Time Factors
    Language English
    Publication date 2010-03
    Country of publication United States
    Document type Comparative Study ; Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 1025692-1
    ISSN 1530-888X ; 0899-7667
    ISSN (online) 1530-888X
    ISSN 0899-7667
    DOI 10.1162/neco.2009.05-08-786
    Data source MEDical Literature Analysis and Retrieval System OnLINE
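
    As one concrete instance of the regularization techniques the abstract surveys, the sketch below implements plain kernel ridge (Tikhonov) regularization in an RKHS with numpy; it is illustrative only and is not the paper's construction of sparse suboptimal solutions.

        import numpy as np

        def rbf_kernel(a, b, gamma=1.0):
            # Gaussian (RBF) kernel matrix between the rows of a and b.
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        rng = np.random.default_rng(0)
        x = rng.uniform(-3, 3, size=(50, 1))
        y = np.sin(x[:, 0]) + 0.1 * rng.normal(size=50)   # noisy samples of a target function

        lam = 1e-2                                         # regularization parameter
        k = rbf_kernel(x, x)
        # Representer coefficients: solve (K + lam * n * I) alpha = y
        alpha = np.linalg.solve(k + lam * len(x) * np.eye(len(x)), y)

        x_test = np.linspace(-3, 3, 5).reshape(-1, 1)
        y_pred = rbf_kernel(x_test, x) @ alpha             # f(x*) = sum_i alpha_i K(x*, x_i)
        print(y_pred)

    Per the abstract, the paper's focus is on how quickly sparse linear combinations of n computational units approach the regularized optimum, rather than on solving the full system as done here.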
