LIVIVO - The Search Portal for Life Sciences


Search results

Results 1-10 of 138

  1. Article ; Online: Publisher Correction: Logistic and cognitive-emotional barriers experienced by first responders when alarmed to get dispatched to out-of-hospital cardiac arrest events: a region-wide survey.

    Gamberini, Lorenzo / Del Giudice, Donatella / Tartaglione, Marco / Allegri, Davide / Coniglio, Carlo / Pastori, Antonio / Gordini, Giovanni / Semeraro, Federico

    Internal and emergency medicine

    2024  Volume 19, Issue 2, Page(s) 597

    Language English
    Publishing date 2024-03-21
    Publishing country Italy
    Document type Published Erratum
    ZDB-ID 2454173-4
    ISSN (online) 1970-9366
    ISSN 1828-0447
    DOI 10.1007/s11739-024-03548-0
    Database MEDical Literature Analysis and Retrieval System OnLINE


  2. Book ; Online: Learn how to Prune Pixels for Multi-view Neural Image-based Synthesis

    Milovanović, Marta / Tartaglione, Enzo / Cagnazzo, Marco / Henry, Félix

    2023  

    Abstract Image-based rendering techniques stand at the core of an immersive experience for the user, as they generate novel views given a set of multiple input images. Since they have shown good performance in terms of objective and subjective quality, the research community devotes great effort to their improvement. However, the large volume of data necessary to render at the receiver's side hinders applications in limited bandwidth environments or prevents their employment in real-time applications. We present LeHoPP, a method for input pixel pruning, where we examine the importance of each input pixel concerning the rendered view, and we avoid the use of irrelevant pixels. Even without retraining the image-based rendering network, our approach shows a good trade-off between synthesis quality and pixel rate. When tested in the general neural rendering framework, compared to other pruning baselines, LeHoPP gains between $0.9$ dB and $3.6$ dB on average.
    Keywords Computer Science - Multimedia ; Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 006 ; 004
    Publishing date 2023-05-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  3. Article ; Online: LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks.

    Tartaglione, Enzo / Bragagnolo, Andrea / Fiandrotti, Attilio / Grangetto, Marco

    Neural networks : the official journal of the International Neural Network Society

    2021  Volume 146, Page(s) 230–237

    Abstract LOBSTER (LOss-Based SensiTivity rEgulaRization) is a method for training neural networks having a sparse topology. Let the sensitivity of a network parameter be the variation of the loss function with respect to the variation of the parameter. Parameters with low sensitivity, i.e. having little impact on the loss when perturbed, are shrunk and then pruned to sparsify the network. Our method allows training a network from scratch, i.e. without preliminary learning or rewinding. Experiments on multiple architectures and datasets show competitive compression ratios with minimal computational overhead.
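    The shrink-then-prune loop the abstract describes can be sketched in a few lines of numpy. This is a minimal toy on a linear regression model, not the authors' implementation: the sensitivity estimate (|dL/dw|), the below-average shrinkage rule, and the threshold are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 10))
    w_true = np.zeros(10)
    w_true[:5] = rng.normal(size=5)           # only 5 of 10 features matter
    y = X @ w_true
    w = 0.1 * rng.normal(size=10)

    lr, lam, threshold = 0.05, 0.01, 1e-2
    for _ in range(200):
        grad = 2 * X.T @ (X @ w - y) / len(X)   # dL/dw for the MSE loss
        sensitivity = np.abs(grad)              # how much the loss reacts to each weight
        # Weights with below-average sensitivity receive extra shrinkage.
        shrink = lam * w * (sensitivity < sensitivity.mean())
        w = w - lr * grad - shrink

    mask = np.abs(w) > threshold                # prune weights driven near zero
    sparse_w = w * mask
    ```

    The irrelevant half of the weights is driven toward zero by the extra decay and removed by the final threshold, while the informative weights survive pruning.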
    MeSH term(s) Data Compression ; Neural Networks, Computer
    Language English
    Publishing date 2021-12-02
    Publishing country United States
    Document type Journal Article
    ZDB-ID 740542-x
    ISSN (online) 1879-2782
    ISSN 0893-6080
    DOI 10.1016/j.neunet.2021.11.029
    Database MEDical Literature Analysis and Retrieval System OnLINE


  4. Article ; Online: SeReNe: Sensitivity-Based Regularization of Neurons for Structured Sparsity in Neural Networks.

    Tartaglione, Enzo / Bragagnolo, Andrea / Odierna, Francesco / Fiandrotti, Attilio / Grangetto, Marco

    IEEE transactions on neural networks and learning systems

    2022  Volume 33, Issue 12, Page(s) 7237–7250

    Abstract Deep neural networks include millions of learnable parameters, making their deployment over resource-constrained devices problematic. Sensitivity-based regularization of neurons (SeReNe) is a method for learning sparse topologies with a structure, exploiting neural sensitivity as a regularizer. We define the sensitivity of a neuron as the variation of the network output with respect to the variation of the activity of the neuron. The lower the sensitivity of a neuron, the less the network output is perturbed if the neuron output changes. By including the neuron sensitivity in the cost function as a regularization term, we are able to prune neurons with low sensitivity. As entire neurons are pruned rather than single parameters, practical network footprint reduction becomes possible. Our experimental results on multiple network architectures and datasets yield competitive compression ratios with respect to state-of-the-art references.
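    The structured, neuron-level pruning the abstract describes can be illustrated with a toy two-layer network. This is a hedged sketch, not the SeReNe implementation: here the sensitivity of a hidden neuron (how much the output varies with its activity) is approximated by the norm of its outgoing weights, which is exact only for a linear read-out.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(16, 8))   # input -> hidden
    W2 = rng.normal(size=(8, 4))    # hidden -> output (linear read-out)

    # Per-neuron sensitivity: output variation w.r.t. the neuron's activity,
    # captured here by the norm of that neuron's outgoing weights.
    sensitivity = np.linalg.norm(W2, axis=1)
    keep = sensitivity >= np.median(sensitivity)   # drop the least sensitive half

    # Structured pruning: remove whole neurons (columns of W1, rows of W2),
    # which directly shrinks the network footprint.
    W1_pruned = W1[:, keep]
    W2_pruned = W2[keep, :]
    ```

    Because entire neurons disappear, the pruned matrices are genuinely smaller dense arrays, unlike unstructured per-parameter masks.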
    MeSH term(s) Neural Networks, Computer ; Data Compression/methods ; Algorithms ; Neurons
    Language English
    Publishing date 2022-11-30
    Publishing country United States
    Document type Journal Article
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2021.3084527
    Database MEDical Literature Analysis and Retrieval System OnLINE


  5. Article ; Online: Reply to: Factors influencing prehospital physicians' decision to initiate advanced life support for asystolic out-of-hospital cardiac arrest patients: The need to define experience.

    Gamberini, Lorenzo / Scquizzato, Tommaso / Mazzoli, Carlo Alberto / Tartaglione, Marco / Semeraro, Federico

    Resuscitation

    2022  Volume 179, Page(s) 245–247

    MeSH term(s) Advanced Cardiac Life Support ; Cardiopulmonary Resuscitation ; Emergency Medical Services ; Humans ; Out-of-Hospital Cardiac Arrest/therapy ; Physicians
    Language English
    Publishing date 2022-09-29
    Publishing country Ireland
    Document type Letter ; Comment
    ZDB-ID 189901-6
    ISSN (online) 1873-1570
    ISSN 0300-9572
    DOI 10.1016/j.resuscitation.2022.07.027
    Database MEDical Literature Analysis and Retrieval System OnLINE


  6. Book ; Online: REM: Routing Entropy Minimization for Capsule Networks

    Renzulli, Riccardo / Tartaglione, Enzo / Grangetto, Marco

    2022  

    Abstract Capsule Networks' ambition is to build an explainable and biologically-inspired neural network model. One of their main innovations relies on the routing mechanism, which extracts a parse tree: its main purpose is to explicitly build relationships between capsules. However, their true potential in terms of explainability has not surfaced yet: these relationships are extremely heterogeneous and difficult to understand. This paper proposes REM, a technique which minimizes the entropy of the parse tree-like structure, improving its explainability. We accomplish this by driving the model parameters distribution towards low-entropy configurations, using a pruning mechanism as a proxy. We also generate static parse trees with no performance loss, showing that, with REM, Capsule Networks build stronger relationships between capsules.
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Artificial Intelligence
    Publishing date 2022-04-04
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  7. Book ; Online: Disentangling private classes through regularization

    Tartaglione, Enzo / Gennari, Francesca / Grangetto, Marco

    2022  

    Abstract Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks. However, little attention has been devoted to the connected legal aspects. In 2016, the European Union approved the General Data Protection Regulation, which entered into force in 2018. Its main rationale was to protect the privacy and data protection of its citizens by regulating the operation of the so-called "Data Economy". As data is the fuel of modern Artificial Intelligence, it is argued that the GDPR can be partly applicable to a series of algorithmic decision making tasks before a more structured AI Regulation enters into force. In the meantime, AI should not allow undesired information leakage deviating from the purpose for which it is created. In this work we propose DisP, an approach for deep learning models that disentangles the information related to the classes we desire to keep private from the data processed by the AI. In particular, DisP is a regularization strategy de-correlating the features belonging to the same private class at training time, hiding the information of private-class membership. Our experiments on state-of-the-art deep learning models show the effectiveness of DisP, minimizing the risk of extraction for the classes we desire to keep private.
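    The de-correlation regularizer the abstract describes can be sketched as a penalty on feature similarity within each private class. This is an illustrative assumption about its form (mean pairwise cosine similarity), not the paper's exact loss; `disp_penalty` and its inputs are hypothetical names.

    ```python
    import numpy as np

    def disp_penalty(features, private_labels):
        """Mean pairwise cosine similarity between features sharing a private class."""
        f = features / np.linalg.norm(features, axis=1, keepdims=True)
        sim = f @ f.T                                    # cosine similarity matrix
        same = private_labels[:, None] == private_labels[None, :]
        np.fill_diagonal(same, False)                    # ignore self-similarity
        return sim[same].mean() if same.any() else 0.0

    rng = np.random.default_rng(3)
    feats = rng.normal(size=(8, 16))                     # a batch of feature vectors
    labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])          # private-class membership
    penalty = disp_penalty(feats, labels)                # add to the training loss
    ```

    Minimizing this term during training pushes same-private-class features apart, so private-class membership becomes harder to extract from the representation.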
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Cryptography and Security
    Subject code 006
    Publishing date 2022-07-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  8. Book ; Online: To update or not to update? Neurons at equilibrium in deep models

    Bragagnolo, Andrea / Tartaglione, Enzo / Grangetto, Marco

    2022  

    Abstract Recent advances in deep learning optimization showed that, with some a-posteriori information on fully-trained models, it is possible to match the same performance by simply training a subset of their parameters. Such a discovery has a broad impact from theory to applications, driving the research towards methods to identify the minimum subset of parameters to train without look-ahead information exploitation. However, the methods proposed do not match the state-of-the-art performance, and rely on unstructured sparsely connected models. In this work we shift our focus from the single parameters to the behavior of the whole neuron, exploiting the concept of neuronal equilibrium (NEq). When a neuron is in a configuration at equilibrium (meaning that it has learned a specific input-output relationship), we can halt its update; on the contrary, when a neuron is at non-equilibrium, we let its state evolve towards an equilibrium state, updating its parameters. The proposed approach has been tested on different state-of-the-art learning strategies and tasks, validating NEq and observing that the neuronal equilibrium depends on the specific learning setup.
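    The halt-at-equilibrium idea can be illustrated with a toy layer: a neuron whose responses stop moving between epochs is declared at equilibrium and frozen. This is a sketch of the control flow only; the random "gradient" is a stand-in assumption for a real backpropagated signal, and the equilibrium test is a simplified drift measure.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(32, 5))
    W = rng.normal(size=(5, 3))                  # one column per neuron
    frozen = np.zeros(3, dtype=bool)
    prev_act = np.tanh(X @ W)

    lr, eps = 0.1, 1e-3
    for epoch in range(50):
        grad = 0.01 * rng.normal(size=W.shape)   # stand-in for a real gradient
        W[:, ~frozen] -= lr * grad[:, ~frozen]   # frozen neurons are not updated
        act = np.tanh(X @ W)
        velocity = np.abs(act - prev_act).mean(axis=0)   # per-neuron drift
        frozen |= velocity < eps                 # at equilibrium: halt its updates
        prev_act = act
    ```

    Once frozen, a neuron's drift is zero by construction, so it stays frozen; in a full training setup this skips its gradient computation entirely.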
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence
    Subject code 006
    Publishing date 2022-07-19
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  9. Book ; Online: Unsupervised Learning of Unbiased Visual Representations

    Barbano, Carlo Alberto / Tartaglione, Enzo / Grangetto, Marco

    2022  

    Abstract Deep neural networks are known for their inability to learn robust representations when biases exist in the dataset. This results in a poor generalization to unbiased datasets, as the predictions strongly rely on peripheral and confounding factors, which are erroneously learned by the network. Many existing works deal with this issue by either employing an explicit supervision on the bias attributes, or assuming prior knowledge about the bias. In this work we study this problem in a more difficult scenario, in which no explicit annotation about the bias is available, and without any prior knowledge about its nature. We propose a fully unsupervised debiasing framework, consisting of three steps: first, we exploit the natural preference for learning malignant biases, obtaining a bias-capturing model; then, we perform a pseudo-labelling step to obtain bias labels; finally we employ state-of-the-art supervised debiasing techniques to obtain an unbiased model. We also propose a theoretical framework to assess the biasness of a model, and provide a detailed analysis on how biases affect the training of neural networks. We perform experiments on synthetic and real-world datasets, showing that our method achieves state-of-the-art performance in a variety of settings, sometimes even higher than fully supervised debiasing approaches.
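    The three-step pipeline in the abstract (capture the bias, pseudo-label it, then debias with supervision) can be sketched as a skeleton. The models here are deliberately trivial stand-ins chosen for illustration: a single feature plays the "malignant bias", thresholding plays pseudo-labelling, and group reweighting stands in for a supervised debiasing technique.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(100, 4))
    y = (X[:, 0] > 0).astype(int)

    # Step 1: a bias-capturing model; here a shallow rule that latches
    # onto a single spurious feature stands in for it.
    bias_scores = X[:, 1]

    # Step 2: pseudo-label the bias by thresholding the captured signal.
    pseudo_bias = (bias_scores > np.median(bias_scores)).astype(int)

    # Step 3: supervised debiasing; here approximated by reweighting samples
    # so both pseudo-bias groups contribute equally to the training loss.
    weights = np.where(pseudo_bias == 1,
                       0.5 / max(pseudo_bias.sum(), 1),
                       0.5 / max((1 - pseudo_bias).sum(), 1))
    ```

    The point is the control flow, not the components: any bias-capturing model, pseudo-labeller, or supervised debiasing method can be slotted into the three steps.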

    Comment: 14 pages, 8 figures
    Keywords Computer Science - Machine Learning ; Computer Science - Computer Vision and Pattern Recognition ; 68T07
    Subject code 006
    Publishing date 2022-04-26
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  10. Book ; Online: DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?

    Quétu, Victor / Tartaglione, Enzo

    2023  

    Abstract Neoteric works have shown that modern deep learning models can exhibit a sparse double descent phenomenon. Indeed, as the sparsity of the model increases, the test performance first worsens since the model is overfitting the training data; then, the overfitting reduces, leading to an improvement in performance, and finally, the model begins to forget critical information, resulting in underfitting. Such a behavior prevents using traditional early stop criteria. In this work, we have three key contributions. First, we propose a learning framework that avoids such a phenomenon and improves generalization. Second, we introduce an entropy measure providing more insights into the insurgence of this phenomenon and enabling the use of traditional stop criteria. Third, we provide a comprehensive quantitative analysis of contingent factors such as re-initialization methods, model width and depth, and dataset noise. The contributions are supported by empirical evidence in typical setups. Our code is available at https://github.com/VGCQ/DSD2.
    Keywords Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-03-02
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

