LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 2 of 2

  1. Book ; Online: Enhancing Once-For-All

    Sarti, Simone / Lomurno, Eugenio / Falanti, Andrea / Matteucci, Matteo

    A Study on Parallel Blocks, Skip Connections and Early Exits

    2023  

    Abstract: The use of Neural Architecture Search (NAS) techniques to automate the design of neural networks has become increasingly popular in recent years. The proliferation of devices with different hardware characteristics using such neural networks, as well as the need to reduce the power consumption for their search, has led to the realisation of Once-For-All (OFA), an eco-friendly algorithm characterised by the ability to generate easily adaptable models through a single learning process. In order to improve this paradigm and develop high-performance yet eco-friendly NAS techniques, this paper presents OFAv2, the extension of OFA aimed at improving its performance while maintaining the same ecological advantage. The algorithm is improved from an architectural point of view by including early exits, parallel blocks and dense skip connections. The training process is extended by two new phases called Elastic Level and Elastic Height. A new Knowledge Distillation technique is presented to handle multi-output networks, and finally a new strategy for dynamic teacher network selection is proposed. These modifications allow OFAv2 to improve its accuracy on the Tiny ImageNet dataset by up to 12.07% compared to the original version of OFA, while maintaining the algorithm's flexibility and advantages.
    Keywords: Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Neural and Evolutionary Computing
    Subject/Category (Code): 006
    Publication date: 2023-02-03
    Country of publication: US
    Document type: Book ; Online
    Data source: BASE - Bielefeld Academic Search Engine (life sciences selection)

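As a concrete illustration of two ideas named in the abstract above (classifier early exits and Knowledge Distillation for multi-output networks), here is a minimal, hypothetical PyTorch sketch. It is not the authors' OFAv2 code: the backbone, layer sizes, temperature T and weighting alpha are all illustrative assumptions.

```python
# Hypothetical sketch, not the OFAv2 implementation: a tiny multi-exit
# network plus a distillation loss summed over its exits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiExitNet(nn.Module):
    """Two conv stages, each followed by its own classifier head ("exit")."""
    def __init__(self, num_classes: int = 200):  # 200 = Tiny ImageNet classes
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, num_classes)
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(4))
        self.exit2 = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        h1 = self.stage1(x)
        out1 = self.exit1(h1.flatten(1))   # early exit
        h2 = self.stage2(h1)
        out2 = self.exit2(h2.flatten(1))   # final exit
        return [out1, out2]                # multi-output network

def multi_exit_kd_loss(student_outs, teacher_logits, targets, T=4.0, alpha=0.5):
    """Average per-exit loss: cross-entropy on the labels plus a KL
    distillation term against a teacher, softened by temperature T."""
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    loss = 0.0
    for logits in student_outs:
        ce = F.cross_entropy(logits, targets)
        kd = F.kl_div(F.log_softmax(logits / T, dim=1), soft_teacher,
                      reduction="batchmean") * (T * T)
        loss = loss + alpha * ce + (1 - alpha) * kd
    return loss / len(student_outs)

# Toy usage with random tensors standing in for Tiny ImageNet batches.
net = TinyMultiExitNet()
x = torch.randn(4, 3, 64, 64)
targets = torch.randint(0, 200, (4,))
teacher_logits = torch.randn(4, 200)       # stand-in teacher predictions
print(multi_exit_kd_loss(net(x), teacher_logits, targets).item())
```

Averaging the per-exit losses is one plausible way to supervise every exit against a single-output teacher; the paper's actual multi-output distillation and dynamic teacher selection strategy may differ.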

  2. Book ; Online: POPNASv3

    Falanti, Andrea / Lomurno, Eugenio / Ardagna, Danilo / Matteucci, Matteo

    A Pareto-Optimal Neural Architecture Search Solution for Image and Time Series Classification

    2022  

    Abstract: The automated machine learning (AutoML) field has become increasingly relevant in recent years. These algorithms can develop models without the need for expert knowledge, facilitating the application of machine learning techniques in industry. Neural Architecture Search (NAS) exploits deep learning techniques to autonomously produce neural network architectures whose results rival the state-of-the-art models hand-crafted by AI experts. However, this approach requires significant computational resources and hardware investments, making it less appealing for real-world applications. This article presents the third version of Pareto-Optimal Progressive Neural Architecture Search (POPNASv3), a new sequential model-based optimization NAS algorithm targeting different hardware environments and multiple classification tasks. Our method is able to find competitive architectures within large search spaces, while keeping a flexible structure and data processing pipeline to adapt to different tasks. The algorithm employs Pareto optimality to reduce the number of architectures sampled during the search, drastically improving the time efficiency without loss in accuracy. The experiments performed on image and time series classification datasets provide evidence that POPNASv3 can explore a large set of assorted operators and converge to optimal architectures suited for the type of data provided under different scenarios.
    Keywords: Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Neural and Evolutionary Computing
    Subject/Category (Code): 006
    Publication date: 2022-12-13
    Country of publication: US
    Document type: Book ; Online
    Data source: BASE - Bielefeld Academic Search Engine (life sciences selection)

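To make the phrase "employs Pareto optimality to reduce the number of architectures sampled" concrete, here is a small, hypothetical plain-Python sketch, not the POPNASv3 implementation: candidates scored on accuracy and latency are filtered so that only non-dominated ones survive to the next search step. All candidate names and numbers are made up for illustration.

```python
# Hypothetical sketch, not the POPNASv3 code: Pareto-front filtering of
# candidate architectures scored on (accuracy, latency).
from typing import List, Tuple

Candidate = Tuple[str, float, float]  # (architecture id, accuracy, latency_ms)

def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep candidates not dominated by any other: a dominator is at least
    as accurate AND at least as fast, and strictly better on one axis."""
    front = []
    for name, acc, lat in candidates:
        dominated = any(
            (a2 >= acc and l2 <= lat) and (a2 > acc or l2 < lat)
            for _, a2, l2 in candidates
        )
        if not dominated:
            front.append((name, acc, lat))
    return front

# Toy pool: "cell-B" is dominated by "cell-A" (less accurate and slower).
pool = [("cell-A", 0.71, 12.0), ("cell-B", 0.69, 15.0),
        ("cell-C", 0.74, 30.0), ("cell-D", 0.66, 8.0)]
print(pareto_front(pool))  # cell-A, cell-C and cell-D remain on the front
```

Dominance here means at least as good on both objectives and strictly better on one; keeping only the front shrinks the candidate pool at each step without discarding any trade-off-optimal architecture.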
