LIVIVO - The Search Portal for Life Sciences


Search results

Hits 1 - 4 of 4 in total

  1. Book ; Online: Improving Relation Extraction by Leveraging Knowledge Graph Link Prediction

    Stoica, George / Platanios, Emmanouil Antonios / Póczos, Barnabás

    2020  

    Abstract: Relation extraction (RE) aims to predict a relation between a subject and an object in a sentence, while knowledge graph link prediction (KGLP) aims to predict a set of objects, O, given a subject and a relation from a knowledge graph. These two problems are closely related as their respective objectives are intertwined: given a sentence containing a subject and an object o, a RE model predicts a relation that can then be used by a KGLP model together with the subject, to predict a set of objects O. Thus, we expect object o to be in set O. In this paper, we leverage this insight by proposing a multi-task learning approach that improves the performance of RE models by jointly training on RE and KGLP tasks. We illustrate the generality of our approach by applying it on several existing RE models and empirically demonstrate how it helps them achieve consistent performance gains.
    Keywords: Computer Science - Computation and Language
    Publication date: 2020-12-08
    Country of publication: US
    Document type: Book ; Online
    Data source: BASE - Bielefeld Academic Search Engine (Life Sciences Selection)
    (A minimal code sketch of the joint-training idea is shown below.)

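To make the joint-training idea above concrete, here is a minimal PyTorch sketch, not the authors' implementation: the bag-of-words sentence encoder, the DistMult-style link-prediction scorer, and all names (JointREKGLP, joint_step) are simplifying assumptions. The point it illustrates is that the two tasks share relation embeddings and the same gold relation supervises both losses.

```python
import torch
import torch.nn as nn

class JointREKGLP(nn.Module):
    def __init__(self, vocab_size, num_entities, num_relations, dim=128):
        super().__init__()
        self.encode = nn.EmbeddingBag(vocab_size, dim)   # toy sentence encoder
        self.entity = nn.Embedding(num_entities, dim)    # KG entity embeddings
        self.rel = nn.Embedding(num_relations, dim)      # shared relation embeddings

    def re_logits(self, tokens):
        # RE: score the sentence encoding against every relation embedding
        return self.encode(tokens) @ self.rel.weight.t()

    def kglp_logits(self, subj, rel):
        # KGLP (DistMult-style): score(s, r, o) = <e_s * e_r, e_o> for every object o
        return (self.entity(subj) * self.rel(rel)) @ self.entity.weight.t()

def joint_step(model, tokens, subj, gold_rel, gold_obj, alpha=1.0):
    ce = nn.functional.cross_entropy
    # the same gold relation supervises both tasks, coupling their objectives
    return ce(model.re_logits(tokens), gold_rel) + \
        alpha * ce(model.kglp_logits(subj, gold_rel), gold_obj)

model = JointREKGLP(vocab_size=10_000, num_entities=500, num_relations=20)
tokens = torch.randint(0, 10_000, (4, 12))    # a toy batch of 4 sentences
subj = torch.randint(0, 500, (4,))
gold_rel = torch.randint(0, 20, (4,))
gold_obj = torch.randint(0, 500, (4,))
joint_step(model, tokens, subj, gold_rel, gold_obj).backward()
```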

  2. Book ; Online: Learning from Imperfect Annotations

    Platanios, Emmanouil Antonios / Al-Shedivat, Maruan / Xing, Eric / Mitchell, Tom

    2020  

    Abstract: Many machine learning systems today are trained on large amounts of human-annotated data. Data annotation tasks that require a high level of competency make data acquisition expensive, while the resulting labels are often subjective, inconsistent, and may contain a variety of human biases. To improve the data quality, practitioners often need to collect multiple annotations per example and aggregate them before training models. Such a multi-stage approach results in redundant annotations and may often produce imperfect "ground truth" that may limit the potential of training accurate machine learning models. We propose a new end-to-end framework that enables us to: (i) merge the aggregation step with model training, thus allowing deep learning systems to learn to predict ground truth estimates directly from the available data, and (ii) model difficulties of examples and learn representations of the annotators that allow us to estimate and take into account their competencies. Our approach is general and has many applications, including training more accurate models on crowdsourced data, ensemble learning, as well as classifier accuracy estimation from unlabeled data. We conduct an extensive experimental evaluation of our method on 5 crowdsourcing datasets of varied difficulty and show accuracy gains of up to 25% over the current state-of-the-art approaches for aggregating annotations, as well as significant reductions in the required annotation redundancy.
    Keywords: Computer Science - Machine Learning ; Statistics - Machine Learning
    Subject / category (code): 006
    Publication date: 2020-04-07
    Country of publication: US
    Document type: Book ; Online
    Data source: BASE - Bielefeld Academic Search Engine (Life Sciences Selection)
    (A simplified end-to-end training sketch is shown below.)

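As a rough illustration of training directly on raw, unaggregated labels, here is a simplified PyTorch sketch assuming per-annotator confusion matrices as the competence model (in the spirit of "crowd layer" approaches). The paper's actual model is richer, e.g. it also models example difficulty; all class and variable names here are hypothetical.

```python
import torch
import torch.nn as nn

class NoisyLabelModel(nn.Module):
    def __init__(self, in_dim, num_classes, num_annotators):
        super().__init__()
        self.clf = nn.Linear(in_dim, num_classes)        # latent ground-truth head
        # one confusion matrix per annotator, initialised near the identity
        init = torch.eye(num_classes).repeat(num_annotators, 1, 1)
        self.confusion = nn.Parameter(init + 0.01 * torch.randn_like(init))

    def forward(self, x, annotator):
        p_true = self.clf(x).softmax(-1)                 # P(true class | x)
        noise = self.confusion[annotator].softmax(-1)    # row-stochastic competence
        # marginalise the latent true class: P(observed label | x, annotator)
        return torch.einsum('bc,bcd->bd', p_true, noise)

model = NoisyLabelModel(in_dim=32, num_classes=5, num_annotators=50)
x = torch.randn(8, 32)                                   # toy features
annotator = torch.randint(0, 50, (8,))                   # who labelled each example
observed = torch.randint(0, 5, (8,))                     # their raw (noisy) labels
p_obs = model(x, annotator)
loss = -p_obs.gather(1, observed[:, None]).clamp_min(1e-9).log().mean()
loss.backward()                                          # no separate aggregation step
```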

  3. Book ; Online: HyperDynamics: Meta-Learning Object and Agent Dynamics with Hypernetworks

    Xian, Zhou / Lal, Shamit / Tung, Hsiao-Yu / Platanios, Emmanouil Antonios / Fragkiadaki, Katerina

    2021

    Abstract: We propose HyperDynamics, a dynamics meta-learning framework that conditions on an agent's interactions with the environment and optionally its visual observations, and generates the parameters of neural dynamics models based on inferred properties of the dynamical system. Physical and visual properties of the environment that are not part of the low-dimensional state yet affect its temporal dynamics are inferred from the interaction history and visual observations, and are implicitly captured in the generated parameters. We test HyperDynamics on a set of object pushing and locomotion tasks. It outperforms existing dynamics models in the literature that adapt to environment variations by learning dynamics over high dimensional visual observations, capturing the interactions of the agent in recurrent state representations, or using gradient-based meta-optimization. We also show our method matches the performance of an ensemble of separately trained experts, while also being able to generalize well to unseen environment variations at test time. We attribute its good performance to the multiplicative interactions between the inferred system properties -- captured in the generated parameters -- and the low-dimensional state representation of the dynamical system.
    Keywords: Computer Science - Robotics ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject / category (code): 612
    Publication date: 2021-03-17
    Country of publication: US
    Document type: Book ; Online
    Data source: BASE - Bielefeld Academic Search Engine (Life Sciences Selection)
    (A toy hypernetwork sketch is shown below.)

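Here is a toy PyTorch sketch of the hypernetwork idea under strong simplifying assumptions: a fixed-size history encoding stands in for the paper's interaction and vision encoders, and the generated dynamics model is a two-layer MLP. Names and shapes are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HyperDynamicsSketch(nn.Module):
    def __init__(self, hist_dim, state_dim, act_dim, hidden=64):
        super().__init__()
        self.in_dim, self.hidden, self.state_dim = state_dim + act_dim, hidden, state_dim
        n_params = (self.in_dim * hidden + hidden          # layer-1 weight and bias
                    + hidden * state_dim + state_dim)      # layer-2 weight and bias
        # the hypernetwork: history encoding -> flat dynamics-model parameter vector
        self.hyper = nn.Sequential(nn.Linear(hist_dim, 256), nn.ReLU(),
                                   nn.Linear(256, n_params))

    def forward(self, hist, state, action):
        theta = self.hyper(hist)                           # (B, n_params)
        i = 0
        def take(n):                                       # slice the flat vector
            nonlocal i
            chunk = theta[:, i:i + n]
            i += n
            return chunk
        w1 = take(self.in_dim * self.hidden).view(-1, self.hidden, self.in_dim)
        b1 = take(self.hidden)
        w2 = take(self.hidden * self.state_dim).view(-1, self.state_dim, self.hidden)
        b2 = take(self.state_dim)
        # apply the generated dynamics model: s' = f_theta(s, a)
        x = torch.cat([state, action], dim=-1).unsqueeze(-1)
        h = torch.relu(torch.bmm(w1, x).squeeze(-1) + b1)
        return torch.bmm(w2, h.unsqueeze(-1)).squeeze(-1) + b2

model = HyperDynamicsSketch(hist_dim=16, state_dim=6, act_dim=2)
hist, s, a = torch.randn(4, 16), torch.randn(4, 6), torch.randn(4, 2)
pred_next = model(hist, s, a)                              # (4, 6) predicted next state
```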

  4. Book ; Online: When More Data Hurts: A Troubling Quirk in Developing Broad-Coverage Natural Language Understanding Systems

    Stengel-Eskin, Elias / Platanios, Emmanouil Antonios / Pauls, Adam / Thomson, Sam / Fang, Hao / Van Durme, Benjamin / Eisner, Jason / Su, Yu

    2022

    Abstract: In natural language understanding (NLU) production systems, users' evolving needs necessitate the addition of new features over time, indexed by new symbols added to the meaning representation space. This requires additional training data and results in ever-growing datasets. We present the first systematic investigation into this incremental symbol learning scenario. Our analyses reveal a troubling quirk in building (broad-coverage) NLU systems: as the training dataset grows, more data is needed to learn new symbols, forming a vicious cycle. We show that this trend holds for multiple mainstream models on two common NLU tasks: intent recognition and semantic parsing. Rejecting class imbalance as the sole culprit, we reveal that the trend is closely associated with an effect we call source signal dilution, where strong lexical cues for the new symbol become diluted as the training dataset grows. Selectively dropping training examples to prevent dilution often reverses the trend, showing the over-reliance of mainstream neural NLU models on simple lexical cues and their lack of contextual understanding.

    Comment: 15 pages
    Keywords: Computer Science - Computation and Language
    Publication date: 2022-05-24
    Country of publication: US
    Document type: Book ; Online
    Data source: BASE - Bielefeld Academic Search Engine (Life Sciences Selection)
    (A toy illustration of cue dilution is shown below.)

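The "source signal dilution" effect lends itself to a toy plain-Python illustration. The utterances, intent names, and helpers (cue_precision, drop_diluting) below are all hypothetical, not the paper's code or data; the sketch only shows how a cue's predictive strength falls as unrelated examples mentioning it accumulate, and how selective dropping restores it.

```python
def cue_precision(examples, cue, symbol):
    """P(symbol | cue appears): how strongly a lexical cue predicts a symbol.
    `examples` is a list of (utterance, symbol) pairs."""
    with_cue = [s for u, s in examples if cue in u.split()]
    return sum(s == symbol for s in with_cue) / len(with_cue) if with_cue else 0.0

def drop_diluting(examples, cue, symbol):
    """Selectively drop examples that mention the cue but carry another
    symbol, keeping the cue a strong signal for the new symbol."""
    return [(u, s) for u, s in examples if cue not in u.split() or s == symbol]

# as the dataset grows, unrelated mentions of "alarm" dilute the cue
data = [("set an alarm for six", "CreateAlarm"),
        ("delete my alarm", "DeleteAlarm"),
        ("is the fire alarm loud enough", "QA")]
print(cue_precision(data, "alarm", "CreateAlarm"))          # 0.33: diluted cue
print(cue_precision(drop_diluting(data, "alarm", "CreateAlarm"),
                    "alarm", "CreateAlarm"))                # 1.0 after filtering
```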
