LIVIVO - The Search Portal for Life Sciences


Search results

Results 1 - 10 of 102 in total

  1. Article ; Online: Confucius, cyberpunk and Mr. Science: comparing AI ethics principles between China and the EU.

    Fung, Pascale / Etienne, Hubert

    AI and Ethics

    2022  Volume 3, Issue 2, Page(s) 505–511

    Abstract We propose a comparative analysis of the AI ethical guidelines endorsed by China (from the Chinese National New Generation Artificial Intelligence Governance Professional Committee) and by the EU (from the European High-Level Expert Group on AI). We show that, behind an apparent likeness in the concepts mobilized, the two documents largely differ in their normative approaches, which we explain by distinct ambitions resulting from different philosophical traditions, cultural heritages and historical contexts. In highlighting such differences, we show that it is erroneous to believe that a similarity in concepts necessarily translates into a similarity in ethics, as even the same words may have different meanings from one country to another, as exemplified by the word "privacy". It would, therefore, be erroneous to believe that the world has adopted a common set of ethical principles in only three years. China and the EU, however, share a common scientific method, inherited in the former from the "Chinese Enlightenment", which could contribute to better collaboration and understanding in the building of technical standards for the implementation of such ethical principles.
    Language English
    Publishing date 2022-06-20
    Publishing country Switzerland
    Document type Journal Article
    ISSN (online) 2730-5961
    DOI 10.1007/s43681-022-00180-6
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Book ; Online: Improving Fairness and Robustness in End-to-End Speech Recognition through unsupervised clustering

    Veliche, Irina-Elena / Fung, Pascale

    2023  

    Abstract The challenge of fairness arises when Automatic Speech Recognition (ASR) systems do not perform equally well for all sub-groups of the population. In the past few years there have been many improvements in overall speech recognition quality, but without any particular focus on advancing equality and equity for all user groups for whom systems do not perform well. ASR fairness is therefore also a robustness issue. Meanwhile, data privacy also takes priority in production systems. In this paper, we present a privacy-preserving approach to improve fairness and robustness of end-to-end ASR without using metadata, zip codes, or even speaker or utterance embeddings directly in training. We extract utterance-level embeddings using a speaker ID model trained on a public dataset, which we then use in an unsupervised fashion to create acoustic clusters. We use cluster IDs instead of speaker utterance embeddings as extra features during model training, which shows improvements for all demographic groups and in particular for different accents.
    Keywords Computer Science - Sound ; Computer Science - Computation and Language ; Computer Science - Machine Learning ; Electrical Engineering and Systems Science - Audio and Speech Processing
    Subject code 004
    Publishing date 2023-06-06
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
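
The clustering step described in the abstract above can be pictured with a minimal sketch, assuming utterance-level speaker-ID embeddings are already available as an array; the embedding dimensions, the number of clusters, and the use of scikit-learn k-means are illustrative assumptions, not details taken from the paper:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical utterance-level embeddings from a pre-trained speaker-ID model
# (1000 utterances, 256 dimensions); stand-in values for illustration only.
rng = np.random.default_rng(0)
utterance_embeddings = rng.normal(size=(1000, 256))

# Group utterances into acoustic clusters in an unsupervised way; the number
# of clusters is an assumption, not a value reported in the paper.
n_clusters = 32
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(utterance_embeddings)

# The cluster IDs (here one-hot encoded), rather than the raw speaker or
# utterance embeddings, would be attached to each utterance as an extra
# feature during ASR model training.
cluster_features = np.eye(n_clusters)[cluster_ids]
print(cluster_features.shape)  # (1000, 32)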

  3. Article: ROBOTS WITH HEART.

    Fung, Pascale

    Scientific American

    2015  Volume 313, Issue 5, Page(s) 60–63

    MeSH term(s) Behavior ; Communication ; Emotions ; Empathy ; Humans ; Robotics/instrumentation ; Speech Recognition Software
    Language English
    Publishing date 2015-12-03
    Publishing country United States
    Document type Journal Article
    ZDB-ID 246-x
    ISSN (online) 1946-7087
    ISSN (print) 0036-8733
    DOI 10.1038/scientificamerican1115-60
    Database MEDical Literature Analysis and Retrieval System OnLINE

  4. Book ; Online: Which One Are You Referring To? Multimodal Object Identification in Situated Dialogue

    Lovenia, Holy / Cahyawijaya, Samuel / Fung, Pascale

    2023  

    Abstract The demand for multimodal dialogue systems has been rising in various domains, emphasizing the importance of interpreting multimodal inputs from conversational and situational contexts. We explore three methods to tackle this problem and evaluate them on the largest situated dialogue dataset, SIMMC 2.1. Our best method, scene-dialogue alignment, improves the performance by ~20% F1-score compared to the SIMMC 2.1 baselines. We provide analysis and discussion regarding the limitations of our methods and potential directions for future work. Our code is publicly available at https://github.com/holylovenia/multimodal-object-identification.

    Comment: Accepted at EACL SRW 2023
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition
    Publishing date 2023-02-28
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  5. Book ; Online: Mitigating Framing Bias with Polarity Minimization Loss

    Bang, Yejin / Lee, Nayeon / Fung, Pascale

    2023  

    Abstract Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events. Media outlets with divergent political stances often use polarized language in their reporting of the same event. We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias. Specifically, our loss is designed to jointly optimize the model to map polarity ends bidirectionally. Our experimental results demonstrate that incorporating the proposed polarity minimization loss leads to a substantial reduction in framing bias when compared to a BART-based multi-document summarization model. Notably, we find that the effectiveness of this approach is most pronounced when the model is trained to minimize the polarity loss associated with informational framing bias (i.e., skewed selection of information to report).

    Comment: 11 pages, EMNLP2023
    Keywords Computer Science - Computation and Language
    Publishing date 2023-11-03
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
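
The polarity minimization idea sketched in the abstract above can be illustrated with a toy loss term; the polarity scores, their range, and the exact form of the penalty are assumptions for illustration, not the authors' BART-based training objective:

import torch

def polarity_minimization_loss(polarity_left: torch.Tensor,
                               polarity_right: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between polarity scores of summaries generated from
    articles at opposite political polarity ends (scores assumed in [-1, 1])."""
    return (polarity_left - polarity_right).abs().mean()

# Hypothetical polarity scores for a batch of three generated summaries,
# one score per summary conditioned on left- vs right-leaning inputs.
p_left = torch.tensor([0.4, -0.1, 0.7])
p_right = torch.tensor([-0.3, 0.2, -0.5])

# This auxiliary term would be added to the usual summarization loss.
print(float(polarity_minimization_loss(p_left, p_right)))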

  6. Book ; Online: Instruct-Align

    Cahyawijaya, Samuel / Lovenia, Holy / Yu, Tiezheng / Chung, Willy / Fung, Pascale

    Teaching Novel Languages to LLMs through Alignment-based Cross-Lingual Instruction

    2023  

    Abstract Instruction-tuned large language models (LLMs) have shown remarkable generalization capability over multiple tasks in multiple languages. Nevertheless, their generalization towards different languages varies, especially for underrepresented or even unseen languages. Prior work on adapting new languages to LLMs finds that naively adapting new languages to instruction-tuned LLMs results in catastrophic forgetting, which in turn causes the loss of multitasking ability in these LLMs. To tackle this, we propose the Instruct-Align (a.k.a. (IA)$^1$) framework, which enables instruction-tuned LLMs to learn cross-lingual alignment between unseen and previously learned languages via alignment-based cross-lingual instruction-tuning. Our preliminary result on BLOOMZ-560M shows that (IA)$^1$ is able to learn a new language effectively with only a limited amount of parallel data and at the same time prevent catastrophic forgetting by applying continual instruction-tuning through experience replay. Our work contributes to the progression of language adaptation methods for instruction-tuned LLMs and opens up the possibility of adapting underrepresented low-resource languages into existing instruction-tuned LLMs. Our code will be publicly released upon acceptance.
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence
    Publishing date 2023-05-22
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
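
The experience-replay element mentioned in the abstract above can be pictured with a minimal sketch, assuming batches for the new language are mixed with examples from previously learned languages; the datasets, batch size, and replay ratio are illustrative assumptions, not the paper's configuration:

import random

# Hypothetical data: instruction examples for the newly added language and a
# replay buffer of examples from previously learned languages.
new_language_data = [f"new-language example {i}" for i in range(100)]
replay_buffer = [f"previously-learned example {i}" for i in range(500)]

def make_batch(batch_size: int = 8, replay_ratio: float = 0.25) -> list[str]:
    """Sample a training batch in which replay_ratio of the examples are
    replayed from earlier languages to limit catastrophic forgetting."""
    n_replay = int(batch_size * replay_ratio)
    batch = random.sample(new_language_data, batch_size - n_replay)
    batch += random.sample(replay_buffer, n_replay)
    random.shuffle(batch)
    return batch

print(make_batch())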

  7. Book ; Online: Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models

    Lovenia, Holy / Dai, Wenliang / Cahyawijaya, Samuel / Ji, Ziwei / Fung, Pascale

    2023  

    Abstract Object hallucination poses a significant challenge in vision-language (VL) models, often leading to the generation of nonsensical or unfaithful responses with non-existent objects. However, the absence of a general measurement for evaluating object hallucination in VL models has hindered our understanding and ability to mitigate this issue. In this work, we present NOPE (Negative Object Presence Evaluation), a novel benchmark designed to assess object hallucination in VL models through visual question answering (VQA). We propose a cost-effective and scalable approach utilizing large language models to generate 29.5k synthetic negative pronoun (NegP) data of high quality for NOPE. We extensively investigate the performance of 10 state-of-the-art VL models in discerning the non-existence of objects in visual questions, where the ground truth answers are denoted as NegP (e.g., "none"). Additionally, we evaluate their standard performance on visual questions on 9 other VQA datasets. Through our experiments, we demonstrate that no VL model is immune to the vulnerability of object hallucination, as all models achieve accuracy below 10% on NegP. Furthermore, we uncover that lexically diverse visual questions, question types with large scopes, and scene-relevant objects heighten the risk of object hallucination in VL models.
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Computation and Language
    Subject code 004
    Publishing date 2023-10-08
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
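
The kind of evaluation the abstract above describes can be sketched as a simple accuracy check against negative-pronoun ground truths; the accepted negative answers and the normalization are assumptions for illustration, not the benchmark's exact protocol:

# Accepted "negative" answers; an assumption for illustration.
NEGATIVE_ANSWERS = {"none", "no one", "nobody", "nothing", "nowhere", "neither"}

def negp_accuracy(predictions: list[str]) -> float:
    """Fraction of model answers that correctly deny the queried object's
    presence, for questions whose ground truth is a negative pronoun."""
    if not predictions:
        return 0.0
    hits = sum(1 for p in predictions if p.strip().lower() in NEGATIVE_ANSWERS)
    return hits / len(predictions)

# Hypothetical answers from a VL model on NegP-style questions.
model_answers = ["a dog", "none", "two", "nothing", "red"]
print(negp_accuracy(model_answers))  # 0.4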

  8. Book ; Online: InstructTODS

    Chung, Willy / Cahyawijaya, Samuel / Wilie, Bryan / Lovenia, Holy / Fung, Pascale

    Large Language Models for End-to-End Task-Oriented Dialogue Systems

    2023  

    Abstract Large language models (LLMs) have been used for diverse tasks in natural language processing (NLP), yet remain under-explored for task-oriented dialogue systems (TODS), especially for end-to-end TODS. We present InstructTODS, a novel off-the-shelf framework for zero-shot end-to-end task-oriented dialogue systems that can adapt to diverse domains without fine-tuning. By leveraging LLMs, InstructTODS generates a proxy belief state that seamlessly translates user intentions into dynamic queries for efficient interaction with any KB. Our extensive experiments demonstrate that InstructTODS achieves comparable performance to fully fine-tuned TODS in guiding dialogues to successful completion without prior knowledge or task-specific data. Furthermore, a rigorous human evaluation of end-to-end TODS shows that InstructTODS produces dialogue responses that notably outperform both the gold responses and the state-of-the-art TODS in terms of helpfulness, informativeness, and humanness. Moreover, the effectiveness of LLMs in TODS is further supported by our comprehensive evaluations on TODS subtasks: dialogue state tracking, intent classification, and response generation. Code and implementations can be found at https://github.com/WillyHC22/InstructTODS/
    Keywords Computer Science - Computation and Language
    Subject code 004
    Publishing date 2023-10-13
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
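
The zero-shot pipeline outlined in the abstract above can be pictured roughly as: prompt an LLM for a proxy belief state, then use it to query a knowledge base. The call_llm stand-in, the prompt wording, and the toy KB below are hypothetical placeholders; the linked repository contains the actual implementation:

import json

def call_llm(prompt: str) -> str:
    """Stand-in for any instruction-following LLM call; returns a canned
    belief state here so the sketch runs without external services."""
    return json.dumps({"cuisine": "italian", "area": "centre"})

def proxy_belief_state(dialogue_history: str) -> dict:
    """Prompt the LLM to condense the dialogue into slot-value constraints."""
    prompt = ("Extract the user's constraints from the dialogue as JSON.\n"
              f"Dialogue:\n{dialogue_history}\nConstraints:")
    return json.loads(call_llm(prompt))

def query_kb(kb: list[dict], constraints: dict) -> list[dict]:
    """Turn the proxy belief state into a simple filter over a toy KB."""
    return [row for row in kb
            if all(row.get(slot) == value for slot, value in constraints.items())]

kb = [{"name": "Roma", "cuisine": "italian", "area": "centre"},
      {"name": "Sakura", "cuisine": "japanese", "area": "north"}]
history = "User: I'd like an Italian restaurant in the centre."
print(query_kb(kb, proxy_belief_state(history)))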

  9. Book ; Online: Plausible May Not Be Faithful

    Dai, Wenliang / Liu, Zihan / Ji, Ziwei / Su, Dan / Fung, Pascale

    Probing Object Hallucination in Vision-Language Pre-training

    2022  

    Abstract Large-scale vision-language pre-trained (VLP) models are prone to hallucinate non-existent visual objects when generating text based on visual information. In this paper, we systematically study the object hallucination problem from three aspects. First, we examine recent state-of-the-art VLP models, showing that they still hallucinate frequently, and models achieving better scores on standard metrics (e.g., CIDEr) could be more unfaithful. Second, we investigate how different types of image encoding in VLP influence hallucination, including region-based, grid-based, and patch-based. Surprisingly, we find that patch-based features perform the best and smaller patch resolution yields a non-trivial reduction in object hallucination. Third, we decouple various VLP objectives and demonstrate that token-level image-text alignment and controlled generation are crucial to reducing hallucination. Based on that, we propose a simple yet effective VLP loss named ObjMLM to further mitigate object hallucination. Results show that it reduces object hallucination by up to 17.4% when tested on two benchmarks (COCO Caption for in-domain and NoCaps for out-of-domain evaluation).

    Comment: Accepted at EACL 2023
    Keywords Computer Science - Computation and Language ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2022-10-14
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
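
One way to picture the object hallucination measurements this line of work relies on is a CHAIR-style hallucination rate: the share of objects mentioned in a generated caption that are absent from the image's annotated object set. This is a generic illustration, not the paper's exact metrics or its ObjMLM loss:

def hallucination_rate(mentioned_objects: list[str], annotated_objects: set[str]) -> float:
    """Fraction of objects mentioned in a generated caption that do not
    appear in the image's ground-truth object annotations."""
    if not mentioned_objects:
        return 0.0
    hallucinated = [obj for obj in mentioned_objects if obj not in annotated_objects]
    return len(hallucinated) / len(mentioned_objects)

# Hypothetical example: the caption mentions a "frisbee" that is not annotated.
mentioned = ["dog", "frisbee", "grass"]
annotated = {"dog", "grass", "tree"}
print(hallucination_rate(mentioned, annotated))  # 0.333...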

  10. Book ; Online: Contrastive Learning for Inference in Dialogue

    Ishii, Etsuko / Xu, Yan / Wilie, Bryan / Ji, Ziwei / Lovenia, Holy / Chung, Willy / Fung, Pascale

    2023  

    Abstract Inferences, especially those derived from inductive processes, are a crucial component of conversation, complementing the information implicitly or explicitly conveyed by a speaker. While recent large language models show remarkable advances in inference tasks, their performance in inductive reasoning, where not all information is present in the context, lags far behind that in deductive reasoning. In this paper, we analyze the behavior of the models based on the task difficulty defined by the semantic information gap, which distinguishes inductive and deductive reasoning (Johnson-Laird, 1988, 1993). Our analysis reveals that the disparity in information between dialogue contexts and desired inferences poses a significant challenge to the inductive inference process. To mitigate this information gap, we investigate a contrastive learning approach by feeding negative samples. Our experiments suggest that negative samples help models understand what is wrong and improve their generated inferences.

    Comment: Accepted to EMNLP2023
    Keywords Computer Science - Computation and Language
    Subject code 160
    Publishing date 2023-10-19
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
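
The contrastive learning approach mentioned in the abstract above can be illustrated with a minimal loss sketch in which the gold inference is scored against sampled negative inferences; the scoring model and the scores below are placeholders, not the authors' architecture:

import torch
import torch.nn.functional as F

def contrastive_loss(pos_score: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """InfoNCE-style loss: treat the gold inference as the correct class among
    [gold, negative_1, ..., negative_k] candidates scored for the same context."""
    logits = torch.cat([pos_score.unsqueeze(-1), neg_scores], dim=-1)
    target = torch.zeros(logits.size(0), dtype=torch.long)  # gold at index 0
    return F.cross_entropy(logits, target)

# Hypothetical scores for a batch of two dialogue contexts, each with the gold
# inference and three sampled negative inferences.
pos = torch.tensor([2.1, 1.5])
neg = torch.tensor([[0.3, -0.2, 1.0], [0.8, 0.1, -0.5]])
print(float(contrastive_loss(pos, neg)))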
