LIVIVO - The Search Portal for Life Sciences


Search results

Results 1-2 of 2


  1. Article ; Online: Taiyi: a bilingual fine-tuned large language model for diverse biomedical tasks.

    Luo, Ling / Ning, Jinzhong / Zhao, Yingwen / Wang, Zhijun / Ding, Zeyuan / Chen, Peng / Fu, Weiru / Han, Qinyu / Xu, Guangtao / Qiu, Yunzhi / Pan, Dinghao / Li, Jiru / Li, Hao / Feng, Wenduo / Tu, Senbo / Liu, Yuqi / Yang, Zhihao / Wang, Jian / Sun, Yuanyuan /
    Lin, Hongfei

    Journal of the American Medical Informatics Association : JAMIA

    2024  

    Abstract
    Objective: Most existing fine-tuned biomedical large language models (LLMs) focus on enhancing performance in monolingual biomedical question answering and conversation tasks. To investigate the effectiveness of the fine-tuned LLMs on diverse biomedical natural language processing (NLP) tasks in different languages, we present Taiyi, a bilingual fine-tuned LLM for diverse biomedical NLP tasks.
    Materials and methods: We first curated a comprehensive collection of 140 existing biomedical text mining datasets (102 English and 38 Chinese datasets) across over 10 task types. Subsequently, these corpora were converted into the instruction data used to fine-tune the general LLM. During the supervised fine-tuning phase, a two-stage strategy is proposed to optimize the model performance across various tasks.
    Results: Experimental results on 13 test sets, which include named entity recognition, relation extraction, text classification, and question answering tasks, demonstrate that Taiyi achieves superior performance compared to general LLMs. The case study involving additional biomedical NLP tasks further shows Taiyi's considerable potential for bilingual biomedical multitasking.
    Conclusion: Leveraging rich, high-quality biomedical corpora and developing effective fine-tuning strategies can significantly improve the performance of LLMs within the biomedical domain. Taiyi shows bilingual multitasking capability through supervised fine-tuning. However, tasks such as information extraction, which are not generative in nature, remain challenging for LLM-based generative approaches and still underperform conventional discriminative approaches using smaller language models.
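The Materials and Methods section describes converting existing annotated corpora into instruction data for supervised fine-tuning. A minimal sketch of that conversion step is below; the field names, prompt wording, and example annotation are illustrative assumptions, not taken from the Taiyi paper or its released data format.

```python
# Hypothetical sketch: wrap one annotated task example as an
# (instruction, input, output) record of the kind used for
# instruction-style supervised fine-tuning. All names are illustrative.

def to_instruction_record(task_type: str, text: str, label: str) -> dict:
    """Convert one labeled example into an instruction-tuning record."""
    prompts = {
        "ner": "Extract all biomedical entities from the following text.",
        "classification": "Classify the following biomedical text.",
        "qa": "Answer the following biomedical question.",
    }
    return {
        "instruction": prompts[task_type],  # task-specific prompt
        "input": text,                      # original corpus text
        "output": label,                    # gold annotation as a string
    }

record = to_instruction_record(
    "ner",
    "Aspirin reduces the risk of myocardial infarction.",
    "Chemical: Aspirin | Disease: myocardial infarction",
)
print(record["instruction"])
```

Repeating this over all 140 datasets would yield a single pool of uniform records, which the described two-stage strategy could then fine-tune on; the exact staging and prompt templates are detailed in the paper itself.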
    Language English
    Publishing date 2024-02-29
    Publishing country England
    Document type Journal Article
    ZDB-ID 1205156-1
    ISSN (online) 1527-974X
    ISSN (print) 1067-5027
    DOI 10.1093/jamia/ocae037
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  2. Book ; Online: Taiyi

    Luo, Ling / Ning, Jinzhong / Zhao, Yingwen / Wang, Zhijun / Ding, Zeyuan / Chen, Peng / Fu, Weiru / Han, Qinyu / Xu, Guangtao / Qiu, Yunzhi / Pan, Dinghao / Li, Jiru / Li, Hao / Feng, Wenduo / Tu, Senbo / Liu, Yuqi / Yang, Zhihao / Wang, Jian / Sun, Yuanyuan /
    Lin, Hongfei

    A Bilingual Fine-Tuned Large Language Model for Diverse Biomedical Tasks

    2023  

    Abstract
    Objective: Most existing fine-tuned biomedical large language models (LLMs) focus on enhancing performance in monolingual biomedical question answering and conversation tasks. To investigate the effectiveness of the fine-tuned LLMs on diverse biomedical NLP tasks in different languages, we present Taiyi, a bilingual fine-tuned LLM for diverse biomedical tasks.
    Materials and methods: We first curated a comprehensive collection of 140 existing biomedical text mining datasets (102 English and 38 Chinese datasets) across over 10 task types. Subsequently, a two-stage strategy is proposed for supervised fine-tuning to optimize the model performance across varied tasks.
    Results: Experimental results on 13 test sets covering named entity recognition, relation extraction, text classification, and question answering tasks demonstrate that Taiyi achieves superior performance compared to general LLMs. The case study involving additional biomedical NLP tasks further shows Taiyi's considerable potential for bilingual biomedical multitasking.
    Conclusion: Leveraging rich, high-quality biomedical corpora and developing effective fine-tuning strategies can significantly improve the performance of LLMs within the biomedical domain. Taiyi shows bilingual multitasking capability through supervised fine-tuning. However, tasks such as information extraction, which are not generative in nature, remain challenging for LLM-based generative approaches and still underperform conventional discriminative approaches using smaller language models.
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence
    Subject code 400
    Publishing date 2023-11-20
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

