LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-10 of 5914


  1. Book ; Online: Audio ALBERT

    Chi, Po-Han / Chung, Pei-Hung / Wu, Tsung-Han / Hsieh, Chun-Cheng / Chen, Yen-Hao / Li, Shang-Wen / Lee, Hung-yi

    A Lite BERT for Self-supervised Learning of Audio Representation

    2020  

    Abstract: ... training in order to achieve better performance. In this paper, we propose Audio ALBERT, a lite version ... speaker identification, and phoneme classification. We show that Audio ALBERT is capable of achieving ...

Abstract For self-supervised speech processing, it is crucial to use pretrained models as speech representation extractors. In recent works, increasing the size of the model has been utilized in acoustic model training in order to achieve better performance. In this paper, we propose Audio ALBERT, a lite version of the self-supervised speech representation model. We use the representations in two downstream tasks, speaker identification and phoneme classification. We show that Audio ALBERT is capable of achieving competitive performance with those huge models in the downstream tasks while utilizing 91% fewer parameters. Moreover, we use some simple probing models to measure how much information about the speaker and phoneme is encoded in the latent representations. In the probing experiments, we find that the latent representations encode richer information about both phoneme and speaker than the last layer does.
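
ALBERT-style models get most of their parameter savings from sharing one set of encoder weights across all transformer layers. A rough sketch of that accounting, using BERT-base-like dimensions as assumed round numbers (not the Audio ALBERT configuration):

```python
# Why cross-layer parameter sharing makes a transformer "lite": an
# unshared encoder stores one weight set per layer, while an
# ALBERT-style encoder reuses a single set for every layer.
# Dimensions below are BERT-base-like stand-ins, not the paper's.

def encoder_params(hidden: int, layers: int, ffn: int, shared: bool) -> int:
    # Per-layer weights: Q, K, V and output projections (4 * hidden^2)
    # plus two feed-forward matrices (hidden * ffn each); biases,
    # embeddings, and layer norms are ignored for simplicity.
    per_layer = 4 * hidden * hidden + 2 * hidden * ffn
    return per_layer if shared else per_layer * layers

unshared = encoder_params(hidden=768, layers=12, ffn=3072, shared=False)
tied = encoder_params(hidden=768, layers=12, ffn=3072, shared=True)
saving = 1 - tied / unshared
print(f"unshared: {unshared:,}  shared: {tied:,}  saving: {saving:.0%}")
```

With 12 layers the encoder weights shrink by roughly 11/12; the exact 91% quoted in the abstract depends on the particular models being compared.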

    Comment: Accepted by IEEE Spoken Language Technology Workshop 2021
    Keywords Electrical Engineering and Systems Science - Audio and Speech Processing ; Computer Science - Computation and Language ; Computer Science - Sound
    Subject code 006
    Publishing date 2020-05-18
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  2. Article: Chinese Clinical Named Entity Recognition with ALBERT and MHA Mechanism.

    Li, Dongmei / Long, Jiao / Qu, Jintao / Zhang, Xiaoping

    Evidence-based complementary and alternative medicine : eCAM

    2022  Volume 2022, Page(s) 2056039

    Abstract: ... on ALBERT and a multihead attention (MHA) mechanism to solve this problem. Structurally, the model first ... obtains character-level word embeddings through the ALBERT pretraining language model, then inputs ...

Abstract Traditional clinical named entity recognition methods fail to balance the effectiveness of feature extraction of unstructured text and the complexity of neural network models. We propose a model based on ALBERT and a multihead attention (MHA) mechanism to solve this problem. Structurally, the model first obtains character-level word embeddings through the ALBERT pretraining language model, then inputs the word embeddings into the iterated dilated convolutional neural network model to quickly extract global semantic information, and decodes the predicted labels through conditional random fields to obtain the optimal label sequence. Also, we apply the MHA mechanism to capture intercharacter dependencies from multiple aspects. Furthermore, we use the RAdam optimizer to boost the convergence speed and improve the generalization ability of our model. Experimental results show that our model achieves an F1 score of 85.63% on the CCKS-2019 dataset, an increase of 4.36% compared to the baseline model.
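
The multihead attention step described in the abstract can be sketched in a few lines: each head projects the character embeddings to queries, keys, and values, applies scaled dot-product attention, and the per-head contexts are concatenated. This is a generic MHA sketch with random stand-in weights, not the paper's implementation:

```python
import numpy as np

# Generic multi-head attention over character embeddings; shapes and
# the random projection weights are illustrative stand-ins, not the
# configuration used in the paper.

def multi_head_attention(x, num_heads, rng):
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    heads = []
    for _ in range(num_heads):
        # Each head has its own learned projections (random stand-ins here).
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(d_head)              # scaled dot-product
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)              # row-wise softmax
        heads.append(w @ v)                             # per-head context
    return np.concatenate(heads, axis=-1)               # concatenate heads

rng = np.random.default_rng(0)
chars = rng.standard_normal((6, 8))   # 6 characters, 8-dim embeddings
ctx = multi_head_attention(chars, num_heads=2, rng=rng)
print(ctx.shape)  # (6, 8)
```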
    Language English
    Publishing date 2022-05-23
    Publishing country United States
    Document type Journal Article
    ZDB-ID 2171158-6
    ISSN 1741-4288 ; 1741-427X
    ISSN (online) 1741-4288
    ISSN 1741-427X
    DOI 10.1155/2022/2056039
    Database MEDical Literature Analysis and Retrieval System OnLINE


  3. Book ; Online: Ensemble ALBERT on SQuAD 2.0

    Li, Shilun / Li, Renee / Peng, Veronica

    2021  

    Abstract: ... from Transformers (BERT) and A Lite BERT (ALBERT) have attracted lots of attention due to their great performance ... in a wide range of NLP tasks. In our Paper, we utilized the fine-tuned ALBERT models and implemented ... on top of ALBERT-base model, and two other models based on ALBERT-xlarge and ALBERT-xxlarge. We compared ...

Abstract Machine question answering is an essential yet challenging task in natural language processing. Recently, Pre-trained Contextual Embeddings (PCE) models like Bidirectional Encoder Representations from Transformers (BERT) and A Lite BERT (ALBERT) have attracted much attention due to their great performance in a wide range of NLP tasks. In our paper, we utilized the fine-tuned ALBERT models and implemented combinations of additional layers (e.g. an attention layer, an RNN layer) on top of them to improve model performance on the Stanford Question Answering Dataset (SQuAD 2.0). We implemented four different models with different layers on top of the ALBERT-base model, and two other models based on ALBERT-xlarge and ALBERT-xxlarge. We compared their performance in detail to our baseline model, ALBERT-base-v2 + ALBERT-SQuAD-out. Our best-performing individual model is ALBERT-xxlarge + ALBERT-SQuAD-out, which achieved an F1 score of 88.435 on the dev set. Furthermore, we implemented three different ensemble algorithms to boost overall performance. By passing several best-performing models' results into our weighted voting ensemble algorithm, our final result ranks first on the Stanford CS224N Test PCE SQuAD Leaderboard with F1 = 90.123.
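
The weighted-voting ensemble mentioned in the abstract can be sketched as follows: each model votes for its predicted answer span, votes are scaled by a per-model weight (e.g. dev-set F1), and the span with the largest combined weight wins. Model names, spans, and weights below are invented for illustration:

```python
from collections import defaultdict

# Weighted voting over answer spans: identical spans pool their
# models' weights, and the highest-scoring span is the ensemble answer.

def weighted_vote(predictions, weights):
    """predictions: {model_name: answer_span}; weights: {model_name: float}."""
    scores = defaultdict(float)
    for model, span in predictions.items():
        scores[span] += weights[model]   # each model votes with its weight
    return max(scores, key=scores.get)   # span with the largest total wins

# Hypothetical per-model predictions for one question.
preds = {
    "albert-xxlarge+out": "in 1921",
    "albert-xlarge+rnn":  "in 1921",
    "albert-base+attn":   "1921",
}
# Weight each model by a (hypothetical) dev-set F1 score.
wts = {"albert-xxlarge+out": 88.4, "albert-xlarge+rnn": 87.0, "albert-base+attn": 84.1}
print(weighted_vote(preds, wts))  # in 1921
```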
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence ; I.2.7
    Subject code 006
    Publishing date 2021-10-18
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  4. Article ; Online: Chinese Clinical Named Entity Recognition with ALBERT and MHA Mechanism

    Dongmei Li / Jiao Long / Jintao Qu / Xiaoping Zhang

Evidence-Based Complementary and Alternative Medicine

    2022  Volume 2022

    Abstract: ... on ALBERT and a multihead attention (MHA) mechanism to solve this problem. Structurally, the model first ... obtains character-level word embeddings through the ALBERT pretraining language model, then inputs ...

    Abstract Traditional clinical named entity recognition methods fail to balance the effectiveness of feature extraction of unstructured text and the complexity of neural network models. We propose a model based on ALBERT and a multihead attention (MHA) mechanism to solve this problem. Structurally, the model first obtains character-level word embeddings through the ALBERT pretraining language model, then inputs the word embeddings into the iterated dilated convolutional neural network model to quickly extract global semantic information, and decodes the predicted labels through conditional random fields to obtain the optimal label sequence. Also, we apply the MHA mechanism to capture intercharacter dependencies from multiple aspects. Furthermore, we use the RAdam optimizer to boost the convergence speed and improve the generalization ability of our model. Experimental results show that our model achieves an F1 score of 85.63% on the CCKS-2019 dataset—an increase of 4.36% compared to the baseline model.
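
The final decoding step described in the abstract, choosing the optimal label sequence with conditional random fields, amounts to Viterbi search at inference time: combine per-character label scores with label-transition scores and take the globally best path. A minimal sketch with invented scores (not trained values) over a BIO-style tag set:

```python
# Viterbi decoding for a linear-chain CRF: dynamic programming over
# label sequences. Emission and transition scores are illustrative.

def viterbi(emissions, transitions, labels):
    # emissions: list of {label: score} per position
    # transitions: {(prev_label, label): score}
    best = {lab: emissions[0][lab] for lab in labels}
    back = []
    for emit in emissions[1:]:
        prev, best, ptr = best, {}, {}
        for lab in labels:
            cand = {p: prev[p] + transitions[(p, lab)] for p in labels}
            ptr[lab] = max(cand, key=cand.get)          # best predecessor
            best[lab] = cand[ptr[lab]] + emit[lab]
        back.append(ptr)
    path = [max(best, key=best.get)]                    # best final label
    for ptr in reversed(back):                          # follow backpointers
        path.append(ptr[path[-1]])
    return list(reversed(path))

labels = ["O", "B-SYM", "I-SYM"]
trans = {(a, b): 0.0 for a in labels for b in labels}
trans[("O", "I-SYM")] = -10.0   # penalize I- without a preceding B-/I-
emis = [{"O": 0.2, "B-SYM": 1.0, "I-SYM": 0.1},
        {"O": 0.3, "B-SYM": 0.1, "I-SYM": 0.9},
        {"O": 1.0, "B-SYM": 0.1, "I-SYM": 0.2}]
print(viterbi(emis, trans, labels))  # ['B-SYM', 'I-SYM', 'O']
```

The transition penalty is what lets the CRF rule out label sequences that per-character scores alone would permit.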
    Keywords Other systems of medicine ; RZ201-999
    Subject code 006
    Language English
    Publishing date 2022-01-01T00:00:00Z
    Publisher Hindawi Limited
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  5. Article: ALBERT-Based Self-Ensemble Model With Semisupervised Learning and Data Augmentation for Clinical Semantic Textual Similarity Calculation: Algorithm Validation Study.

    Li, Junyi / Zhang, Xuejie / Zhou, Xiaobing

    JMIR medical informatics

    2021  Volume 9, Issue 1, Page(s) e23086

Abstract: ... information. Methods: This paper combines a text data augmentation method and a self-ensemble ALBERT model ...

    Abstract Background: In recent years, with increases in the amount of information available and the importance of information screening, increased attention has been paid to the calculation of textual semantic similarity. In the field of medicine, electronic medical records and medical research documents have become important data resources for clinical research. Medical textual semantic similarity calculation has become an urgent problem to be solved.
Objective: This research aims to solve 2 problems: (1) when the size of medical data sets is small, leading to insufficient learning and understanding by the models, and (2) when information is lost in the process of long-distance propagation, causing the models to be unable to grasp key information.
    Methods: This paper combines a text data augmentation method and a self-ensemble ALBERT model under semisupervised learning to perform clinical textual semantic similarity calculations.
    Results: Compared with the methods in the 2019 National Natural Language Processing Clinical Challenges Open Health Natural Language Processing shared task Track on Clinical Semantic Textual Similarity, our method surpasses the best result by 2 percentage points and achieves a Pearson correlation coefficient of 0.92.
Conclusions: When the size of a medical data set is small, data augmentation can increase the size of the data set, and improved semisupervised learning can boost the learning efficiency of the model. Additionally, self-ensemble methods improve model performance. Our method achieved excellent performance and has great potential to improve related medical problems.
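
Two ingredients named above can be sketched compactly: a self-ensemble that averages the similarity scores of several training snapshots of one model, and the Pearson correlation coefficient used as the evaluation metric. All scores below are invented for illustration:

```python
import math

# Pearson correlation between predicted and reference similarity scores,
# plus a self-ensemble that averages predictions from several snapshots
# of the same model. Scores are made-up illustrative values.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Predicted similarity scores from three snapshots of one model.
snapshots = [
    [4.1, 2.0, 0.6, 3.2],
    [3.8, 2.4, 1.0, 3.5],
    [4.3, 1.9, 0.8, 3.0],
]
ensemble = [sum(col) / len(col) for col in zip(*snapshots)]  # average per item
gold = [4.0, 2.5, 0.5, 3.5]   # reference similarity ratings
print(round(pearson(ensemble, gold), 3))
```

Averaging snapshots smooths out per-checkpoint noise, which is the intuition behind self-ensembling.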
    Language English
    Publishing date 2021-01-22
    Publishing country Canada
    Document type Journal Article
    ZDB-ID 2798261-0
    ISSN 2291-9694
    ISSN 2291-9694
    DOI 10.2196/23086
    Database MEDical Literature Analysis and Retrieval System OnLINE


  6. Article ; Online: ALBERT-Based Self-Ensemble Model With Semisupervised Learning and Data Augmentation for Clinical Semantic Textual Similarity Calculation

    Li, Junyi / Zhang, Xuejie / Zhou, Xiaobing

JMIR Medical Informatics, Vol 9, Iss 1, p e23086

Algorithm Validation Study

2021

Abstract: ... information. Methods: This paper combines a text data augmentation method and a self-ensemble ALBERT model under ...

Abstract Background: In recent years, with increases in the amount of information available and the importance of information screening, increased attention has been paid to the calculation of textual semantic similarity. In the field of medicine, electronic medical records and medical research documents have become important data resources for clinical research. Medical textual semantic similarity calculation has become an urgent problem to be solved. Objective: This research aims to solve 2 problems: (1) when the size of medical data sets is small, leading to insufficient learning and understanding by the models, and (2) when information is lost in the process of long-distance propagation, causing the models to be unable to grasp key information. Methods: This paper combines a text data augmentation method and a self-ensemble ALBERT model under semisupervised learning to perform clinical textual semantic similarity calculations. Results: Compared with the methods in the 2019 National Natural Language Processing Clinical Challenges Open Health Natural Language Processing shared task Track on Clinical Semantic Textual Similarity, our method surpasses the best result by 2 percentage points and achieves a Pearson correlation coefficient of 0.92. Conclusions: When the size of a medical data set is small, data augmentation can increase the size of the data set, and improved semisupervised learning can boost the learning efficiency of the model. Additionally, self-ensemble methods improve model performance. Our method achieved excellent performance and has great potential to improve related medical problems.
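
Text data augmentation for a small sentence-pair dataset can be illustrated with a simple token-level perturbation such as a random swap. This is a generic sketch of the idea of enlarging a small data set, not the specific augmentation procedure used in the paper:

```python
import random

# Enlarging a small sentence-pair dataset by randomly swapping two
# tokens in each sentence. A generic illustration of text augmentation;
# the example sentences are invented.

def random_swap(tokens, rng):
    if len(tokens) < 2:
        return list(tokens)
    i, j = rng.sample(range(len(tokens)), 2)   # pick two distinct positions
    out = list(tokens)
    out[i], out[j] = out[j], out[i]
    return out

rng = random.Random(42)
pair = ("patient reports mild chest pain", "patient has slight chest pain")
augmented = [(" ".join(random_swap(pair[0].split(), rng)),
              " ".join(random_swap(pair[1].split(), rng))) for _ in range(3)]
for a, b in augmented:
    print(a, "|", b)
```

The swap preserves the token multiset, so the augmented pair keeps roughly the same meaning while presenting the model with a new surface form.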
    Keywords Computer applications to medicine. Medical informatics ; R858-859.7
    Subject code 006
    Language English
    Publishing date 2021-01-01T00:00:00Z
    Publisher JMIR Publications
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  7. Article ; Online: The corpus callosum of Albert Einstein's brain: another clue to his high intelligence?

    Men, Weiwei / Falk, Dean / Sun, Tao / Chen, Weibo / Li, Jianqi / Yin, Dazhi / Zang, Lili / Fan, Mingxia

    Brain : a journal of neurology

    2013  Volume 137, Issue Pt 4, Page(s) e268

    MeSH term(s) Corpus Callosum/anatomy & histology ; Famous Persons ; Humans ; Intelligence
    Language English
    Publishing date 2013-09-24
    Publishing country England
    Document type Letter ; Research Support, N.I.H., Extramural ; Research Support, Non-U.S. Gov't
    ZDB-ID 80072-7
    ISSN 1460-2156 ; 0006-8950
    ISSN (online) 1460-2156
    ISSN 0006-8950
    DOI 10.1093/brain/awt252
    Database MEDical Literature Analysis and Retrieval System OnLINE


  8. Article ; Online: The 2015 Albert Lasker Basic Medical Research Award: An exhilarating journey to the DNA damage checkpoint.

    Zou, Lee / Li, Lei

    Science China. Life sciences

    2016  Volume 59, Issue 1, Page(s) 103–105

    MeSH term(s) Awards and Prizes ; Cell Cycle Checkpoints ; DNA Damage ; Escherichia coli/cytology ; Escherichia coli/genetics ; Escherichia coli/metabolism ; History, 20th Century ; History, 21st Century ; Humans ; United States
    Language English
    Publishing date 2016-01
    Publishing country China
    Document type Biography ; Historical Article ; News ; Portraits
    ISSN 1869-1889
    ISSN (online) 1869-1889
    DOI 10.1007/s11427-015-4984-3
    Database MEDical Literature Analysis and Retrieval System OnLINE


  9. Article ; Online: The 2016 Albert Lasker Basic Medical Research Award: Oxygen sensing-a mysterious process essential for survival.

    Li, Ziru / Zhang, Weizhen

    Science China. Life sciences

    2016  Volume 59, Issue 11, Page(s) 1195–1197

    MeSH term(s) Adaptation, Physiological ; Awards and Prizes ; Biomedical Research ; Erythropoietin/metabolism ; Humans ; Hypoxia ; Hypoxia-Inducible Factor 1, alpha Subunit/metabolism ; Models, Biological ; Oxygen/metabolism ; Signal Transduction ; Von Hippel-Lindau Tumor Suppressor Protein/metabolism
    Chemical Substances HIF1A protein, human ; Hypoxia-Inducible Factor 1, alpha Subunit ; Erythropoietin (11096-26-7) ; Von Hippel-Lindau Tumor Suppressor Protein (EC 2.3.2.27) ; VHL protein, human (EC 6.3.2.-) ; Oxygen (S88TT14065)
    Language English
    Publishing date 2016-11
    Publishing country China
    Document type News
    ISSN 1869-1889
    ISSN (online) 1869-1889
    DOI 10.1007/s11427-016-0314-3
    Database MEDical Literature Analysis and Retrieval System OnLINE


  10. Article ; Online: Air-water CO2 outgassing in the Lower Lakes (Alexandrina and Albert, Australia) following a millennium drought.

    Li, Siyue / Bush, Richard T / Ward, Nicholas J / Sullivan, Leigh A / Dong, Fangyong

    The Science of the total environment

    2016  Volume 542, Issue Pt A, Page(s) 453–468

    Abstract: ... Alexandrina and Lake Albert, South Australia), during drought (2007 to September-2010) and post-drought ... Lakes Alexandrina and Albert were a source of CO2 to the atmosphere during the drought period ...

Abstract Lakes are an important source and sink of atmospheric CO2, and thus are a vital component of the global carbon cycle. However, with scarce data on potentially important subtropical and tropical areas for whole continents such as Australia, the magnitude of large-scale lake CO2 emissions is unclear. This study presents spatiotemporal changes of dissolved inorganic carbon and water-to-air interface CO2 flux in two of Australia's largest connected, yet geomorphically different, freshwater lakes (Lake Alexandrina and Lake Albert, South Australia) during drought (2007 to September 2010) and post-drought (October 2010 to 2013). Lake levels in the extreme drought were on average approximately 1 m lower than the long-term average (0.71 m AHD). Drought was associated with an increase in the concentrations of dissolved inorganic species, organic carbon, nitrogen, Chl-a, and major ions, as well as water acidification as a consequence of acid sulfate soil (ASS) exposure, and hence had profound effects on lake pCO2 concentrations. Lakes Alexandrina and Albert were a source of CO2 to the atmosphere during the drought period, with efflux ranging from 0.3 to 7.0 mmol/m²/d. The lake air-water CO2 flux was negative post-drought, ranging between -16.4 and 0.9 mmol/m²/d. The average annual CO2 emission was estimated at 615.5×10⁶ mol CO2/y during the drought period. These calculated emission rates are in the lower range for lakes, despite the potential for drought conditions to shift the lakes from sink to net source of atmospheric CO2. These observations have significant implications in the context of the predicted increasing frequency and intensity of drought as a result of climate change. Further information on the spatial and temporal variability in CO2 flux from Australian lakes is urgently needed to revise the global carbon budget for lakes.
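
The scaling behind figures like those above is a back-of-the-envelope conversion from an areal flux (mmol CO2 per m² per day) to a whole-lake annual emission in moles. The flux and combined lake surface area used here are assumed round numbers for illustration, not values from the study:

```python
# Convert an areal CO2 flux to an annual whole-lake emission.
# flux [mmol/m^2/d] * area [m^2] * days, then mmol -> mol.

MMOL_PER_MOL = 1000

def annual_emission_mol(flux_mmol_m2_d: float, area_m2: float, days: int = 365) -> float:
    """Scale an areal flux (mmol CO2 / m^2 / day) to mol CO2 per year."""
    return flux_mmol_m2_d * area_m2 * days / MMOL_PER_MOL

area = 7.0e8   # assumed combined lake surface area, m^2 (illustrative)
flux = 2.0     # assumed mean drought-period efflux, mmol/m^2/d (illustrative)
print(f"{annual_emission_mol(flux, area):.3g} mol CO2/y")
```

With these assumed inputs the result is on the order of 10⁸ mol CO2/y, the same order of magnitude as the drought-period estimate quoted in the abstract.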
    Language English
    Publishing date 2016-01-15
    Publishing country Netherlands
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 121506-1
    ISSN 1879-1026 ; 0048-9697
    ISSN (online) 1879-1026
    ISSN 0048-9697
    DOI 10.1016/j.scitotenv.2015.10.070
    Database MEDical Literature Analysis and Retrieval System OnLINE

