LIVIVO - The Search Portal for Life Sciences


Search results

Results 1 - 10 of 82 total


  1. Article ; Online: The SMART Text2FHIR Pipeline.

    Miller, Timothy A / McMurry, Andrew J / Jones, James / Gottlieb, Daniel / Mandl, Kenneth D

    AMIA ... Annual Symposium proceedings. AMIA Symposium

    2024  Volume 2023, Page(s) 514–520

    MeSH term(s) Humans ; Electronic Health Records ; Delivery of Health Care ; Natural Language Processing ; APACHE
    Language English
    Publication date 2024-01-11
    Country of publication United States
    Document type Journal Article
    ISSN 1942-597X
    ISSN (online) 1942-597X
    Data source MEDical Literature Analysis and Retrieval System OnLINE


  2. Book ; Online: Simplified Neural Unsupervised Domain Adaptation

    Miller, Timothy A

    2019  

    Abstract: Unsupervised domain adaptation (UDA) is the task of modifying a statistical model trained on labeled data from a source domain to achieve better performance on data from a target domain, with access to only unlabeled data in the target domain. Existing ... ...

    Abstract Unsupervised domain adaptation (UDA) is the task of modifying a statistical model trained on labeled data from a source domain to achieve better performance on data from a target domain, with access to only unlabeled data in the target domain. Existing state-of-the-art UDA approaches use neural networks to learn representations that can predict the values of a subset of important features called "pivot features." In this work, we show that it is possible to improve on these methods by jointly training the representation learner with the task learner, and examine the importance of existing pivot selection methods.

    Comment: To be presented at NAACL 2019
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Publication date 2019-05-22
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences Selection)

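    The joint training idea in entry 2 can be illustrated with a small sketch. The following is a minimal PyTorch example, not the paper's implementation: a shared encoder is trained simultaneously on a labeled task objective and on predicting pivot features from unlabeled text. All dimensions, feature counts, and data below are synthetic placeholders.

        import torch
        import torch.nn as nn

        # Minimal sketch of joint training: a shared encoder feeds both a task head
        # (labeled source data) and a pivot-feature head (unlabeled source + target data).
        encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())
        task_head = nn.Linear(128, 2)        # e.g. a binary classification task
        pivot_head = nn.Linear(128, 50)      # predicts presence of 50 "pivot features"

        params = list(encoder.parameters()) + list(task_head.parameters()) + list(pivot_head.parameters())
        optimizer = torch.optim.Adam(params, lr=1e-3)

        task_loss_fn = nn.CrossEntropyLoss()
        pivot_loss_fn = nn.BCEWithLogitsLoss()

        # Synthetic batches: labeled source data plus unlabeled data from both domains.
        x_src, y_src = torch.randn(32, 300), torch.randint(0, 2, (32,))
        x_unlab, pivots = torch.randn(64, 300), torch.randint(0, 2, (64, 50)).float()

        for step in range(100):
            optimizer.zero_grad()
            # Task loss on labeled source examples.
            task_loss = task_loss_fn(task_head(encoder(x_src)), y_src)
            # Pivot-prediction loss on unlabeled examples from source and target domains.
            pivot_loss = pivot_loss_fn(pivot_head(encoder(x_unlab)), pivots)
            # Joint objective: both losses update the shared encoder.
            (task_loss + pivot_loss).backward()
            optimizer.step()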

  3. Article ; Online: Improving FDA postmarket adverse event reporting for medical devices.

    Wunnava, Susmitha / Miller, Timothy A / Bourgeois, Florence T

    BMJ evidence-based medicine

    2022  Volume 28, Issue 2, Page(s) 83–84

    Language English
    Publication date 2022-02-17
    Country of publication England
    Document type Journal Article
    ISSN 2515-4478
    ISSN (online) 2515-4478
    DOI 10.1136/bmjebm-2021-111870
    Data source MEDical Literature Analysis and Retrieval System OnLINE


  4. Article ; Online: Moving Biosurveillance Beyond Coded Data Using AI for Symptom Detection From Physician Notes: Retrospective Cohort Study.

    McMurry, Andrew J / Zipursky, Amy R / Geva, Alon / Olson, Karen L / Jones, James R / Ignatov, Vladimir / Miller, Timothy A / Mandl, Kenneth D

    Journal of medical Internet research

    2024  Volume 26, Page(s) e53367


    Abstract Background: Real-time surveillance of emerging infectious diseases necessitates a dynamically evolving, computable case definition, which frequently incorporates symptom-related criteria. For symptom detection, both population health monitoring platforms and research initiatives primarily depend on structured data extracted from electronic health records.
    Objective: This study sought to validate and test an artificial intelligence (AI)-based natural language processing (NLP) pipeline for detecting COVID-19 symptoms from physician notes in pediatric patients. We specifically study patients presenting to the emergency department (ED) who can be sentinel cases in an outbreak.
    Methods: Subjects in this retrospective cohort study are patients who are 21 years of age and younger, who presented to a pediatric ED at a large academic children's hospital between March 1, 2020, and May 31, 2022. The ED notes for all patients were processed with an NLP pipeline tuned to detect the mention of 11 COVID-19 symptoms based on Centers for Disease Control and Prevention (CDC) criteria. For a gold standard, 3 subject matter experts labeled 226 ED notes and had strong agreement (F
    Results: There were 85,678 ED encounters during the study period, including 4% (n=3420) with patients with COVID-19. NLP was more accurate at identifying encounters with patients that had any of the COVID-19 symptoms (F
    Conclusions: This study establishes the value of AI-based NLP as a highly effective tool for real-time COVID-19 symptom detection in pediatric patients, outperforming traditional ICD-10 methods. It also reveals the evolving nature of symptom prevalence across different virus variants, underscoring the need for dynamic, technology-driven approaches in infectious disease surveillance.
    MeSH term(s) United States ; Humans ; Child ; Artificial Intelligence ; Biosurveillance ; Retrospective Studies ; COVID-19/diagnosis ; COVID-19/epidemiology ; Physicians ; SARS-CoV-2
    Language English
    Publication date 2024-04-04
    Country of publication Canada
    Document type Journal Article
    ZDB-ID 2028830-X
    ISSN 1438-8871
    ISSN (online) 1438-8871
    DOI 10.2196/53367
    Data source MEDical Literature Analysis and Retrieval System OnLINE

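    As a rough illustration of the kind of comparison reported in entry 4 (NLP-detected symptoms versus coded data, scored against expert-labeled notes), the sketch below computes micro-averaged F1 over (encounter, symptom) pairs. The encounters, symptoms, and predictions are invented; only the metric arithmetic is general.

        # Hedged sketch (not the study's actual pipeline): comparing symptom flags from an
        # NLP pipeline and from ICD-10 codes against expert-labeled gold notes.
        def f1(pred: set, gold: set) -> float:
            """Micro F1 over (encounter, symptom) pairs."""
            tp = len(pred & gold)
            if tp == 0:
                return 0.0
            precision = tp / len(pred)
            recall = tp / len(gold)
            return 2 * precision * recall / (precision + recall)

        # Each element is an (encounter_id, symptom) pair; all values are synthetic.
        gold = {(1, "cough"), (1, "fever"), (2, "anosmia")}
        nlp_pred = {(1, "cough"), (1, "fever"), (2, "anosmia"), (3, "fever")}
        icd_pred = {(1, "fever")}

        print("NLP F1:", round(f1(nlp_pred, gold), 3))
        print("ICD F1:", round(f1(icd_pred, gold), 3))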

  5. Article: The SMART Text2FHIR Pipeline.

    Miller, Timothy A / McMurry, Andrew J / Jones, James / Gottlieb, Daniel / Mandl, Kenneth D

    medRxiv : the preprint server for health sciences

    2023  


    Abstract Objective: To implement an open source, free, and easily deployable high throughput natural language processing module to extract concepts from clinician notes and map them to Fast Healthcare Interoperability Resources (FHIR).
    Materials and methods: Using a popular open-source NLP tool (Apache cTAKES), we create FHIR resources that use modifier extensions to represent negation and NLP sourcing, and another extension to represent provenance of extracted concepts.
    Results: The SMART Text2FHIR Pipeline is an open-source tool, released through standard package managers, and publicly available container images that implement the mappings, enabling ready conversion of clinical text to FHIR.
    Discussion: With the increased data liquidity because of new interoperability regulations, NLP processes that can output FHIR can enable a common language for transporting structured and unstructured data. This framework can be valuable for critical public health or clinical research use cases.
    Conclusion: Future work should include mapping more categories of NLP-extracted information into FHIR resources and mappings from additional open-source NLP tools.
    Language English
    Publication date 2023-03-27
    Country of publication United States
    Document type Preprint
    DOI 10.1101/2023.03.21.23287499
    Data source MEDical Literature Analysis and Retrieval System OnLINE

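    The core mapping described in entry 5 (an NLP-extracted concept expressed as a FHIR resource, with a modifier extension for negation and an extension for NLP provenance) can be sketched as follows. The extension URLs, field names, and codes below are hypothetical placeholders, not the SMART Text2FHIR Pipeline's actual definitions.

        import json

        # Hedged illustration of the general idea (NLP concept -> FHIR resource with
        # extensions for negation and provenance). All URLs here are made-up placeholders.
        def concept_to_condition(patient_id, code, display, negated, source_doc, span):
            condition = {
                "resourceType": "Condition",
                "subject": {"reference": f"Patient/{patient_id}"},
                "code": {
                    "coding": [{"system": "http://snomed.info/sct", "code": code, "display": display}]
                },
                # Negation changes the meaning of the resource, so it is modeled as a
                # modifier extension rather than a plain extension.
                "modifierExtension": [],
                "extension": [{
                    "url": "http://example.org/fhir/StructureDefinition/nlp-provenance",  # hypothetical
                    "extension": [
                        {"url": "sourceDocument", "valueReference": {"reference": f"DocumentReference/{source_doc}"}},
                        {"url": "characterSpan", "valueString": span},
                    ],
                }],
            }
            if negated:
                condition["modifierExtension"].append({
                    "url": "http://example.org/fhir/StructureDefinition/nlp-negated",  # hypothetical
                    "valueBoolean": True,
                })
            return condition

        print(json.dumps(concept_to_condition("123", "49727002", "Cough", negated=True,
                                              source_doc="doc-42", span="104-109"), indent=2))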

  6. Article: Improving the Transferability of Clinical Note Section Classification Models with BERT and Large Language Model Ensembles.

    Zhou, Weipeng / Dligach, Dmitriy / Afshar, Majid / Gao, Yanjun / Miller, Timothy A

    Proceedings of the conference. Association for Computational Linguistics. Meeting

    2023  Volume 2023, Page(s) 125–130


    Abstract Text in electronic health records is organized into sections, and classifying those sections into section categories is useful for downstream tasks. In this work, we attempt to improve the transferability of section classification models by combining the dataset-specific knowledge in supervised learning models with the world knowledge inside large language models (LLMs). Surprisingly, we find that zero-shot LLMs out-perform supervised BERT-based models applied to out-of-domain data. We also find that their strengths are synergistic, so that a simple ensemble technique leads to additional performance gains.
    Language English
    Publication date 2023-09-29
    Country of publication United States
    Document type Journal Article
    ISSN 0736-587X
    Data source MEDical Literature Analysis and Retrieval System OnLINE

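    A minimal version of the ensemble idea in entry 6 is sketched below: average a supervised section classifier's probabilities with a one-hot vote from a zero-shot LLM. Both models are stand-in functions with canned outputs, and the weighting scheme is an assumption rather than the paper's method.

        # Hedged sketch of a simple ensemble over clinical note section labels.
        SECTIONS = ["Subjective", "Objective", "Assessment", "Plan"]

        def bert_probs(text: str) -> dict:
            # Placeholder for a fine-tuned BERT section classifier's softmax output.
            return {"Subjective": 0.40, "Objective": 0.35, "Assessment": 0.15, "Plan": 0.10}

        def llm_label(text: str) -> str:
            # Placeholder for a zero-shot LLM prompted to name the section.
            return "Objective"

        def ensemble(text: str, llm_weight: float = 0.5) -> str:
            probs = bert_probs(text)
            zero_shot = llm_label(text)
            scores = {
                s: (1 - llm_weight) * probs.get(s, 0.0) + llm_weight * (1.0 if s == zero_shot else 0.0)
                for s in SECTIONS
            }
            return max(scores, key=scores.get)

        print(ensemble("Vitals: BP 118/76, HR 72, afebrile."))  # -> "Objective"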

  7. Article: Improving Model Transferability for Clinical Note Section Classification Models Using Continued Pretraining.

    Zhou, Weipeng / Yetisgen, Meliha / Afshar, Majid / Gao, Yanjun / Savova, Guergana / Miller, Timothy A

    medRxiv : the preprint server for health sciences

    2023  


    Abstract Objective: The classification of clinical note sections is a critical step before doing more fine-grained natural language processing tasks such as social determinants of health extraction and temporal information extraction. Often, clinical note section classification models that achieve high accuracy for one institution experience a large drop of accuracy when transferred to another institution. The objective of this study is to develop methods that classify clinical note sections under the SOAP ("Subjective", "Objective", "Assessment" and "Plan") framework with improved transferability.
    Materials and methods: We trained the baseline models by fine-tuning BERT-based models, and enhanced their transferability with continued pretraining, including domain adaptive pretraining (DAPT) and task adaptive pretraining (TAPT). We added out-of-domain annotated samples during fine-tuning and observed model performance over varying numbers of annotated samples. Finally, we quantified the impact of continued pretraining as the equivalent number of in-domain annotated samples added.
    Results: We found continued pretraining improved models only when combined with in-domain annotated samples, improving the F1 score from 0.756 to 0.808, averaged across three datasets. This improvement was equivalent to adding 50.2 in-domain annotated samples.
    Discussion: Although considered a straightforward task when performing in-domain, section classification is still a considerably difficult task when performing cross-domain, even using highly sophisticated neural network-based methods.
    Conclusion: Continued pretraining improved model transferability for cross-domain clinical note section classification in the presence of a small amount of in-domain labeled samples.
    Language English
    Publication date 2023-04-24
    Country of publication United States
    Document type Preprint
    DOI 10.1101/2023.04.15.23288628
    Data source MEDical Literature Analysis and Retrieval System OnLINE


  8. Article ; Online: Improving model transferability for clinical note section classification models using continued pretraining.

    Zhou, Weipeng / Yetisgen, Meliha / Afshar, Majid / Gao, Yanjun / Savova, Guergana / Miller, Timothy A

    Journal of the American Medical Informatics Association : JAMIA

    2023  Volume 31, Issue 1, Page(s) 89–97


    Abstract Objective: The classification of clinical note sections is a critical step before doing more fine-grained natural language processing tasks such as social determinants of health extraction and temporal information extraction. Often, clinical note section classification models that achieve high accuracy for 1 institution experience a large drop of accuracy when transferred to another institution. The objective of this study is to develop methods that classify clinical note sections under the SOAP ("Subjective," "Objective," "Assessment," and "Plan") framework with improved transferability.
    Materials and methods: We trained the baseline models by fine-tuning BERT-based models, and enhanced their transferability with continued pretraining, including domain-adaptive pretraining and task-adaptive pretraining. We added in-domain annotated samples during fine-tuning and observed model performance over varying numbers of annotated samples. Finally, we quantified the impact of continued pretraining as the equivalent number of in-domain annotated samples added.
    Results: We found continued pretraining improved models only when combined with in-domain annotated samples, improving the F1 score from 0.756 to 0.808, averaged across 3 datasets. This improvement was equivalent to adding 35 in-domain annotated samples.
    Discussion: Although considered a straightforward task when performing in-domain, section classification is still a considerably difficult task when performing cross-domain, even using highly sophisticated neural network-based methods.
    Conclusion: Continued pretraining improved model transferability for cross-domain clinical note section classification in the presence of a small amount of in-domain labeled samples.
    MeSH term(s) Health Facilities ; Information Storage and Retrieval ; Natural Language Processing ; Neural Networks, Computer ; Sample Size
    Language English
    Publication date 2023-09-27
    Country of publication England
    Document type Journal Article ; Research Support, N.I.H., Extramural
    ZDB-ID 1205156-1
    ISSN 1067-5027
    ISSN (online) 1527-974X
    DOI 10.1093/jamia/ocad190
    Data source MEDical Literature Analysis and Retrieval System OnLINE

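    Entries 7 and 8 describe continued (domain-adaptive and task-adaptive) pretraining before fine-tuning. A minimal Hugging Face Transformers sketch of the domain-adaptive step is shown below: masked-language-model training on unlabeled in-domain notes. The corpus, base model, and hyperparameters are illustrative assumptions, not the study's configuration.

        import torch
        from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                                  DataCollatorForLanguageModeling, Trainer, TrainingArguments)

        model_name = "bert-base-uncased"  # placeholder base model
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForMaskedLM.from_pretrained(model_name)

        # Tiny stand-in for an unlabeled in-domain corpus of clinical note sections.
        notes = [
            "Subjective: patient reports worsening cough and fatigue.",
            "Plan: continue albuterol, follow up in two weeks.",
        ]

        class NoteDataset(torch.utils.data.Dataset):
            def __init__(self, texts):
                self.enc = tokenizer(texts, truncation=True, max_length=128)
            def __len__(self):
                return len(self.enc["input_ids"])
            def __getitem__(self, i):
                return {k: v[i] for k, v in self.enc.items()}

        # The collator randomly masks tokens so the model trains on the MLM objective.
        collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

        trainer = Trainer(
            model=model,
            args=TrainingArguments(output_dir="dapt-ckpt", num_train_epochs=1,
                                   per_device_train_batch_size=2),
            train_dataset=NoteDataset(notes),
            data_collator=collator,
        )
        trainer.train()
        # The resulting checkpoint would then be loaded as a sequence-classification model
        # and fine-tuned on the labeled section-classification data.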

  9. Book ; Online: Considerations for health care institutions training large language models on electronic health records

    Zhou, Weipeng / Bitterman, Danielle / Afshar, Majid / Miller, Timothy A.

    2023  


    Abstract Large language models (LLMs) like ChatGPT have excited scientists across fields; in medicine, one source of excitement is the potential applications of LLMs trained on electronic health record (EHR) data. But there are tough questions we must first answer if health care institutions are interested in having LLMs trained on their own data; should they train an LLM from scratch or fine-tune it from an open-source model? For healthcare institutions with a predefined budget, what are the biggest LLMs they can afford? In this study, we take steps towards answering these questions with an analysis on dataset sizes, model sizes, and costs for LLM training using EHR data. This analysis provides a framework for thinking about these questions in terms of data scale, compute scale, and training budgets.
    Keywords Computer Science - Computers and Society ; Computer Science - Artificial Intelligence ; Computer Science - Computation and Language
    Publication date 2023-08-23
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences Selection)

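    Entry 9 frames LLM training decisions in terms of data scale, compute scale, and budget. The sketch below shows the kind of back-of-the-envelope arithmetic involved, using the common 6 * parameters * tokens approximation for training FLOPs; the throughput and price figures are made-up round numbers, not figures from the paper.

        # Hedged budget sketch: relate model size, token count, and compute cost.
        def training_cost_usd(n_params, n_tokens, flops_per_gpu_hour, dollars_per_gpu_hour):
            flops = 6 * n_params * n_tokens          # common approximation for training FLOPs
            gpu_hours = flops / flops_per_gpu_hour
            return gpu_hours, gpu_hours * dollars_per_gpu_hour

        # Example: a 1B-parameter model on 10B tokens of EHR text, assuming an effective
        # 1e17 FLOPs per GPU-hour and $2 per GPU-hour (both illustrative assumptions).
        hours, cost = training_cost_usd(n_params=1e9, n_tokens=10e9,
                                        flops_per_gpu_hour=1e17, dollars_per_gpu_hour=2.0)
        print(f"~{hours:,.0f} GPU-hours, ~${cost:,.0f}")
        # -> ~600 GPU-hours, ~$1,200 under these assumptions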

  10. Article ; Online: Experiences implementing scalable, containerized, cloud-based NLP for extracting biobank participant phenotypes at scale.

    Miller, Timothy A / Avillach, Paul / Mandl, Kenneth D

    JAMIA open

    2020  Volume 3, Issue 2, Page(s) 185–189


    Abstract Objective: To develop scalable natural language processing (NLP) infrastructure for processing the free text in electronic health records (EHRs).
    Materials and methods: We extend the open-source Apache cTAKES NLP software with several standard technologies for scalability. We remove processing bottlenecks by monitoring component queue size. We process EHR free text for patients in the PrecisionLink Biobank at Boston Children's Hospital. The extracted concepts are made searchable via a web-based portal.
    Results: We processed over 1.2 million notes for over 8000 patients, extracting 154 million concepts. Our largest tested configuration processes over 1 million notes per day.
    Discussion: The unique information represented by extracted NLP concepts has great potential to provide a more complete picture of patient status.
    Conclusion: NLP of large EHR document collections can be done efficiently, in service of high-throughput phenotyping.
    Language English
    Publication date 2020-05-22
    Country of publication United States
    Document type Journal Article
    ISSN 2574-2531
    ISSN (online) 2574-2531
    DOI 10.1093/jamiaopen/ooaa016
    Data source MEDical Literature Analysis and Retrieval System OnLINE

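    Entry 10 removes processing bottlenecks by monitoring component queue sizes. The sketch below illustrates that idea in plain Python with worker threads and bounded queues; it is not the actual cTAKES/UIMA deployment, and the note data and timings are synthetic.

        import queue, threading, time

        notes_in, concepts_out = queue.Queue(maxsize=1000), queue.Queue(maxsize=1000)

        def extract_concepts(note: str) -> list:
            time.sleep(0.01)                 # stand-in for the slow NLP step
            return [w for w in note.split() if w.istitle()]

        def worker():
            while True:
                note = notes_in.get()
                if note is None:
                    break
                concepts_out.put(extract_concepts(note))
                notes_in.task_done()

        workers = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
        for w in workers:
            w.start()

        for i in range(200):
            notes_in.put(f"Note {i}: patient reports Cough and Fever.")

        # Monitoring loop: a persistently full input queue means the NLP stage needs
        # more workers; a growing output queue means the downstream consumer lags.
        for _ in range(3):
            time.sleep(0.2)
            print(f"notes_in={notes_in.qsize()} concepts_out={concepts_out.qsize()}")

        notes_in.join()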
