LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 142

  1. Article ; Online: Presentation matters for AI-generated clinical advice.

    Ghassemi, Marzyeh

    Nature human behaviour

    2023  Volume 7, Issue 11, Page(s) 1833–1835

    Language English
    Publishing date 2023-11-20
    Publishing country England
    Document type Journal Article
    ISSN 2397-3374
    ISSN (online) 2397-3374
    DOI 10.1038/s41562-023-01721-7
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Article ; Online: Machine learning and health need better values.

    Ghassemi, Marzyeh / Mohamed, Shakir

    NPJ digital medicine

    2022  Volume 5, Issue 1, Page(s) 51

    Language English
    Publishing date 2022-04-22
    Publishing country England
    Document type Journal Article
    ISSN 2398-6352
    ISSN (online) 2398-6352
    DOI 10.1038/s41746-022-00595-9
    Database MEDical Literature Analysis and Retrieval System OnLINE

  3. Article ; Online: In medicine, how do we machine learn anything real?

    Ghassemi, Marzyeh / Nsoesie, Elaine Okanyene

    Patterns (New York, N.Y.)

    2022  Volume 3, Issue 1, Page(s) 100392

    Abstract Machine learning has traditionally operated in a space where data and labels are assumed to be anchored in objective truths. Unfortunately, much evidence suggests that the "embodied" data acquired from and about human bodies does not create systems that function as desired. The complexity of health care data can be linked to a long history of discrimination, and research in this space forbids naive applications. To improve health care, machine learning models must strive to recognize, reduce, or remove such biases from the start. We aim to enumerate many examples to demonstrate the depth and breadth of biases that exist and that have been present throughout the history of medicine. We hope that outrage over algorithms automating biases will lead to changes in the underlying practices that generated such data, leading to reduced health disparities.
    Language English
    Publishing date 2022-01-14
    Publishing country United States
    Document type Journal Article ; Review
    ISSN 2666-3899
    ISSN (online) 2666-3899
    DOI 10.1016/j.patter.2021.100392
    Database MEDical Literature Analysis and Retrieval System OnLINE

  4. Article ; Online: Considering Biased Data as Informative Artifacts in AI-Assisted Health Care.

    Ferryman, Kadija / Mackintosh, Maxine / Ghassemi, Marzyeh

    The New England journal of medicine

    2023  Volume 389, Issue 9, Page(s) 833–838

    MeSH term(s) Humans ; Artifacts ; Artificial Intelligence ; Delivery of Health Care/statistics & numerical data ; Bias ; Data Interpretation, Statistical
    Language English
    Publishing date 2023-08-28
    Publishing country United States
    Document type Journal Article ; Review
    ZDB-ID 207154-x
    ISSN 1533-4406 ; 0028-4793
    ISSN (online) 1533-4406
    ISSN 0028-4793
    DOI 10.1056/NEJMra2214964
    Database MEDical Literature Analysis and Retrieval System OnLINE

  5. Article ; Online: Informative Artifacts in AI-Assisted Care. Reply.

    Ferryman, Kadija / Mackintosh, Maxine / Ghassemi, Marzyeh

    The New England journal of medicine

    2023  Volume 389, Issue 22, Page(s) 2114–2115

    MeSH term(s) Humans ; Artifacts ; Artificial Intelligence
    Language English
    Publishing date 2023-12-04
    Publishing country United States
    Document type Letter ; Comment
    ZDB-ID 207154-x
    ISSN 1533-4406 ; 0028-4793
    ISSN (online) 1533-4406
    ISSN 0028-4793
    DOI 10.1056/NEJMc2311525
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Book ; Online: Risk Sensitive Dead-end Identification in Safety-Critical Offline Reinforcement Learning

    Killian, Taylor W. / Parbhoo, Sonali / Ghassemi, Marzyeh

    2023  

    Abstract In safety-critical decision-making scenarios being able to identify worst-case outcomes, or dead-ends is crucial in order to develop safe and reliable policies in practice. These situations are typically rife with uncertainty due to unknown or stochastic characteristics of the environment as well as limited offline training data. As a result, the value of a decision at any time point should be based on the distribution of its anticipated effects. We propose a framework to identify worst-case decision points, by explicitly estimating distributions of the expected return of a decision. These estimates enable earlier indication of dead-ends in a manner that is tunable based on the risk tolerance of the designed task. We demonstrate the utility of Distributional Dead-end Discovery (DistDeD) in a toy domain as well as when assessing the risk of severely ill patients in the intensive care unit reaching a point where death is unavoidable. We find that DistDeD significantly improves over prior discovery approaches, providing indications of the risk 10 hours earlier on average as well as increasing detection by 20%.

    Comment: To appear in TMLR (01/2023). The submission and reviews can be viewed at: https://openreview.net/forum?id=oKlEOT83gI
    Keywords Computer Science - Machine Learning ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2023-01-13
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
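
    A minimal sketch of the risk-sensitive dead-end test this record describes, under assumptions the abstract does not spell out: per-action return distributions are represented by samples, risk is summarized by CVaR, and the names (cvar, is_dead_end) and thresholds (alpha, tau) are illustrative, not the paper's DistDeD implementation.

      import numpy as np

      def cvar(returns, alpha=0.1):
          # Conditional value at risk: mean of the worst alpha-fraction of returns.
          cutoff = np.quantile(returns, alpha)
          return returns[returns <= cutoff].mean()

      def is_dead_end(return_samples_per_action, alpha=0.1, tau=-0.8):
          # Flag a state when even the best action's risk-adjusted value
          # falls below the tolerance tau; smaller alpha is more pessimistic.
          risk_values = [cvar(r, alpha) for r in return_samples_per_action]
          return max(risk_values) < tau

      # Toy usage: two actions whose sampled returns are both pessimistic.
      rng = np.random.default_rng(0)
      samples = [rng.normal(-0.9, 0.05, 500), rng.normal(-1.0, 0.1, 500)]
      print(is_dead_end(samples))  # True -> treat this state as a likely dead-end

    Tuning alpha and tau trades off how early and how confidently dead-ends are flagged, which mirrors the abstract's point that detection is "tunable based on the risk tolerance of the designed task".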

  7. Book ; Online: Change is Hard: A Closer Look at Subpopulation Shift

    Yang, Yuzhe / Zhang, Haoran / Katabi, Dina / Ghassemi, Marzyeh

    2023  

    Abstract Machine learning models often perform poorly on subgroups that are underrepresented in the training data. Yet, little is understood on the variation in mechanisms that cause subpopulation shifts, and how algorithms generalize across such diverse shifts at scale. In this work, we provide a fine-grained analysis of subpopulation shift. We first propose a unified framework that dissects and explains common shifts in subgroups. We then establish a comprehensive benchmark of 20 state-of-the-art algorithms evaluated on 12 real-world datasets in vision, language, and healthcare domains. With results obtained from training over 10,000 models, we reveal intriguing observations for future progress in this space. First, existing algorithms only improve subgroup robustness over certain types of shifts but not others. Moreover, while current algorithms rely on group-annotated validation data for model selection, we find that a simple selection criterion based on worst-class accuracy is surprisingly effective even without any group information. Finally, unlike existing works that solely aim to improve worst-group accuracy (WGA), we demonstrate the fundamental tradeoff between WGA and other important metrics, highlighting the need to carefully choose testing metrics. Code and data are available at: https://github.com/YyzHarry/SubpopBench.

    Comment: ICML 2023
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2023-02-23
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
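
    The abstract's model-selection finding lends itself to a short illustration. This is a hedged sketch, not the SubpopBench code: worst_class_accuracy and select_model are assumed names, and candidate models are assumed to expose a scikit-learn-style predict method.

      import numpy as np

      def worst_class_accuracy(y_true, y_pred):
          # Minimum per-class accuracy over the classes present in y_true;
          # note this needs only class labels, not group annotations.
          return min((y_pred[y_true == c] == c).mean() for c in np.unique(y_true))

      def select_model(candidates, X_val, y_val):
          # Keep the candidate whose weakest class on validation data is strongest.
          return max(candidates,
                     key=lambda m: worst_class_accuracy(y_val, m.predict(X_val)))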

  8. Book ; Online: Deep Metric Learning for the Hemodynamics Inference with Electrocardiogram Signals

    Jeong, Hyewon / Stultz, Collin M. / Ghassemi, Marzyeh

    2023  

    Abstract Heart failure is a debilitating condition that affects millions of people worldwide and has a significant impact on their quality of life and mortality rates. An objective assessment of cardiac pressures remains an important method for the diagnosis and treatment prognostication for patients with heart failure. Although cardiac catheterization is the gold standard for estimating central hemodynamic pressures, it is an invasive procedure that carries inherent risks, making it a potentially dangerous procedure for some patients. Approaches that leverage non-invasive signals - such as electrocardiogram (ECG) - have the promise to make the routine estimation of cardiac pressures feasible in both inpatient and outpatient settings. Prior models trained to estimate intracardiac pressures (e.g., mean pulmonary capillary wedge pressure (mPCWP)) in a supervised fashion have shown good discriminatory ability but have been limited to the labeled dataset from the heart failure cohort. To address this issue and build a robust representation, we apply deep metric learning (DML) and propose a novel self-supervised DML with distance-based mining that improves the performance of a model with limited labels. We use a dataset that contains over 5.4 million ECGs without concomitant central pressure labels to pre-train a self-supervised DML model which showed improved classification of elevated mPCWP compared to self-supervised contrastive baselines. Additionally, the supervised DML model that uses ECGs with access to 8,172 mPCWP labels demonstrated significantly better performance on the mPCWP regression task compared to the supervised baseline. Moreover, our data suggest that DML yields models that are performant across patient subgroups, even when some patient subgroups are under-represented in the dataset. Our code is available at https://github.com/mandiehyewon/ssldml
    Keywords Computer Science - Machine Learning ; Electrical Engineering and Systems Science - Signal Processing ; Quantitative Biology - Quantitative Methods
    Subject code 006
    Publishing date 2023-08-08
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
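
    A minimal sketch of deep metric learning with distance-based mining, the general technique this record names; the function below is an assumed illustration (hardest-pair triplet loss over a batch of embeddings), not the authors' released code at the linked repository.

      import torch
      import torch.nn.functional as F

      def triplet_loss_hard_mining(embeddings, labels, margin=0.2):
          # For each anchor: farthest same-label positive and closest
          # different-label negative, pushed through a triplet margin loss.
          emb = F.normalize(embeddings, dim=1)
          dists = torch.cdist(emb, emb)
          same = labels.unsqueeze(0) == labels.unsqueeze(1)
          diag = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
          losses = []
          for i in range(len(labels)):
              pos, neg = same[i] & ~diag[i], ~same[i]
              if pos.any() and neg.any():
                  losses.append(F.relu(dists[i][pos].max() - dists[i][neg].min() + margin))
          if not losses:
              return emb.new_zeros(())  # no valid anchor-positive-negative triplets
          return torch.stack(losses).mean()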

  9. Article ; Online: Mitigating the impact of biased artificial intelligence in emergency decision-making.

    Adam, Hammaad / Balagopalan, Aparna / Alsentzer, Emily / Christia, Fotini / Ghassemi, Marzyeh

    Communications medicine

    2022  Volume 2, Issue 1, Page(s) 149

    Abstract Background: Prior research has shown that artificial intelligence (AI) systems often encode biases against minority subgroups. However, little work has focused on ways to mitigate the harm discriminatory algorithms can cause in high-stakes settings such as medicine.
    Methods: In this study, we experimentally evaluated the impact biased AI recommendations have on emergency decisions, where participants respond to mental health crises by calling for either medical or police assistance. We recruited 438 clinicians and 516 non-experts to participate in our web-based experiment. We evaluated participant decision-making with and without advice from biased and unbiased AI systems. We also varied the style of the AI advice, framing it either as prescriptive recommendations or descriptive flags.
    Results: Participant decisions are unbiased without AI advice. However, both clinicians and non-experts are influenced by prescriptive recommendations from a biased algorithm, choosing police help more often in emergencies involving African-American or Muslim men. Crucially, using descriptive flags rather than prescriptive recommendations allows respondents to retain their original, unbiased decision-making.
    Conclusions: Our work demonstrates the practical danger of using biased models in health contexts, and suggests that appropriately framing decision support can mitigate the effects of AI bias. These findings must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may be used to inform important decisions.
    Language English
    Publishing date 2022-11-21
    Publishing country England
    Document type Journal Article
    ISSN 2730-664X
    ISSN (online) 2730-664X
    DOI 10.1038/s43856-022-00214-4
    Database MEDical Literature Analysis and Retrieval System OnLINE

  10. Article ; Online: The false hope of current approaches to explainable artificial intelligence in health care.

    Ghassemi, Marzyeh / Oakden-Rayner, Luke / Beam, Andrew L

    The Lancet. Digital health

    2021  Volume 3, Issue 11, Page(s) e745–e750

    Abstract The black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine. It has been argued that explainable AI will engender trust with the health-care workforce, provide transparency into the AI decision making process, and potentially mitigate various kinds of bias. In this Viewpoint, we argue that this argument represents a false hope for explainable AI and that current explainability methods are unlikely to achieve these goals for patient-level decision support. We provide an overview of current explainability techniques and highlight how various failure cases can cause problems for decision making for individual patients. In the absence of suitable explainability methods, we advocate for rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models.
    MeSH term(s) Artificial Intelligence ; Bias ; Communication ; Comprehension ; Decision Making ; Delivery of Health Care/methods ; Diagnostic Imaging ; Dissent and Disputes ; Health Personnel ; Humans ; Models, Biological ; Trust
    Language English
    Publishing date 2021-10-28
    Publishing country England
    Document type Journal Article ; Research Support, N.I.H., Extramural ; Review
    ISSN 2589-7500
    ISSN (online) 2589-7500
    DOI 10.1016/S2589-7500(21)00208-9
    Database MEDical Literature Analysis and Retrieval System OnLINE
