LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–4 of 4

  1. Article ; Online: Are physicians requesting a second opinion really engaging in a reason-giving dialectic? Normative questions on the standards for second opinions and AI.

    Lang, Benjamin H

    Journal of medical ethics

    2022  Volume 48, Issue 4, Page(s) 234–235

    MeSH term(s) Artificial Intelligence ; Humans ; Physicians ; Referral and Consultation
    Language English
    Publishing date 2022-03-23
    Publishing country England
    Document type Journal Article
    ZDB-ID 194927-5
    ISSN (online) 1473-4257
    ISSN (print) 0306-6800
    DOI 10.1136/medethics-2022-108246
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

  2. Article ; Online: Therapeutic Artificial Intelligence: Does Agential Status Matter?

    Hurley, Meghan E / Lang, Benjamin H / Smith, Jared N

    The American journal of bioethics : AJOB

    2023  Volume 23, Issue 5, Page(s) 33–35

    MeSH term(s) Humans ; Artificial Intelligence ; Psychotherapy
    Language English
    Publishing date 2023-05-02
    Publishing country United States
    Document type Journal Article ; Comment
    ZDB-ID 2060433-6
    ISSN (online) 1536-0075
    ISSN (print) 1526-5161
    DOI 10.1080/15265161.2023.2191037
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

  3. Article ; Online: Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution.

    Lang, Benjamin H / Nyholm, Sven / Blumenthal-Barby, Jennifer

    Digital society : ethics, socio-legal and governance of digital technology

    2023  Volume 2, Issue 3, Page(s) 52

    Abstract: As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions over how responsibility (be it moral, legal, or causal) can be understood for an AI's actions or influence on an outcome. So-called "responsibility gaps" occur whenever there is an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits an error. Healthcare administration is an industry ripe for responsibility gaps produced by these kinds of AI. The moral stakes of healthcare are often life and death, and the demand for reducing clinical uncertainty while standardizing care incentivizes the development and integration of AI diagnosticians and prognosticators. In this paper, we argue that (1) responsibility gaps …
    Language English
    Publishing date 2023-11-16
    Publishing country Netherlands
    Document type Journal Article
    ISSN (online) 2731-4669
    DOI 10.1007/s44206-023-00073-z
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

  4. Article ; Online: Trust criteria for artificial intelligence in health: normative and epistemic considerations.

    Kostick-Quenet, Kristin / Lang, Benjamin H / Smith, Jared / Hurley, Meghan / Blumenthal-Barby, Jennifer

    Journal of medical ethics

    2023  

    Abstract: Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high-stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool's computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can influence over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is thus important to better understand how variability in trust criteria across stakeholders, settings, tools and use cases may influence approaches to using AI/ML tools in real settings. As part of a 5-year, multi-institutional Agency for Healthcare Research and Quality-funded study, we identify trust criteria for a survival prediction algorithm intended to support clinical decision-making for left ventricular assist device therapy, using semistructured interviews (n=40) with patients and physicians, analysed via thematic analysis. Findings suggest that physicians and patients share similar empirical considerations for trust, which were primarily …
    Language English
    Publishing date 2023-11-18
    Publishing country England
    Document type Journal Article
    ZDB-ID 194927-5
    ISSN (online) 1473-4257
    ISSN (print) 0306-6800
    DOI 10.1136/jme-2023-109338
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
