LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 10 of 147

  1. Article ; Online: Virtual-first care: Opportunities and challenges for the future of diagnostic reasoning.

    Lawrence, Katharine / Mann, Devin

    The clinical teacher

    2024, Page(s) e13720

    Language English
    Publishing date 2024-01-14
    Publishing country England
    Document type Journal Article
    ZDB-ID 2151518-9
    ISSN (online) 1743-498X
    ISSN 1743-4971
    DOI 10.1111/tct.13720
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  2. Article ; Online: Reimagining Connected Care in the Era of Digital Medicine.

    Mann, Devin M / Lawrence, Katharine

    JMIR mHealth and uHealth

    2022, Volume 10, Issue 4, Page(s) e34483

    Abstract The COVID-19 pandemic accelerated the adoption of remote patient monitoring technology, which offers exciting opportunities for expanded connected care at a distance. However, while the mode of clinicians' interactions with patients and their health data has transformed, the larger framework of how we deliver care is still driven by a model of episodic care that does not facilitate this new frontier. Fully realizing a transformation to a system of continuous connected care augmented by remote monitoring technology will require a shift in clinicians' and health systems' approach to care delivery technology and its associated data volume and complexity. In this article, we present a solution that organizes and optimizes the interaction of automated technologies with human oversight, allowing for the maximal use of data-rich tools while preserving the pieces of medical care considered uniquely human. We review implications of this "augmented continuous connected care" model of remote patient monitoring for clinical practice and offer human-centered design-informed next steps to encourage innovation around these important issues.
    MeSH term(s) COVID-19 ; Delivery of Health Care ; Government Programs ; Humans ; Pandemics ; Telemedicine
    Language English
    Publishing date 2022-04-15
    Publishing country Canada
    Document type Journal Article
    ZDB-ID 2719220-9
    ISSN (online) 2291-5222
    DOI 10.2196/34483
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  3. Article: Putting ChatGPT's Medical Advice to the (Turing) Test: Survey Study.

    Nov, Oded / Singh, Nina / Mann, Devin

    JMIR medical education

    2023, Volume 9, Page(s) e46939

    Abstract Background: Chatbots are being piloted to draft responses to patient questions, but patients' ability to distinguish between provider and chatbot responses and patients' trust in chatbots' functions are not well established.
    Objective: This study aimed to assess the feasibility of using ChatGPT (Chat Generative Pre-trained Transformer) or a similar artificial intelligence-based chatbot for patient-provider communication.
    Methods: A survey study was conducted in January 2023. Ten representative, nonadministrative patient-provider interactions were extracted from the electronic health record. Patients' questions were entered into ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider's response. In the survey, each patient question was followed by a provider- or ChatGPT-generated response. Participants were informed that 5 responses were provider generated and 5 were chatbot generated. Participants were asked-and incentivized financially-to correctly identify the response source. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a Likert scale from 1-5.
    Results: A US-representative sample of 430 study participants aged 18 and older was recruited on Prolific, a crowdsourcing platform for academic studies. In all, 426 participants filled out the full survey. After removing participants who spent less than 3 minutes on the survey, 392 respondents remained. Overall, 53.3% (209/392) of respondents analyzed were women, and the average age was 47.1 (range 18-91) years. The correct classification of responses ranged from 49% (192/392) to 85.7% (336/392) across questions. On average, chatbot responses were identified correctly in 65.5% (1284/1960) of cases, and human provider responses in 65.1% (1276/1960) of cases. Responses regarding patients' trust in chatbots' functions were weakly positive (mean Likert score 3.4 out of 5), with lower trust as the health-related complexity of the task in the questions increased.
    Conclusions: ChatGPT responses to patient questions were weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions. It is important to continue studying patient-chatbot interaction as chatbots move from administrative to more clinical roles in health care.
    Language English
    Publishing date 2023-07-10
    Publishing country Canada
    Document type Journal Article
    ISSN 2369-3762
    DOI 10.2196/46939
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  4. Article ; Online: Quantifying the impact of telemedicine and patient medical advice request messages on physicians' work-outside-work.

    Mandal, Soumik / Wiesenfeld, Batia M / Mann, Devin M / Szerencsy, Adam C / Iturrate, Eduardo / Nov, Oded

    NPJ digital medicine

    2024, Volume 7, Issue 1, Page(s) 35

    Abstract The COVID-19 pandemic has boosted digital health utilization, raising concerns about increased physicians' after-hours clinical work ("work-outside-work"). The surge in patients' digital messages and the additional time telemedicine providers spend on work-outside-work underscore the need to evaluate the connection between digital health utilization and physicians' after-hours commitments. We examined the impact on physicians' workload of two types of digital demands: patients' messages requesting medical advice (PMARs) sent to physicians' inbox (inbasket), and telemedicine. Our study included 1716 ambulatory-care physicians in New York City regularly practicing between November 2022 and March 2023. Regression analyses assessed primary and interaction effects of PMARs and telemedicine on work-outside-work. The study revealed a significant effect of PMARs on physicians' work-outside-work and showed that this relationship is moderated by physicians' specialties: non-primary care physicians (specialists) experienced a more pronounced effect than their primary care peers. Analysis of the telemedicine load revealed that primary care physicians received fewer PMARs and spent less time in work-outside-work as telemedicine increased, whereas specialists faced more PMARs and did more work-outside-work as telemedicine visits increased, which could be due to differences in patient panels. Reducing PMAR volumes and adopting efficient inbasket management strategies are needed to reduce physicians' work-outside-work. Policymakers need to be cognizant of potential disruptions to physicians' carefully balanced workloads caused by digital health services.
    Language English
    Publishing date 2024-02-14
    Publishing country England
    Document type Journal Article
    ISSN (online) 2398-6352
    DOI 10.1038/s41746-024-01001-2
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  5. Article ; Online: Mixed methods assessment of the influence of demographics on medical advice of ChatGPT.

    Andreadis, Katerina / Newman, Devon R / Twan, Chelsea / Shunk, Amelia / Mann, Devin M / Stevens, Elizabeth R

    Journal of the American Medical Informatics Association : JAMIA

    2024  

    Abstract Objectives: To evaluate demographic biases in diagnostic accuracy and health advice between generative artificial intelligence (AI) (ChatGPT GPT-4) and traditional symptom checkers like WebMD.
    Materials and methods: Combination symptom and demographic vignettes were developed for the 27 most common symptom complaints. Standardized prompts, written from a patient perspective and with varying demographic permutations of age, sex, and race/ethnicity, were entered into ChatGPT (GPT-4) between July and August 2023. In total, 3 runs of 540 ChatGPT prompts were compared to the corresponding WebMD Symptom Checker output using a mixed-methods approach. In addition to diagnostic correctness, the associated text generated by ChatGPT was analyzed for readability (using Flesch-Kincaid Grade Level) and qualitative aspects like disclaimers and demographic tailoring.
    Results: ChatGPT matched WebMD in 91% of diagnoses, with a 24% top diagnosis match rate. Diagnostic accuracy was not significantly different across demographic groups, including age, race/ethnicity, and sex. ChatGPT's urgent care recommendations and demographic tailoring were presented significantly more to 75-year-olds versus 25-year-olds (P < .01) but were not statistically different among race/ethnicity and sex groups. The GPT text was suitable for college students, with no significant demographic variability.
    Discussion: The use of non-health-tailored generative AI, like ChatGPT, for simple symptom-checking functions provides comparable diagnostic accuracy to commercially available symptom checkers and does not demonstrate significant demographic bias in this setting. The text accompanying differential diagnoses, however, suggests demographic tailoring that could potentially introduce bias.
    Conclusion: These results highlight the need for continued rigorous evaluation of AI-driven medical platforms, focusing on demographic biases to ensure equitable care.
    Language English
    Publishing date 2024-04-29
    Publishing country England
    Document type Journal Article
    ZDB-ID 1205156-1
    ISSN (online) 1527-974X
    ISSN 1067-5027
    DOI 10.1093/jamia/ocae086
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  6. Article ; Online: Centering health equity in large language model deployment.

    Singh, Nina / Lawrence, Katharine / Richardson, Safiya / Mann, Devin M

    PLOS digital health

    2023, Volume 2, Issue 10, Page(s) e0000367

    Language English
    Publishing date 2023-10-24
    Publishing country United States
    Document type Journal Article
    ISSN (online) 2767-3170
    DOI 10.1371/journal.pdig.0000367
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  7. Article ; Online: Digital Minimalism - An Rx for Clinician Burnout.

    Singh, Nina / Lawrence, Katharine / Sinsky, Christine / Mann, Devin M

    The New England journal of medicine

    2023, Volume 388, Issue 13, Page(s) 1158–1159

    MeSH term(s) Humans ; Burnout, Professional ; Burnout, Psychological ; Digital Technology
    Language English
    Publishing date 2023-03-25
    Publishing country United States
    Document type Journal Article
    ZDB-ID 207154-x
    ISSN (online) 1533-4406
    ISSN 0028-4793
    DOI 10.1056/NEJMp2215297
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  8. Book ; Online: Putting ChatGPT's Medical Advice to the (Turing) Test

    Nov, Oded / Singh, Nina / Mann, Devin

    2023  

    Abstract Objective: Assess the feasibility of using ChatGPT or a similar AI-based chatbot for patient-provider communication. Participants: A US-representative sample of 430 study participants aged 18 and older. Overall, 53.2% of respondents analyzed were women; their average age was 47.1 years. Exposure: Ten representative, nonadministrative patient-provider interactions were extracted from the EHR. Patients' questions were placed in ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider's response. In the survey, each patient's question was followed by a provider- or ChatGPT-generated response. Participants were informed that five responses were provider-generated and five were chatbot-generated. Participants were asked, and incentivized financially, to correctly identify the response source. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a Likert scale of 1-5. Results: The correct classification of responses ranged from 49.0% to 85.7% across questions. On average, chatbot responses were correctly identified 65.5% of the time, and provider responses were correctly distinguished 65.1% of the time. Responses regarding patients' trust in chatbots' functions were weakly positive (mean Likert score: 3.4), with lower trust as the health-related complexity of the task in the questions increased. Conclusions: ChatGPT responses to patient questions were weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions.
    Keywords Computer Science - Human-Computer Interaction
    Subject code 150
    Publishing date 2023-01-24
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  9. Article ; Online: Leveraging Generative AI Tools to Support the Development of Digital Solutions in Health Care Research: Case Study.

    Rodriguez, Danissa V / Lawrence, Katharine / Gonzalez, Javier / Brandfield-Harvey, Beatrix / Xu, Lynn / Tasneem, Sumaiya / Levine, Defne L / Mann, Devin

    JMIR human factors

    2024, Volume 11, Page(s) e52885

    Abstract Background: Generative artificial intelligence has the potential to revolutionize health technology product development by improving coding quality, efficiency, documentation, quality assessment and review, and troubleshooting.
    Objective: This paper explores the application of a commercially available generative artificial intelligence tool (ChatGPT) to the development of a digital health behavior change intervention designed to support patient engagement in a commercial digital diabetes prevention program.
    Methods: We examined the capacity, advantages, and limitations of ChatGPT to support digital product idea conceptualization, intervention content development, and the software engineering process, including software requirement generation, software design, and code production. In total, 11 evaluators, each with at least 10 years of experience in fields of study ranging from medicine and implementation science to computer science, participated in the output review process (ChatGPT vs human-generated output). All had familiarity or prior exposure to the original personalized automatic messaging system intervention. The evaluators rated the ChatGPT-produced outputs in terms of understandability, usability, novelty, relevance, completeness, and efficiency.
    Results: Most metrics received positive scores. We identified that ChatGPT can (1) support developers to achieve high-quality products faster and (2) facilitate nontechnical communication and system understanding between technical and nontechnical team members around the development goal of rapid and easy-to-build computational solutions for medical technologies.
    Conclusions: ChatGPT can serve as a usable facilitator for researchers engaging in the software development life cycle, from product conceptualization to feature identification and user story development to code generation.
    Trial registration: ClinicalTrials.gov NCT04049500; https://clinicaltrials.gov/ct2/show/NCT04049500.
    MeSH term(s) Humans ; Artificial Intelligence ; Benchmarking ; Biomedical Technology ; Health Services Research ; Software
    Language English
    Publishing date 2024-03-06
    Publishing country Canada
    Document type Clinical Study ; Journal Article
    ISSN (online) 2292-9495
    DOI 10.2196/52885
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)


  10. Article ; Online: A framework for digital health equity.

    Richardson, Safiya / Lawrence, Katharine / Schoenthaler, Antoinette M / Mann, Devin

    NPJ digital medicine

    2022, Volume 5, Issue 1, Page(s) 119

    Abstract We present a comprehensive Framework for Digital Health Equity, detailing key digital determinants of health (DDoH), to support the work of digital health tool creators in industry, health systems operations, and academia. The rapid digitization of healthcare may widen health disparities if solutions are not developed with these determinants in mind. Our framework builds on the leading health disparities framework, incorporating a digital environment domain. We examine DDoHs at the individual, interpersonal, community, and societal levels, discuss the importance of a root-cause, multi-level approach, and offer a pragmatic case study that applies our framework.
    Language English
    Publishing date 2022-08-18
    Publishing country England
    Document type Journal Article ; Review
    ISSN (online) 2398-6352
    DOI 10.1038/s41746-022-00663-0
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

