LIVIVO - The Search Portal for Life Sciences


Your recent searches

  1. AU="Rao, Arya"
  2. AU="Wong, Gary K W"

Search results

Results 1-10 of 12


  1. Article ; Online: Reply.

    Rao, Arya / Dreyer, Keith J / Succi, Marc D

    Journal of the American College of Radiology : JACR

    2023  Volume 21, Issue 2, Page(s) 225–226

    Language English
    Publication date 2023-09-01
    Country of publication United States
    Document type Letter
    ZDB-ID 2274861-1
    ISSN (online) 1558-349X
    ISSN 1546-1440
    DOI 10.1016/j.jacr.2023.08.026
    Data source MEDLINE (MEDical Literature Analysis and Retrieval System OnLINE)

  2. Article: Evaluating ChatGPT as an Adjunct for Radiologic Decision-Making.

    Rao, Arya / Kim, John / Kamineni, Meghana / Pang, Michael / Lie, Winston / Succi, Marc D

    medRxiv : the preprint server for health sciences

    2023  

    Abstract Background: ChatGPT, a popular new large language model (LLM) built by OpenAI, has shown impressive performance in a number of specialized applications. Despite the rising popularity and performance of AI, studies evaluating the use of LLMs for clinical decision support are lacking.
    Purpose: To evaluate ChatGPT's capacity for clinical decision support in radiology via the identification of appropriate imaging services for two important clinical presentations: breast cancer screening and breast pain.
    Materials and methods: We compared ChatGPT's responses to the American College of Radiology (ACR) Appropriateness Criteria for breast pain and breast cancer screening. Our prompt formats included an open-ended (OE) format, where ChatGPT was asked to provide the single most appropriate imaging procedure, and a select all that apply (SATA) format, where ChatGPT was given a list of imaging modalities to assess. Scoring criteria evaluated whether proposed imaging modalities were in accordance with ACR guidelines.
    Results: ChatGPT achieved an average OE score of 1.83 (out of 2) and a SATA average percentage correct of 88.9% for breast cancer screening prompts, and an average OE score of 1.125 (out of 2) and a SATA average percentage correct of 58.3% for breast pain prompts.
    Conclusion: Our results demonstrate the feasibility of using ChatGPT for radiologic decision making, with the potential to improve clinical workflow and responsible use of radiology services.
    Language English
    Publication date 2023-02-07
    Country of publication United States
    Document type Preprint
    DOI 10.1101/2023.02.02.23285399
    Data source MEDLINE

    Zusatzmaterialien

    Kategorien

  3. Article ; Online: Empathy and Equity: Key Considerations for Large Language Model Adoption in Health Care.

    Koranteng, Erica / Rao, Arya / Flores, Efren / Lev, Michael / Landman, Adam / Dreyer, Keith / Succi, Marc

    JMIR medical education

    2023  Volume 9, Page(s) e51199

    Abstract The growing presence of large language models (LLMs) in health care applications holds significant promise for innovative advancements in patient care. However, concerns about ethical implications and potential biases have been raised by various stakeholders. Here, we evaluate the ethics of LLMs in medicine along 2 key axes: empathy and equity. We outline the importance of these factors in novel models of care and develop frameworks for addressing these alongside LLM deployment.
    MeSH term(s) Humans ; Empathy ; Health Facilities ; Language ; Medicine ; Delivery of Health Care
    Language English
    Publication date 2023-12-28
    Country of publication Canada
    Document type Journal Article
    ISSN (online) 2369-3762
    DOI 10.2196/51199
    Data source MEDLINE

  4. Article ; Online: Proactive Polypharmacy Management Using Large Language Models: Opportunities to Enhance Geriatric Care.

    Rao, Arya / Kim, John / Lie, Winston / Pang, Michael / Fuh, Lanting / Dreyer, Keith J / Succi, Marc D

    Journal of medical systems

    2024  Volume 48, Issue 1, Page(s) 41

    Abstract Polypharmacy remains an important challenge for patients with extensive medical complexity. Given the primary care shortage and the growing aging population, effective polypharmacy management is crucial to manage the increasing burden of care. The capacity of large language model (LLM)-based artificial intelligence to aid in polypharmacy management has yet to be evaluated. Here, we evaluate ChatGPT's performance in polypharmacy management via its deprescribing decisions in standardized clinical vignettes. We inputted several clinical vignettes originally from a study of general practitioners' deprescribing decisions into ChatGPT 3.5, a publicly available LLM, and evaluated its capacity for yes/no binary deprescribing decisions as well as list-based prompts in which the model was prompted to choose which of several medications to deprescribe. We recorded ChatGPT responses to yes/no binary deprescribing prompts and the number and types of medications deprescribed. In yes/no binary deprescribing decisions, ChatGPT universally recommended deprescribing medications regardless of ADL (activities of daily living) status in patients with no underlying CVD (cardiovascular disease) history; in patients with CVD history, ChatGPT's answers varied by technical replicate. Total number of medications deprescribed ranged from 2.67 to 3.67 (out of 7) and did not vary with CVD status, but increased linearly with severity of ADL impairment. Among medication types, ChatGPT preferentially deprescribed pain medications. ChatGPT's deprescribing decisions vary along the axes of ADL status, CVD history, and medication type, indicating some concordance of internal logic between general practitioners and the model. These results indicate that specifically trained LLMs may provide useful clinical support in polypharmacy management for primary care physicians.
    MeSH term(s) Humans ; Aged ; Deprescriptions ; Polypharmacy ; Artificial Intelligence ; General Practitioners ; Cardiovascular Diseases
    Language English
    Publication date 2024-04-18
    Country of publication United States
    Document type Journal Article
    ZDB-ID 423488-1
    ISSN (online) 1573-689X
    ISSN 0148-5598
    DOI 10.1007/s10916-024-02058-y
    Data source MEDLINE

  5. Article ; Online: Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study.

    Rao, Arya / Pang, Michael / Kim, John / Kamineni, Meghana / Lie, Winston / Prasad, Anoop K / Landman, Adam / Dreyer, Keith / Succi, Marc D

    Journal of medical Internet research

    2023  Volume 25, Page(s) e48659

    Abstract Background: Large language model (LLM)-based artificial intelligence chatbots direct the power of large training data sets toward successive, related tasks as opposed to single-ask tasks, for which artificial intelligence already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as artificial physicians, has not yet been evaluated.
    Objective: This study aimed to evaluate ChatGPT's capacity for ongoing clinical decision support via its performance on standardized clinical vignettes.
    Methods: We inputted all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared its accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity. Accuracy was measured by the proportion of correct responses to the questions posed within the clinical vignettes tested, as calculated by human scorers. We further conducted linear regression to assess the contributing factors toward ChatGPT's performance on clinical tasks.
    Results: ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis with an accuracy of 76.9% (95% CI 67.8%-86.1%) and the lowest performance in generating an initial differential diagnosis with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=-15.8%; P<.001) and clinical management (β=-7.4%; P=.02) question types.
    Conclusions: ChatGPT achieves impressive accuracy in clinical decision-making, with increasing strength as it gains more clinical information at its disposal. In particular, ChatGPT demonstrates the greatest accuracy in tasks of final diagnosis as compared to initial diagnosis. Limitations include possible model hallucinations and the unclear composition of ChatGPT's training data set.
    MeSH term(s) Humans ; Artificial Intelligence ; Clinical Decision-Making ; Organizations ; Workflow ; User-Centered Design
    Language English
    Publication date 2023-08-22
    Country of publication Canada
    Document type Evaluation Study ; Journal Article ; Research Support, N.I.H., Extramural
    ZDB-ID 2028830-X
    ISSN (online) 1438-8871
    DOI 10.2196/48659
    Data source MEDLINE
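Record 5 reports accuracy as the proportion of correct responses with a 95% confidence interval. A minimal sketch of that calculation, using a normal-approximation (Wald) interval; the counts below are hypothetical illustrations, not the study's data:

```python
import math

def accuracy_with_ci(correct: int, total: int, z: float = 1.96):
    """Proportion correct plus a normal-approximation (Wald) 95% CI."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)          # standard error of the proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts chosen only to illustrate the arithmetic:
p, lo, hi = accuracy_with_ci(correct=717, total=1000)
print(f"accuracy {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The published study likely used a more careful interval estimator; this sketch only shows the shape of the computation.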

  6. Article: Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow.

    Rao, Arya / Pang, Michael / Kim, John / Kamineni, Meghana / Lie, Winston / Prasad, Anoop K / Landman, Adam / Dreyer, Keith J / Succi, Marc D

    medRxiv : the preprint server for health sciences

    2023  

    Abstract Importance: Large language model (LLM) artificial intelligence (AI) chatbots direct the power of large training datasets towards successive, related tasks, as opposed to single-ask tasks, for which AI already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as virtual physicians, has not yet been evaluated.
    Objective: To evaluate ChatGPT's capacity for ongoing clinical decision support via its performance on standardized clinical vignettes.
    Design: We inputted all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity.
    Setting: ChatGPT, a publicly available LLM.
    Participants: Clinical vignettes featured hypothetical patients with a variety of age and gender identities, and a range of Emergency Severity Indices (ESIs) based on initial clinical presentation.
    Exposures: MSD Clinical Manual vignettes.
    Main outcomes and measures: We measured the proportion of correct responses to the questions posed within the clinical vignettes tested.
    Results: ChatGPT achieved 71.7% (95% CI, 69.3% to 74.1%) accuracy overall across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis with an accuracy of 76.9% (95% CI, 67.8% to 86.1%), and the lowest performance in generating an initial differential diagnosis with an accuracy of 60.3% (95% CI, 54.2% to 66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=-15.8%, p<0.001) and clinical management (β=-7.4%, p=0.02) type questions.
    Conclusions and relevance: ChatGPT achieves impressive accuracy in clinical decision making, with particular strengths emerging as it has more clinical information at its disposal.
    Language English
    Publication date 2023-02-26
    Country of publication United States
    Document type Preprint
    DOI 10.1101/2023.02.21.23285886
    Data source MEDLINE

  7. Article ; Online: Correction: Do peri-operative parathyroid hormone (PTH) analogues improve bone density and decrease mechanical complications in spinal deformity correction?-a minimum 2-year radiological study measuring Hounsfield units.

    Chung, Andrew / Robinson, Jerry / Gendelberg, David / Jimenez, Jose / Anand, Anita / Rao, Arya / Khandehroo, Bardia / Khandehroo, Babak / Kahwaty, Sheila / Anand, Neel

    European spine journal : official publication of the European Spine Society, the European Spinal Deformity Society, and the European Section of the Cervical Spine Research Society

    2023  Volume 33, Issue 1, Page(s) 367

    Language English
    Publication date 2023-10-27
    Country of publication Germany
    Document type Published Erratum
    ZDB-ID 1115375-1
    ISSN (online) 1432-0932
    ISSN 0940-6719
    DOI 10.1007/s00586-023-08000-z
    Data source MEDLINE

  8. Article ; Online: Do peri-operative parathyroid hormone (PTH) analogues improve bone density and decrease mechanical complications in spinal deformity correction?-a minimum 2-year radiological study measuring Hounsfield units.

    Chung, Andrew / Robinson, Jerry / Gendelberg, David / Jimenez, Jose / Anand, Anita / Rao, Arya / Khandehroo, Bardia / Khandehroo, Babak / Kahwaty, Sheila / Anand, Neel

    European spine journal : official publication of the European Spine Society, the European Spinal Deformity Society, and the European Section of the Cervical Spine Research Society

    2023  Volume 32, Issue 10, Page(s) 3651–3658

    Abstract Objective: To delineate whether use of a PTH analogue in the 1-year peri-operative period improves lumbar bone density.
    Methods: A prospectively collected data registry of 254 patients who underwent CMIS correction of ASD (Cobb angle > 20 or SVA > 50 mm or (PI-LL) > 10) from Jan 2011 to Jan 2020 was analysed. Patients who were placed on PTH analogues for one year in conjunction with surgery were included. Ultimately, 41 patients with pre- and two-year post-operative CT scans available for review entered the study. Hounsfield units were measured at the L1-L3 levels for all patients on the pre-op and post-op CT scans.
    Results: The mean age of patients in this study was 70 (52-84, SD 7). Mean follow-up was 66 (24-132, SD 33) months. Twenty-three patients met criteria for severe deformity (Cobb angle > 50 degrees or SVA > 95 mm or PI/LL mismatch > 20 or PT > 30). Based on the 2-year post-op CT scans, there were significant improvements in L1 Hounsfield units when comparing pre-op values (96, SD 55) to post-op values (185, SD 102); p < 0.05. There was no screw loosening or screw pull-out. There were 2 patients with PJF (4.8%). Both of these patients had not completed their PTH treatment: one took PTH for only 3 months (PJF at 2 years post-op) and the other for only 1 month (PJF at 1 year post-op). No increase in bone density (based on Hounsfield units) was noted in five patients (12%) despite completion of their PTH therapy. Only one patient experienced nausea from PTH therapy. There were no other PTH-related adverse events.
    Conclusion: The incidence of PTH analogues failing to increase bone density in our series was low at 12%. This study shows that PTH analogues may be a powerful adjunct for increasing bone density and may help to mitigate the risk of mechanical complications in patients undergoing deformity correction with minimally invasive techniques. Future comparative studies are warranted to confirm these latter findings and to potentially protocolize the ideal peri-operative bone health optimization strategy.
    MeSH term(s) Humans ; Bone Density ; Treatment Outcome ; Retrospective Studies ; Spinal Fusion/methods ; Parathyroid Hormone ; Lordosis/surgery
    Chemical substances Parathyroid Hormone
    Language English
    Publication date 2023-08-08
    Country of publication Germany
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 1115375-1
    ISSN (online) 1432-0932
    ISSN 0940-6719
    DOI 10.1007/s00586-023-07859-2
    Data source MEDLINE

  9. Article ; Online: Evaluating GPT as an Adjunct for Radiologic Decision Making: GPT-4 Versus GPT-3.5 in a Breast Imaging Pilot.

    Rao, Arya / Kim, John / Kamineni, Meghana / Pang, Michael / Lie, Winston / Dreyer, Keith J / Succi, Marc D

    Journal of the American College of Radiology : JACR

    2023  Volume 20, Issue 10, Page(s) 990–997

    Abstract Objective: Despite rising popularity and performance, studies evaluating the use of large language models for clinical decision support are lacking. Here, we evaluate ChatGPT (Generative Pre-trained Transformer)-3.5 and GPT-4's (OpenAI, San Francisco, California) capacity for clinical decision support in radiology via the identification of appropriate imaging services for two important clinical presentations: breast cancer screening and breast pain.
    Methods: We compared ChatGPT's responses to the ACR Appropriateness Criteria for breast pain and breast cancer screening. Our prompt formats included an open-ended (OE) and a select all that apply (SATA) format. Scoring criteria evaluated whether proposed imaging modalities were in accordance with ACR guidelines. Three replicate entries were conducted for each prompt, and the average of these was used to determine final scores.
    Results: Both ChatGPT-3.5 and ChatGPT-4 achieved an average OE score of 1.830 (out of 2) for breast cancer screening prompts. ChatGPT-3.5 achieved a SATA average percentage correct of 88.9%, compared with ChatGPT-4's average percentage correct of 98.4% for breast cancer screening prompts. For breast pain, ChatGPT-3.5 achieved an average OE score of 1.125 (out of 2) and a SATA average percentage correct of 58.3%, as compared with ChatGPT-4's average OE score of 1.666 (out of 2) and SATA average percentage correct of 77.7%.
    Discussion: Our results demonstrate the eventual feasibility of using large language models like ChatGPT for radiologic decision making, with the potential to improve clinical workflow and responsible use of radiology services. More use cases and greater accuracy are necessary to evaluate and implement such tools.
    MeSH term(s) Humans ; Female ; Mastodynia ; Radiology ; Breast Neoplasms/diagnostic imaging ; Decision Making
    Chemical substances N-hydroxysuccinimide S-acetylthioacetate (76931-93-6)
    Language English
    Publication date 2023-06-21
    Country of publication United States
    Document type Journal Article ; Research Support, N.I.H., Extramural
    ZDB-ID 2274861-1
    ISSN (online) 1558-349X
    ISSN 1546-1440
    DOI 10.1016/j.jacr.2023.05.003
    Data source MEDLINE
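Record 9's SATA format scores each listed imaging modality against ACR guidance and averages three replicate runs per prompt. A hedged sketch of how such a percentage-correct score could be computed; the modality names, judgments, and helper names are illustrative assumptions, not the study's actual rubric:

```python
from statistics import mean

def sata_percent_correct(responses: dict, guideline: dict) -> float:
    """Percentage of modality judgments (appropriate: yes/no) that match
    the guideline. Both arguments map modality name -> bool."""
    matches = sum(responses[m] == guideline[m] for m in guideline)
    return 100 * matches / len(guideline)

# Hypothetical guideline and three replicate model runs for one prompt:
guideline = {"mammography": True, "ultrasound": True, "MRI": False}
replicates = [
    {"mammography": True, "ultrasound": True, "MRI": False},
    {"mammography": True, "ultrasound": False, "MRI": False},
    {"mammography": True, "ultrasound": True, "MRI": True},
]
avg = mean(sata_percent_correct(r, guideline) for r in replicates)
print(f"SATA average percent correct: {avg:.1f}%")
```

Averaging over replicates, as the record describes, smooths out run-to-run variation in the model's answers before scores are compared across prompt formats.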

  10. Article ; Online: Does the Global Alignment and Proportion score predict mechanical complications in circumferential minimally invasive surgery for adult spinal deformity?

    Gendelberg, David / Rao, Arya / Chung, Andrew / Jimenez-Almonte, Jose H / Anand, Anita / Robinson, Jerry / Khandehroo, Bardia / Khandehroo, Babak / Kahwaty, Sheila / Anand, Neel

    Neurosurgical focus

    2022  Volume 54, Issue 1, Page(s) E11

    Abstract Objective: The Global Alignment and Proportion (GAP) score was developed to serve as a tool to predict mechanical complication probability in patients undergoing surgery for adult spinal deformity (ASD), serving as an aid for setting surgical goals to decrease the prevalence of mechanical complications in ASD surgery. However, it was developed using ASD patients for whom open surgical techniques were used for correction. Therefore, the purpose of this study was to assess the applicability of the score for patients undergoing circumferential minimally invasive surgery (cMIS) for correction of ASD.
    Methods: Study participants were patients undergoing cMIS ASD surgery without the use of osteotomies with a minimum of four levels fused and 2 years of follow-up. Postoperative GAP scores were calculated for all patients, and the association with mechanical failure was analyzed.
    Results: The authors identified 182 patients who underwent cMIS correction of ASD. Mechanical complications were found in 11.1% of patients with proportioned spinopelvic states, 20.5% of patients with moderately disproportioned spinopelvic states, and 18.8% of patients with severely disproportioned spinopelvic states. Analysis with a chi-square test showed a significant difference between the cMIS and original GAP study cohorts in the moderately disproportioned and severely disproportioned spinopelvic states, but not in the proportioned spinopelvic states.
    Conclusions: For patients stratified into proportioned, moderately disproportioned, and severely disproportioned spinopelvic states, the GAP score predicted 6%, 47%, and 95% mechanical complication rates, respectively. The mechanical complication rate in patients undergoing cMIS ASD correction did not correlate with the calculated GAP spinopelvic state.
    MeSH term(s) Humans ; Adult ; Retrospective Studies ; Spinal Fusion/adverse effects ; Spinal Fusion/methods ; Minimally Invasive Surgical Procedures/adverse effects ; Minimally Invasive Surgical Procedures/methods ; Osteotomy ; Postoperative Period ; Postoperative Complications/epidemiology ; Postoperative Complications/etiology
    Language English
    Publication date 2022-12-31
    Country of publication United States
    Document type Journal Article
    ZDB-ID 2026589-X
    ISSN (online) 1092-0684
    DOI 10.3171/2022.10.FOCUS22600
    Data source MEDLINE
