LIVIVO - The Search Portal for Life Sciences


Search results

Results 1-10 of 13

  1. Article ; Online: Reanalysis of Trio Whole-Genome Sequencing Data Doubles the Yield in Autism Spectrum Disorder: De Novo Variants Present in Half.

    Bar, Omri / Vahey, Elizabeth / Mintz, Mark / Frye, Richard E / Boles, Richard G

    International journal of molecular sciences

    2024  Volume 25, Issue 2

    Abstract Autism spectrum disorder (ASD) is a common condition with lifelong implications. The last decade has seen dramatic improvements in DNA sequencing and related bioinformatics and databases. We analyzed the raw DNA sequencing files on the Variantyx
    MeSH term(s) Humans ; Autism Spectrum Disorder/genetics ; Whole Genome Sequencing ; Sequence Analysis, DNA ; Autistic Disorder ; Computational Biology
    Language English
    Publishing date 2024-01-18
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2019364-6
    ISSN (online) 1422-0067
    ISSN 1661-6596
    DOI 10.3390/ijms25021192
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Article ; Online: Introducing surgical intelligence in gynecology: Automated identification of key steps in hysterectomy.

    Levin, Ishai / Rapoport Ferman, Judith / Bar, Omri / Ben Ayoun, Danielle / Cohen, Aviad / Wolf, Tamir

    International journal of gynaecology and obstetrics: the official organ of the International Federation of Gynaecology and Obstetrics

    2024  

    Abstract Objective: The analysis of surgical videos using artificial intelligence holds great promise for the future of surgery by facilitating the development of surgical best practices, identifying key pitfalls, enhancing situational awareness, and disseminating that information via real-time, intraoperative decision-making. The objective of the present study was to examine the feasibility and accuracy of a novel computer vision algorithm for hysterectomy surgical step identification.
    Methods: This was a retrospective study conducted on surgical videos of laparoscopic hysterectomies performed in 277 patients in five medical centers. We used a surgical intelligence platform (Theator Inc.) that employs advanced computer vision and AI technology to automatically capture video data during surgery, deidentify, and upload procedures to a secure cloud infrastructure. Videos were manually annotated with sequential steps of surgery by a team of annotation specialists. Subsequently, a computer vision system was trained to perform automated step detection in hysterectomy. Analyzing automated video annotations in comparison to manual human annotations was used to determine accuracy.
    Results: The mean duration of the videos was 103 ± 43 min. Accuracy between AI-based predictions and manual human annotations was 93.1% on average. Accuracy was highest for the dissection and mobilization step (96.9%) and lowest for the adhesiolysis step (70.3%).
    Conclusion: The results of the present study demonstrate that a novel AI-based model achieves high accuracy for automated steps identification in hysterectomy. This lays the foundations for the next phase of AI, focused on real-time clinical decision support and prediction of outcome measures, to optimize surgeon workflow and elevate patient care.
    Language English
    Publishing date 2024-03-28
    Publishing country United States
    Document type Journal Article
    ZDB-ID 80149-5
    ISSN (online) 1879-3479
    ISSN 0020-7292
    DOI 10.1002/ijgo.15490
    Database MEDical Literature Analysis and Retrieval System OnLINE
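The accuracy analysis described in the Methods above — comparing AI-generated step predictions against manual human annotations, both overall and per step — can be sketched as a simple frame-by-frame agreement count. A minimal sketch only; the step labels and the helper function below are illustrative and not taken from the study:

```python
# Toy version of the accuracy metric: fraction of frames/segments where the
# AI-predicted surgical step matches the manual annotation, reported overall
# and per step. Labels are illustrative, not the study's actual step names.
from collections import defaultdict

def step_accuracy(manual, predicted):
    """Return (overall_accuracy, per_step_accuracy) for two equal-length
    sequences of step labels, one label per video frame or segment."""
    assert len(manual) == len(predicted)
    total_hits = 0
    per_step = defaultdict(lambda: [0, 0])  # step -> [hits, count]
    for m, p in zip(manual, predicted):
        hit = int(m == p)
        total_hits += hit
        per_step[m][0] += hit
        per_step[m][1] += 1
    overall = total_hits / len(manual)
    return overall, {s: h / n for s, (h, n) in per_step.items()}

manual    = ["dissection", "dissection", "adhesiolysis", "closure", "closure"]
predicted = ["dissection", "dissection", "closure",      "closure", "closure"]
overall, per_step = step_accuracy(manual, predicted)
```

With these toy sequences, overall accuracy is 0.8 while the "adhesiolysis" step scores 0.0 — mirroring how the study reports a high average (93.1%) alongside a much weaker step (adhesiolysis, 70.3%).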

  3. Article: Whole exome/genome sequencing in cyclic vomiting syndrome reveals multiple candidate genes, suggesting a model of elevated intracellular cations and mitochondrial dysfunction.

    Bar, Omri / Ebenau, Laurie / Weiner, Kellee / Mintz, Mark / Boles, Richard G

    Frontiers in neurology

    2023  Volume 14, Page(s) 1151835

    Abstract Objective: To utilize whole exome or genome sequencing and the scientific literature for identifying candidate genes for cyclic vomiting syndrome (CVS), an idiopathic migraine variant with paroxysmal nausea and vomiting.
    Methods: A retrospective chart review of 80 unrelated participants, ascertained by a quaternary care CVS specialist, was conducted. Genes associated with paroxysmal symptoms were identified querying the literature for genes associated with dominant cases of intermittent vomiting or both discomfort and disability; among which the raw genetic sequence was reviewed. "Qualifying" variants were defined as coding, rare, and conserved. Additionally, "Key Qualifying" variants were Pathogenic/Likely Pathogenic, or "Clinical" based upon the presence of a corresponding diagnosis. Candidate association to CVS was based on a point system.
    Results: Thirty-five paroxysmal genes were identified per the literature review. Among these, 12 genes were scored as "Highly likely" (
    Conclusion: All 22 CVS candidate genes are associated with either cation transport or energy metabolism (14 directly, 8 indirectly). Our findings suggest a cellular model in which aberrant ion gradients lead to mitochondrial dysfunction, or vice versa, in a pathogenic vicious cycle of cellular hyperexcitability. Among the non-paroxysmal genes identified, 5 are known causes of peripheral neuropathy. Our model is consistent with multiple current hypotheses of CVS.
    Language English
    Publishing date 2023-05-05
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2564214-5
    ISSN 1664-2295
    DOI 10.3389/fneur.2023.1151835
    Database MEDical Literature Analysis and Retrieval System OnLINE
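The variant triage described in the Methods above — keep "Qualifying" variants (coding, rare, conserved), flag "Key Qualifying" ones (Pathogenic/Likely Pathogenic, or "Clinical" with a matching diagnosis), and tally points per candidate gene — can be sketched as a filter-and-score pass. All field names, thresholds, point values, and gene names below are hypothetical placeholders; the paper's actual scoring system is not reproduced here:

```python
# Hedged sketch of the screening logic: filter variants by the stated
# criteria, then accumulate points per gene. Thresholds and weights are
# invented for illustration only.
from collections import Counter

def is_qualifying(v, max_freq=0.005, min_conservation=0.9):
    # "Qualifying" = coding, rare, and conserved (cutoffs are hypothetical).
    return (v["coding"] and v["pop_freq"] < max_freq
            and v["conservation"] >= min_conservation)

def is_key_qualifying(v):
    # "Key Qualifying" = Pathogenic/Likely Pathogenic, or "Clinical"
    # (a corresponding diagnosis is present).
    return (v["classification"] in ("Pathogenic", "Likely Pathogenic")
            or v.get("clinical_dx_match", False))

def score_candidates(variants):
    points = Counter()
    for v in variants:
        if not is_qualifying(v):
            continue
        points[v["gene"]] += 2 if is_key_qualifying(v) else 1  # hypothetical weights
    return points

variants = [
    {"gene": "RYR2", "coding": True, "pop_freq": 0.0001,
     "conservation": 0.98, "classification": "Likely Pathogenic"},
    {"gene": "RYR2", "coding": True, "pop_freq": 0.001,
     "conservation": 0.95, "classification": "VUS"},
    {"gene": "MT-ND1", "coding": True, "pop_freq": 0.2,
     "conservation": 0.99, "classification": "VUS"},  # too common -> filtered
]
scores = score_candidates(variants)
```

Here the common variant is filtered out and the gene with one Key Qualifying plus one Qualifying variant accumulates the most points, in the spirit of the paper's point-based candidate ranking.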

  4. Article ; Online: A novel high accuracy model for automatic surgical workflow recognition using artificial intelligence in laparoscopic totally extraperitoneal inguinal hernia repair (TEP).

    Ortenzi, Monica / Rapoport Ferman, Judith / Antolin, Alenka / Bar, Omri / Zohar, Maya / Perry, Ori / Asselmann, Dotan / Wolf, Tamir

    Surgical endoscopy

    2023  Volume 37, Issue 11, Page(s) 8818–8828

    Abstract Introduction: Artificial intelligence and computer vision are revolutionizing the way we perceive video analysis in minimally invasive surgery. This emerging technology has increasingly been leveraged successfully for video segmentation, documentation, education, and formative assessment. New, sophisticated platforms allow pre-determined segments chosen by surgeons to be automatically presented without the need to review entire videos. This study aimed to validate and demonstrate the accuracy of the first reported AI-based computer vision algorithm that automatically recognizes surgical steps in videos of totally extraperitoneal (TEP) inguinal hernia repair.
    Methods: Videos of TEP procedures were manually labeled by a team of annotators trained to identify and label surgical workflow according to six major steps. For bilateral hernias, an additional change of focus step was also included. The videos were then used to train a computer vision AI algorithm. Performance accuracy was assessed in comparison to the manual annotations.
    Results: A total of 619 full-length TEP videos were analyzed: 371 were used to train the model, 93 for internal validation, and the remaining 155 as a test set to evaluate algorithm accuracy. The overall accuracy for the complete procedure was 88.8%. Per-step accuracy reached the highest value for the hernia sac reduction step (94.3%) and the lowest for the preperitoneal dissection step (72.2%).
    Conclusions: These results indicate that the novel AI model was able to provide fully automated video analysis with a high accuracy level. High-accuracy models leveraging AI to enable automation of surgical video analysis allow us to identify and monitor surgical performance, providing mathematical metrics that can be stored, evaluated, and compared. As such, the proposed model is capable of enabling data-driven insights to improve surgical quality and demonstrate best practices in TEP procedures.
    MeSH term(s) Humans ; Hernia, Inguinal/surgery ; Laparoscopy/methods ; Artificial Intelligence ; Workflow ; Minimally Invasive Surgical Procedures ; Herniorrhaphy/methods ; Surgical Mesh
    Language English
    Publishing date 2023-08-25
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 639039-0
    ISSN (online) 1432-2218
    ISSN 0930-2794
    DOI 10.1007/s00464-023-10375-5
    Database MEDical Literature Analysis and Retrieval System OnLINE

  5. Book ; Online: Video Transformer Network

    Neimark, Daniel / Bar, Omri / Zohar, Maya / Asselmann, Dotan

    2021  

    Abstract This paper presents VTN, a transformer-based framework for video recognition. Inspired by recent developments in vision transformers, we ditch the standard approach in video action recognition that relies on 3D ConvNets and introduce a method that classifies actions by attending to the entire video sequence information. Our approach is generic and builds on top of any given 2D spatial network. In terms of wall runtime, it trains $16.1\times$ faster and runs $5.1\times$ faster during inference while maintaining competitive accuracy compared to other state-of-the-art methods. It enables whole video analysis, via a single end-to-end pass, while requiring $1.5\times$ fewer GFLOPs. We report competitive results on Kinetics-400 and present an ablation study of VTN properties and the trade-off between accuracy and inference speed. We hope our approach will serve as a new baseline and start a fresh line of research in the video recognition domain. Code and models are available at: https://github.com/bomri/SlowFast/blob/master/projects/vtn/README.md
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2021-02-01
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
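The core idea the VTN abstract describes — extract a feature vector per frame with any 2D spatial network, then let attention weigh the entire frame sequence at once instead of using 3D ConvNets — can be shown with a dependency-free toy. The stub backbone, the single query vector, and all numbers are illustrative; the actual VTN uses learned multi-head attention (see the linked repository):

```python
# Toy, dependency-free sketch of attention pooling over per-frame features.
import math

def frame_features(frame):
    # Stand-in for a 2D spatial backbone (e.g. a ResNet) applied per frame.
    return [float(x) for x in frame]

def attention_pool(features, query):
    """Scaled dot-product attention of one query over all frame features,
    producing a single clip-level representation in one pass."""
    d = len(query)
    scores = [sum(q * f for q, f in zip(query, feat)) / math.sqrt(d)
              for feat in features]
    m = max(scores)                       # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Weighted sum over the whole sequence -> whole-video representation.
    return [sum(w * feat[i] for w, feat in zip(weights, features))
            for i in range(d)]

video = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three toy "frames"
feats = [frame_features(f) for f in video]
clip_repr = attention_pool(feats, query=[1.0, 0.0])
```

Because every frame contributes in a single pass, the model attends to the entire video sequence rather than a short 3D-convolutional window — the property the abstract credits for whole-video analysis at lower GFLOPs.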

  6. Article ; Online: Automated Identification of Key Steps in Robotic-Assisted Radical Prostatectomy Using Artificial Intelligence.

    Khanna, Abhinav / Antolin, Alenka / Bar, Omri / Ben-Ayoun, Danielle / Zohar, Maya / Boorjian, Stephen A / Frank, Igor / Shah, Paras / Sharma, Vidit / Thompson, R Houston / Wolf, Tamir / Asselmann, Dotan / Tollefson, Matthew

    The Journal of urology

    2024  Volume 211, Issue 4, Page(s) 575–584

    Abstract Purpose: The widespread use of minimally invasive surgery generates vast amounts of potentially useful data in the form of surgical video. However, raw video footage is often unstructured and unlabeled, thereby limiting its use. We developed a novel computer-vision algorithm for automated identification and labeling of surgical steps during robotic-assisted radical prostatectomy (RARP).
    Materials and methods: Surgical videos from RARP were manually annotated by a team of image annotators under the supervision of 2 urologic oncologists. Full-length surgical videos were labeled to identify all steps of surgery. These manually annotated videos were then utilized to train a computer vision algorithm to perform automated video annotation of RARP surgical video. Accuracy of automated video annotation was determined by comparing to manual human annotations as the reference standard.
    Results: A total of 474 full-length RARP videos (median 149 minutes; IQR 81 minutes) were manually annotated with surgical steps. Of these, 292 cases served as a training dataset for algorithm development, 69 cases were used for internal validation, and 113 were used as a separate testing cohort for evaluating algorithm accuracy. Concordance between artificial intelligence‒enabled automated video analysis and manual human video annotation was 92.8%. Algorithm accuracy was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection and extraction step (76.8%).
    Conclusions: We developed a fully automated artificial intelligence tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.
    MeSH term(s) Humans ; Male ; Artificial Intelligence ; Educational Status ; Prostate/surgery ; Prostatectomy/methods ; Robotic Surgical Procedures/methods ; Video Recording
    Language English
    Publishing date 2024-01-24
    Publishing country United States
    Document type Journal Article
    ZDB-ID 3176-8
    ISSN (online) 1527-3792
    ISSN 0022-5347
    DOI 10.1097/JU.0000000000003845
    Database MEDical Literature Analysis and Retrieval System OnLINE

  7. Article ; Online: Rapid Nucleic Acid Reaction Circuits for Point-of-care Diagnosis of Diseases.

    Santiago-McRae, Ezry / Oh, Sung Won / Carlo, Anthony Monte / Bar, Omri / Guan, Emily / Zheng, Doris / Grgicak, Catherine / Fu, Jinglin

    Current topics in medicinal chemistry

    2022  Volume 22, Issue 8, Page(s) 686–698

    Abstract An urgent need exists for a rapid, cost-effective, facile, and reliable nucleic acid assay for mass screening to control and prevent the spread of emerging pandemic diseases. This urgent need is not fully met by current diagnostic tools. In this review, we summarize the current state-of-the-art research in novel nucleic acid amplification and detection that could be applied to point-of-care (POC) diagnosis and mass screening of diseases. The critical technological breakthroughs will be discussed for their advantages and disadvantages. Finally, we will discuss the future challenges of developing nucleic acid-based POC diagnosis.
    MeSH term(s) Nucleic Acid Amplification Techniques ; Nucleic Acids ; Pandemics ; Point-of-Care Systems
    Chemical Substances Nucleic Acids
    Language English
    Publishing date 2022-02-09
    Publishing country United Arab Emirates
    Document type Journal Article ; Review
    ZDB-ID 2064823-6
    ISSN (online) 1873-4294
    ISSN 1568-0266
    DOI 10.2174/1570163819666220207114148
    Database MEDical Literature Analysis and Retrieval System OnLINE

  8. Article ; Online: Automated surgical step recognition in transurethral bladder tumor resection using artificial intelligence: transfer learning across surgical modalities.

    Deol, Ekamjit S / Tollefson, Matthew K / Antolin, Alenka / Zohar, Maya / Bar, Omri / Ben-Ayoun, Danielle / Mynderse, Lance A / Lomas, Derek J / Avant, Ross A / Miller, Adam R / Elliott, Daniel S / Boorjian, Stephen A / Wolf, Tamir / Asselmann, Dotan / Khanna, Abhinav

    Frontiers in artificial intelligence

    2024  Volume 7, Page(s) 1375482

    Abstract Objective: Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create a SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements.
    Materials and methods: Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. Manually annotated videos were then utilized to train a novel AI computer vision algorithm to perform automated video annotation of TURBT surgical video, utilizing a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison to human annotations as the reference standard.
    Results: A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred and seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%).
    Conclusion: We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer-learning from laparoscopy-based computer vision models into surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.
    Language English
    Publishing date 2024-03-07
    Publishing country Switzerland
    Document type Journal Article
    ISSN 2624-8212
    ISSN (online) 2624-8212
    DOI 10.3389/frai.2024.1375482
    Database MEDical Literature Analysis and Retrieval System OnLINE
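The transfer-learning setup described above — a step-recognition model pretrained on laparoscopy keeps its feature-extraction layers, and only a new classification head is trained on the much smaller TURBT dataset — can be sketched with a frozen stub backbone and a tiny linear head. The backbone function, perceptron-style head, and toy data are all illustrative stand-ins, not the paper's model:

```python
# Hedged sketch: reuse frozen "laparoscopy-pretrained" features, train only
# a small classification head on a handful of TURBT examples.
TURBT_STEPS = ["primary endoscopic evaluation", "resection of bladder tumor",
               "surface coagulation"]

def pretrained_backbone(frame):
    # Stand-in for laparoscopy-pretrained feature layers (kept frozen).
    return [frame[0] + frame[1], frame[0] - frame[1]]

def train_head(samples, labels, n_classes, lr=0.1, epochs=50):
    """Train a tiny linear head on frozen backbone features
    (multi-class perceptron updates)."""
    feats = [pretrained_backbone(s) for s in samples]
    d = len(feats[0])
    w = [[0.0] * d for _ in range(n_classes)]
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            scores = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
            pred = max(range(n_classes), key=scores.__getitem__)
            if pred != y:  # only the head's weights are ever updated
                w[y]    = [wi + lr * xi for wi, xi in zip(w[y], x)]
                w[pred] = [wi - lr * xi for wi, xi in zip(w[pred], x)]
    return w

def predict(w, frame):
    x = pretrained_backbone(frame)
    scores = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    return TURBT_STEPS[max(range(len(w)), key=scores.__getitem__)]

# A few labeled TURBT "frames" suffice once the backbone is reused -- the
# data-efficiency argument the abstract makes for transfer learning.
samples = [[1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]]
labels = [0, 1, 2]
w = train_head(samples, labels, n_classes=3)
```

The design point is the one the abstract makes: because the general visual features transfer from laparoscopy, the new-procedure dataset (300 TURBT videos vs. the hundreds of laparoscopy videos typically needed) only has to teach the final mapping to TURBT-specific steps.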

  9. Article ; Online: Impact of data on generalization of AI for surgical intelligence applications.

    Bar, Omri / Neimark, Daniel / Zohar, Maya / Hager, Gregory D / Girshick, Ross / Fried, Gerald M / Wolf, Tamir / Asselmann, Dotan

    Scientific reports

    2020  Volume 10, Issue 1, Page(s) 22208

    Abstract AI is becoming ubiquitous, revolutionizing many aspects of our lives. In surgery, it is still a promise. AI has the potential to improve surgeon performance and impact patient care, from post-operative debrief to real-time decision support. But, how much data is needed by an AI-based system to learn surgical context with high fidelity? To answer this question, we leveraged a large-scale, diverse, cholecystectomy video dataset. We assessed surgical workflow recognition and report a deep learning system, that not only detects surgical phases, but does so with high accuracy and is able to generalize to new settings and unseen medical centers. Our findings provide a solid foundation for translating AI applications from research to practice, ushering in a new era of surgical intelligence.
    Language English
    Publishing date 2020-12-17
    Publishing country England
    Document type Journal Article
    ZDB-ID 2615211-3
    ISSN (online) 2045-2322
    DOI 10.1038/s41598-020-79173-6
    Database MEDical Literature Analysis and Retrieval System OnLINE

  10. Article ; Online: Identifying facial phenotypes of genetic disorders using deep learning.

    Gurovich, Yaron / Hanani, Yair / Bar, Omri / Nadav, Guy / Fleischer, Nicole / Gelbman, Dekel / Basel-Salmon, Lina / Krawitz, Peter M / Kamphausen, Susanne B / Zenker, Martin / Bird, Lynne M / Gripp, Karen W

    Nature medicine

    2019  Volume 25, Issue 1, Page(s) 60–64

    Abstract Syndromic genetic conditions, in aggregate, affect 8% of the population
    MeSH term(s) Algorithms ; Deep Learning ; Facies ; Genetic Diseases, Inborn/diagnosis ; Genotype ; Humans ; Image Processing, Computer-Assisted ; Phenotype ; Syndrome
    Language English
    Publishing date 2019-01-07
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1220066-9
    ISSN (online) 1546-170X
    ISSN 1078-8956
    DOI 10.1038/s41591-018-0279-0
    Database MEDical Literature Analysis and Retrieval System OnLINE
