LIVIVO - The Search Portal for Life Sciences


Search results

Results 1 - 7 of 7

  1. Article ; Online: FUN-SIS: A Fully UNsupervised approach for Surgical Instrument Segmentation.

    Sestini, Luca / Rosa, Benoit / De Momi, Elena / Ferrigno, Giancarlo / Padoy, Nicolas

    Medical image analysis

    2023  Volume 85, Page(s) 102751

    Abstract Automatic surgical instrument segmentation of endoscopic images is a crucial building block of many computer-assistance applications for minimally invasive surgery. So far, state-of-the-art approaches completely rely on the availability of a ground-truth supervision signal, obtained via manual annotation, thus expensive to collect at large scale. In this paper, we present FUN-SIS, a Fully-UNsupervised approach for binary Surgical Instrument Segmentation. FUN-SIS trains a per-frame segmentation model on completely unlabelled endoscopic videos, by solely relying on implicit motion information and instrument shape-priors. We define shape-priors as realistic segmentation masks of the instruments, not necessarily coming from the same dataset/domain as the videos. The shape-priors can be collected in various and convenient ways, such as recycling existing annotations from other datasets. We leverage them as part of a novel generative-adversarial approach, allowing to perform unsupervised instrument segmentation of optical-flow images during training. We then use the obtained instrument masks as pseudo-labels in order to train a per-frame segmentation model; to this aim, we develop a learning-from-noisy-labels architecture, designed to extract a clean supervision signal from these pseudo-labels, leveraging their peculiar noise properties. We validate the proposed contributions on three surgical datasets, including the MICCAI 2017 EndoVis Robotic Instrument Segmentation Challenge dataset. The obtained fully-unsupervised results for surgical instrument segmentation are almost on par with the ones of fully-supervised state-of-the-art approaches. This suggests the tremendous potential of the proposed method to leverage the great amount of unlabelled data produced in the context of minimally invasive surgery.
    MeSH term(s) Humans ; Image Processing, Computer-Assisted/methods ; Endoscopy ; Surgical Instruments ; Robotics
    Language English
    Publishing date 2023-01-20
    Publishing country Netherlands
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 1356436-5
    ISSN 1361-8415
    ISSN (online) 1361-8423 ; 1361-8431
    DOI 10.1016/j.media.2023.102751
    Database MEDical Literature Analysis and Retrieval System OnLINE
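The learning-from-noisy-labels architecture is not detailed in this record; as a hedged illustration of the general idea — training a segmentation model on pseudo-labels with a noise-robust objective — the sketch below uses a soft-bootstrapped binary cross-entropy, a standard technique that is not necessarily the one used in FUN-SIS. Function and parameter names are illustrative.

```python
import math

def bootstrapped_bce(pred, pseudo_label, beta=0.8, eps=1e-7):
    """Soft-bootstrapped binary cross-entropy for one pixel.

    Mixes the (possibly noisy) pseudo-label with the model's own
    prediction, so a confident model output can partially override
    label noise. beta = 1.0 recovers plain BCE against the pseudo-label.
    """
    p = min(max(pred, eps), 1.0 - eps)
    target = beta * pseudo_label + (1.0 - beta) * p
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

# A pixel the pseudo-label marks as instrument (1.0) but the model
# confidently rejects (0.05) is penalized less than under plain BCE,
# which is the point of extracting a "clean" signal from noisy labels.
plain = bootstrapped_bce(0.05, 1.0, beta=1.0)  # trusts the noisy label fully
soft = bootstrapped_bce(0.05, 1.0, beta=0.8)   # lets the model push back
assert soft < plain
```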


  2. Article ; Online: Applications of artificial intelligence in surgery: clinical, technical, and governance considerations.

    Mascagni, Pietro / Alapatt, Deepak / Sestini, Luca / Yu, Tong / Alfieri, Sergio / Morales-Conde, Salvador / Padoy, Nicolas / Perretta, Silvana

    Cirugía Española

    2024  

    Abstract Artificial intelligence (AI) will power many of the tools in the armamentarium of digital surgeons. AI methods and surgical proof-of-concept flourish, but we have yet to witness clinical translation and value. Here we exemplify the potential of AI in the care pathway of colorectal cancer patients and discuss clinical, technical, and governance considerations of major importance for the safe translation of surgical AI for the benefit of our patients and practices.
    Language English
    Publishing date 2024-05-03
    Publishing country Spain
    Document type Journal Article
    ISSN (online) 2173-5077
    DOI 10.1016/j.cireng.2024.04.009
    Database MEDical Literature Analysis and Retrieval System OnLINE


  3. Article ; Online: Dissecting self-supervised learning methods for surgical computer vision.

    Ramesh, Sanat / Srivastav, Vinkle / Alapatt, Deepak / Yu, Tong / Murali, Aditya / Sestini, Luca / Nwoye, Chinedu Innocent / Hamoud, Idris / Sharma, Saurav / Fleurentin, Antoine / Exarchakis, Georgios / Karargyris, Alexandros / Padoy, Nicolas

    Medical image analysis

    2023  Volume 88, Page(s) 102844

    Abstract The field of surgical computer vision has undergone considerable breakthroughs in recent years with the rising popularity of deep neural network-based methods. However, standard fully-supervised approaches for training such models require vast amounts of annotated data, imposing a prohibitively high cost; especially in the clinical domain. Self-Supervised Learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing to learn useful representations from only unlabeled data. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and unexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to training data quantities in semi-supervised settings. Correct transfer of these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL - up to 7.4% on phase recognition and 20% on tool presence detection - as well as state-of-the-art semi-supervised phase recognition approaches by up to 14%. Further results obtained on a highly diverse selection of surgical datasets exhibit strong generalization properties. The code is available at https://github.com/CAMMA-public/SelfSupSurg.
    MeSH term(s) Humans ; Computers ; Neural Networks, Computer ; Supervised Machine Learning
    Language English
    Publishing date 2023-05-24
    Publishing country Netherlands
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 1356436-5
    ISSN 1361-8415
    ISSN (online) 1361-8423 ; 1361-8431
    DOI 10.1016/j.media.2023.102844
    Database MEDical Literature Analysis and Retrieval System OnLINE
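Two of the four SSL methods the abstract names (MoCo v2, SimCLR) belong to the contrastive family. As a minimal, hedged illustration of that family — not the paper's own code — the sketch below computes SimCLR's NT-Xent loss for a single anchor embedding in pure Python; all helper names are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(anchor, positive, negatives, tau=0.5):
    """NT-Xent loss for one anchor: -log softmax of the positive's
    similarity over {positive} + negatives, with temperature tau.
    Computed with a max-shift for numerical stability."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Two augmented views of the same frame should score a lower loss than
# a mismatched pair: that is the signal SSL pre-training relies on.
a = [1.0, 0.0]
good = nt_xent(a, [0.9, 0.1], negatives=[[-1.0, 0.0], [0.0, 1.0]])
bad = nt_xent(a, [-0.9, 0.1], negatives=[[1.0, 0.0], [0.0, 1.0]])
assert good < bad
```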


  4. Book ; Online: A Kinematic Bottleneck Approach For Pose Regression of Flexible Surgical Instruments directly from Images

    Sestini, Luca / Rosa, Benoit / De Momi, Elena / Ferrigno, Giancarlo / Padoy, Nicolas

    2021  

    Abstract 3-D pose estimation of instruments is a crucial step towards automatic scene understanding in robotic minimally invasive surgery. Although robotic systems can potentially directly provide joint values, this information is not commonly exploited inside the operating room, due to its possible unreliability, limited access and the time-consuming calibration required, especially for continuum robots. For this reason, standard approaches for 3-D pose estimation involve the use of external tracking systems. Recently, image-based methods have emerged as promising, non-invasive alternatives. While many image-based approaches in the literature have shown accurate results, they generally require either a complex iterative optimization for each processed image, making them unsuitable for real-time applications, or a large number of manually-annotated images for efficient learning. In this paper we propose a self-supervised image-based method, exploiting, at training time only, the imprecise kinematic information provided by the robot. In order to avoid introducing time-consuming manual annotations, the problem is formulated as an auto-encoder, smartly bottlenecked by the presence of a physical model of the robotic instruments and surgical camera, forcing a separation between image background and kinematic content. Validation of the method was performed on semi-synthetic, phantom and in-vivo datasets, obtained using a flexible robotized endoscope, showing promising results for real-time image-based 3-D pose estimation of surgical instruments.
    Keywords Computer Science - Robotics ; Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 629
    Publishing date 2021-02-28
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
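The abstract does not specify the physical model used as the bottleneck. As a toy stand-in, a planar two-link forward-kinematics function shows the property such a bottleneck exploits: a few joint values deterministically generate instrument geometry, so forcing reconstructions through the model makes the latent code interpretable as a pose. The 2-link model and all names here are illustrative assumptions, not the paper's robot model.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """Tip position of a planar 2-link arm with joint angles theta1,
    theta2 and link lengths l1, l2: the kind of low-dimensional
    physical model that can sit in an auto-encoder bottleneck."""
    x1 = l1 * math.cos(theta1)
    y1 = l1 * math.sin(theta1)
    x2 = x1 + l2 * math.cos(theta1 + theta2)
    y2 = y1 + l2 * math.sin(theta1 + theta2)
    return (x2, y2)

# Fully extended along the x-axis: tip lands at (2, 0).
tip = forward_kinematics(0.0, 0.0)
assert abs(tip[0] - 2.0) < 1e-9 and abs(tip[1]) < 1e-9
```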


  5. Article ; Online: Computer vision in surgery: from potential to clinical value.

    Mascagni, Pietro / Alapatt, Deepak / Sestini, Luca / Altieri, Maria S / Madani, Amin / Watanabe, Yusuke / Alseidi, Adnan / Redan, Jay A / Alfieri, Sergio / Costamagna, Guido / Boškoski, Ivo / Padoy, Nicolas / Hashimoto, Daniel A

    NPJ digital medicine

    2022  Volume 5, Issue 1, Page(s) 163

    Abstract Hundreds of millions of operations are performed worldwide each year, and the rising uptake in minimally invasive surgery has enabled fiber optic cameras and robots to become both important tools to conduct surgery and sensors from which to capture information about surgery. Computer vision (CV), the application of algorithms to analyze and interpret visual data, has become a critical technology through which to study the intraoperative phase of care with the goals of augmenting surgeons' decision-making processes, supporting safer surgery, and expanding access to surgical care. While much work has been performed on potential use cases, there are currently no CV tools widely used for diagnostic or therapeutic applications in surgery. Using laparoscopic cholecystectomy as an example, we reviewed current CV techniques that have been applied to minimally invasive surgery and their clinical applications. Finally, we discuss the challenges and obstacles that remain to be overcome for broader implementation and adoption of CV in surgery.
    Language English
    Publishing date 2022-10-28
    Publishing country England
    Document type Journal Article ; Review
    ISSN (online) 2398-6352
    DOI 10.1038/s41746-022-00707-5
    Database MEDical Literature Analysis and Retrieval System OnLINE


  6. Book ; Online: Dissecting Self-Supervised Learning Methods for Surgical Computer Vision

    Ramesh, Sanat / Srivastav, Vinkle / Alapatt, Deepak / Yu, Tong / Murali, Aditya / Sestini, Luca / Nwoye, Chinedu Innocent / Hamoud, Idris / Sharma, Saurav / Fleurentin, Antoine / Exarchakis, Georgios / Karargyris, Alexandros / Padoy, Nicolas

    2022  

    Abstract The field of surgical computer vision has undergone considerable breakthroughs in recent years with the rising popularity of deep neural network-based methods. However, standard fully-supervised approaches for training such models require vast amounts of annotated data, imposing a prohibitively high cost; especially in the clinical domain. Self-Supervised Learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing to learn useful representations from only unlabeled data. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and unexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to training data quantities in semi-supervised settings. Correct transfer of these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL - up to 7.4% on phase recognition and 20% on tool presence detection - as well as state-of-the-art semi-supervised phase recognition approaches by up to 14%. Further results obtained on a highly diverse selection of surgical datasets exhibit strong generalization properties. The code is available at https://github.com/CAMMA-public/SelfSupSurg.
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2022-07-01
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  7. Book ; Online: Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge

    Zia, Aneeq / Bhattacharyya, Kiran / Liu, Xi / Berniker, Max / Wang, Ziheng / Nespolo, Rogerio / Kondo, Satoshi / Kasai, Satoshi / Hirasawa, Kousuke / Liu, Bo / Austin, David / Wang, Yiheng / Futrega, Michal / Puget, Jean-Francois / Li, Zhenqiang / Sato, Yoichi / Fujii, Ryo / Hachiuma, Ryo / Masuda, Mana / Saito, Hideo / Wang, An / Xu, Mengya / Islam, Mobarakol / Bai, Long / Pang, Winnie / Ren, Hongliang / Nwoye, Chinedu / Sestini, Luca / Padoy, Nicolas / Nielsen, Maximilian / Schüttler, Samuel / Sentker, Thilo / Husseini, Hümeyra / Baltruschat, Ivo / Schmitz, Rüdiger / Werner, René / Matsun, Aleksandr / Farooq, Mugariya / Saaed, Numan / Viera, Jose Renato Restom / Yaqub, Mohammad / Getty, Neil / Xia, Fangfang / Zhao, Zixuan / Duan, Xiaotian / Yao, Xing / Lou, Ange / Yang, Hao / Han, Jintong / Noble, Jack / Wu, Jie Ying / Alshirbaji, Tamer Abdulbaki / Jalal, Nour Aldeen / Arabian, Herag / Ding, Ning / Moeller, Knut / Chen, Weiliang / He, Quan / Bilal, Muhammad / Akinosho, Taofeek / Qayyum, Adnan / Caputo, Massimo / Vohra, Hunaid / Loizou, Michael / Ajayi, Anuoluwapo / Berrou, Ilhem / Niyi-Odumosu, Faatihah / Maier-Hein, Lena / Stoyanov, Danail / Speidel, Stefanie / Jarc, Anthony

    2023  

    Abstract The ability to automatically detect and track surgical instruments in endoscopic videos can enable transformational interventions. Assessing surgical performance and efficiency, identifying skilled tool use and choreography, and planning operational and logistical aspects of OR resources are just a few of the applications that could benefit. Unfortunately, obtaining the annotations needed to train machine learning models to identify and localize surgical tools is a difficult task. Annotating bounding boxes frame-by-frame is tedious and time-consuming, yet large amounts of data with a wide variety of surgical tools and surgeries must be captured for robust training. Moreover, ongoing annotator training is needed to stay up to date with surgical instrument innovation. In robotic-assisted surgery, however, potentially informative data like timestamps of instrument installation and removal can be programmatically harvested. The ability to rely on tool installation data alone would significantly reduce the workload to train robust tool-tracking models. With this motivation in mind we invited the surgical data science community to participate in the challenge, SurgToolLoc 2022. The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools and localize them in video frames with bounding boxes. We present the results of this challenge along with many of the team's efforts. We conclude by discussing these results in the broader context of machine learning and surgical data science. The training data used for this challenge consisting of 24,695 video clips with tool presence labels is also being released publicly and can be accessed at https://console.cloud.google.com/storage/browser/isi-surgtoolloc-2022.
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 670
    Publishing date 2023-05-11
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
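The challenge uses tool-presence labels as weak supervision for localization. A common way to get localization out of a presence-only classifier — not necessarily the method any challenge team used — is a class activation map: a weighted sum of the classifier's spatial feature maps whose peak suggests a box center. The tiny pure-Python sketch below illustrates that mechanism; all names and the 3x3 maps are invented for the example.

```python
def class_activation_map(feature_maps, class_weights):
    """Weighted sum of spatial feature maps -> one per-class heatmap.
    With only presence labels, a trained classifier's weights can
    still highlight where the tool is in the frame."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(wgt * fm[i][j] for wgt, fm in zip(class_weights, feature_maps))
             for j in range(w)] for i in range(h)]

def hottest_cell(cam):
    """Row/column of the heatmap maximum: a candidate box center."""
    best = max((v, i, j) for i, row in enumerate(cam) for j, v in enumerate(row))
    return (best[1], best[2])

# Two 3x3 feature maps; the classifier weights the first map highly,
# so its peak drives the localization.
fm1 = [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
fm2 = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
cam = class_activation_map([fm1, fm2], class_weights=[1.0, 0.2])
assert hottest_cell(cam) == (1, 1)
```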
