LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 147

  1. Article ; Online: IJCARS-IPCAI 2023 special issue: conference information processing for computer-assisted interventions, 14th International Conference 2023-part 1.

    Collins, Toby / Dou, Qi / Unberath, Mathias

    International journal of computer assisted radiology and surgery

    2023  Volume 18, Issue 6, Page(s) 969–970

    MeSH term(s) Humans ; Image Processing, Computer-Assisted ; Electronic Data Processing ; Computers
    Language English
    Publishing date 2023-06-03
    Publishing country Germany
    Document type Editorial
    ZDB-ID 2365628-1
    ISSN (online) 1861-6429
    ISSN (print) 1861-6410
    DOI 10.1007/s11548-023-02972-5
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Article ; Online: Take a shot! Natural language control of intelligent robotic X-ray systems in surgery.

    Killeen, Benjamin D / Chaudhary, Shreayan / Osgood, Greg / Unberath, Mathias

    International journal of computer assisted radiology and surgery

    2024  

    Abstract Purpose: The expanding capabilities of surgical systems bring with them increasing complexity in the interfaces that humans use to control them. Robotic C-arm X-ray imaging systems, for instance, often require manipulation of independent axes via joysticks, while higher-level control options hide inside device-specific menus. The complexity of these interfaces hinders "ready-to-hand" use of high-level functions. Natural language offers a flexible, familiar interface for surgeons to express their desired outcome rather than remembering the steps necessary to achieve it, enabling direct access to task-aware, patient-specific C-arm functionality.
    Methods: We present an English-language voice interface for controlling a robotic X-ray imaging system with task-aware functions for pelvic trauma surgery. Our fully integrated system uses a large language model (LLM) to convert natural spoken commands into machine-readable instructions, enabling low-level commands like "Tilt back a bit" to increase the angular tilt, or patient-specific directions like "Go to the obturator oblique view of the right ramus" based on automated image analysis.
    Results: We evaluate our system with 212 prompts provided by an attending physician, in which the system performed satisfactory actions 97% of the time. To test the fully integrated system, we conduct a real-time study in which an attending physician placed orthopedic hardware along desired trajectories through an anthropomorphic phantom, interacting solely with an X-ray system via voice.
    Conclusion: Voice interfaces offer a convenient, flexible way for surgeons to manipulate C-arms based on desired outcomes rather than device-specific processes. As LLMs grow increasingly capable, so too will their applications in supporting higher-level interactions with surgical assistance systems.
    Language English
    Publishing date 2024-04-15
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 2365628-1
    ISSN (online) 1861-6429
    ISSN (print) 1861-6410
    DOI 10.1007/s11548-024-03120-3
    Database MEDical Literature Analysis and Retrieval System OnLINE
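
    As a reading aid, the core step this abstract describes, turning a transcribed utterance into a validated machine-readable instruction via an LLM, can be sketched as below. This is a minimal illustration, not the authors' system: the action schema, the prompt, and the llm() stub are invented assumptions.

      import json

      SCHEMA = {"action": ["tilt", "rotate", "go_to_view"],
                "magnitude_deg": "float or null",
                "view": "string or null"}

      def llm(prompt: str) -> str:
          # Stand-in for a real LLM call; returns a canned reply for the demo.
          return '{"action": "tilt", "magnitude_deg": -5.0, "view": null}'

      def parse_command(utterance: str) -> dict:
          # Ask the model for JSON matching the schema, then validate it
          # before anything is forwarded to the robotic C-arm.
          prompt = ("Convert the surgeon's command to JSON matching "
                    + json.dumps(SCHEMA) + "\nCommand: " + utterance)
          reply = json.loads(llm(prompt))
          assert reply["action"] in SCHEMA["action"]
          return reply

      print(parse_command("Tilt back a bit"))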

  3. Article ; Online: A Fully Differentiable Framework for 2D/3D Registration and the Projective Spatial Transformers.

    Gao, Cong / Feng, Anqi / Liu, Xingtong / Taylor, Russell H / Armand, Mehran / Unberath, Mathias

    IEEE transactions on medical imaging

    2024  Volume 43, Issue 1, Page(s) 275–285

    Abstract Image-based 2D/3D registration is a critical technique for fluoroscopic guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shape similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters, and is trained using an innovative double backward gradient-driven loss function. We compare the most popular learning-based pose regression methods in the literature and use the well-established CMAES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE) and success rate (SR) with a threshold of 10mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMAES is 4.4mm with a SR of 65.6% in simulation, and 2.2mm with a SR of 73.2% in real data. The CMAES SRs without using ProST registration are 28.5% and 36.0% in simulation and real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe that the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
    MeSH term(s) Imaging, Three-Dimensional/methods ; Fluoroscopy/methods ; Pelvis ; Software ; Algorithms
    Language English
    Publishing date 2024-01-02
    Publishing country United States
    Document type Journal Article
    ZDB-ID 622531-7
    ISSN (online) 1558-254X
    ISSN (print) 0278-0062
    DOI 10.1109/TMI.2023.3299588
    Database MEDical Literature Analysis and Retrieval System OnLINE
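
    The property this abstract emphasizes, differentiability of the projection with respect to pose parameters, can be illustrated with a toy: gradients of an image-similarity loss flow through a differentiable "projector" back to the pose. The 1-D Gaussian renderer below is an assumed stand-in, not ProST itself (the real implementation is in the linked repository).

      import torch

      xs = torch.linspace(-1.0, 1.0, 200)

      def project(pose):
          # Differentiable stand-in renderer: a broad Gaussian bump centred at
          # the pose; its width mimics the wide, convex similarity basin that
          # the ProST network learns.
          return torch.exp(-((xs - pose) ** 2) / 0.5)

      target = project(torch.tensor(0.3))            # "fixed" 2-D image
      pose = torch.tensor(-0.5, requires_grad=True)  # poor initial estimate
      opt = torch.optim.Adam([pose], lr=0.02)

      for _ in range(500):
          opt.zero_grad()
          loss = torch.mean((project(pose) - target) ** 2)
          loss.backward()  # gradients reach the pose through the projector
          opt.step()

      print(f"recovered pose: {pose.item():.3f} (true 0.300)")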

  4. Article ; Online: Rethinking causality-driven robot tool segmentation with temporal constraints.

    Ding, Hao / Wu, Jie Ying / Li, Zhaoshuo / Unberath, Mathias

    International journal of computer assisted radiology and surgery

    2023  Volume 18, Issue 6, Page(s) 1009–1016

    Abstract Purpose: Vision-based robot tool segmentation plays a fundamental role in surgical robot perception and downstream tasks. CaRTS, based on a complementary causal model, has shown promising performance in unseen counterfactual surgical environments in the presence of smoke, blood, etc. However, CaRTS requires over 30 iterations of optimization to converge for a single image due to limited observability.
    Method: To address the above limitations, we take temporal relations into consideration and propose a temporal causal model for robot tool segmentation on video sequences. We design an architecture named Temporally Constrained CaRTS (TC-CaRTS). TC-CaRTS has three novel modules to complement CaRTS: a temporal optimization pipeline, a kinematics correction network, and spatial-temporal regularization.
    Results: Experiment results show that TC-CaRTS requires fewer iterations to achieve the same or better performance as CaRTS on different domains. All three modules are proven to be effective.
    Conclusion: We propose TC-CaRTS, which takes advantage of temporal constraints as additional observability. We show that TC-CaRTS outperforms prior work in the robot tool segmentation task with improved convergence speed on test datasets from different domains.
    MeSH term(s) Humans ; Neural Networks, Computer ; Robotics ; Biomechanical Phenomena ; Image Processing, Computer-Assisted/methods
    Language English
    Publishing date 2023-04-07
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 2365628-1
    ISSN (online) 1861-6429
    ISSN (print) 1861-6410
    DOI 10.1007/s11548-023-02872-8
    Database MEDical Literature Analysis and Retrieval System OnLINE
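
    A minimal sketch of the temporal-constraint idea: penalise tool-mask predictions that change abruptly between consecutive frames. The L2 penalty below is an assumed simplification for illustration, not the published TC-CaRTS formulation.

      import torch

      def temporal_consistency(mask_t, mask_prev):
          # Soft masks in [0, 1]; tools move little between adjacent video
          # frames, so large frame-to-frame disagreement is penalised.
          return torch.mean((mask_t - mask_prev) ** 2)

      prev = torch.rand(1, 1, 64, 64)
      curr = (prev + 0.05 * torch.randn_like(prev)).clamp(0, 1)
      print(temporal_consistency(curr, prev))  # small for coherent frames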

  5. Article ; Online: Correction to: Nail it! vision-based drift correction for accurate mixed reality surgical guidance.

    Gu, Wenhao / Knopf, Jonathan / Cast, John / Higgins, Laurence D / Knopf, David / Unberath, Mathias

    International journal of computer assisted radiology and surgery

    2023  Volume 18, Issue 12, Page(s) 2357

    Language English
    Publishing date 2023-08-05
    Publishing country Germany
    Document type Published Erratum
    ZDB-ID 2365628-1
    ISSN (online) 1861-6429
    ISSN (print) 1861-6410
    DOI 10.1007/s11548-023-03008-8
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Article ; Online: Toward automated interpretable AAST grading for blunt splenic injury.

    Chen, Haomin / Unberath, Mathias / Dreizin, David

    Emergency radiology

    2022  Volume 30, Issue 1, Page(s) 41–50

    Abstract Background: The American Association for the Surgery of Trauma (AAST) splenic organ injury scale (OIS) is the most frequently used CT-based grading system for blunt splenic trauma. However, reported inter-rater agreement is modest, and an algorithm that objectively automates grading based on transparent and verifiable criteria could serve as a high-trust diagnostic aid.
    Purpose: To pilot the development of an automated interpretable multi-stage deep learning-based system to predict AAST grade from admission trauma CT.
    Methods: Our pipeline includes 4 parts: (1) automated splenic localization, (2) Faster R-CNN-based detection of pseudoaneurysms (PSA) and active bleeds (AB), (3) nnU-Net segmentation and quantification of splenic parenchymal disruption (SPD), and (4) a directed graph that infers AAST grades from detection and segmentation results. Training and validation are performed on a dataset of adult patients (age ≥ 18) with voxelwise labeling, consensus AAST grading, and hemorrhage-related outcome data (n = 174).
    Results: AAST classification agreement (weighted κ) between automated and consensus AAST grades was substantial (0.79). High-grade (IV and V) injuries were predicted with accuracy, positive predictive value, and negative predictive value of 92%, 95%, and 89%. The area under the curve for predicting hemorrhage control intervention was comparable between expert consensus and automated AAST grading (0.83 vs 0.88). The mean combined inference time for the pipeline was 96.9 s.
    Conclusions: The results of our method were rapid and verifiable, with high agreement between automated and expert consensus grades. Diagnosis of high-grade lesions and prediction of hemorrhage control intervention produced accurate results in adult patients.
    MeSH term(s) Adult ; Humans ; United States ; Tomography, X-Ray Computed/methods ; Predictive Value of Tests ; Wounds, Nonpenetrating/surgery ; Spleen/injuries ; Hemorrhage ; Retrospective Studies
    Language English
    Publishing date 2022-11-12
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1425144-9
    ISSN (online) 1438-1435
    ISSN (print) 1070-3004
    DOI 10.1007/s10140-022-02099-1
    Database MEDical Literature Analysis and Retrieval System OnLINE
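
    Stage (4) of the pipeline, a directed graph inferring AAST grades from upstream detection and segmentation outputs, amounts to transparent rule-based inference, which is what makes the system verifiable. The toy below conveys only the shape of such rules; the cut-offs are invented placeholders, not the published AAST criteria or the authors' graph.

      def infer_aast_grade(psa_or_bleed: bool, spd_fraction: float) -> int:
          # Hypothetical rules: vascular injury (PSA/AB) pushes the grade to
          # IV-V; otherwise parenchymal disruption (SPD) sets the grade.
          if psa_or_bleed:
              return 4 if spd_fraction < 0.25 else 5
          if spd_fraction > 0.25:
              return 3
          return 2 if spd_fraction > 0.10 else 1

      print(infer_aast_grade(False, 0.18))  # -> 2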

  7. Article ; Online: AR-Loupe: Magnified Augmented Reality by Combining an Optical See-Through Head-Mounted Display and a Loupe.

    Qian, Long / Song, Tianyu / Unberath, Mathias / Kazanzides, Peter

    IEEE transactions on visualization and computer graphics

    2022  Volume 28, Issue 7, Page(s) 2550–2562

    Abstract Head-mounted loupes can increase the user's visual acuity to observe the details of an object. On the other hand, optical see-through head-mounted displays (OST-HMD) are able to provide virtual augmentations registered with real objects. In this article, we propose AR-Loupe, combining the advantages of loupes and OST-HMDs, to offer augmented reality in the user's magnified field-of-vision. Specifically, AR-Loupe integrates a commercial OST-HMD, Magic Leap One, and binocular Galilean magnifying loupes, with customized 3D-printed attachments. We model the combination of the user's eye, the screen of the OST-HMD, and the optical loupe as a pinhole camera. The calibration of AR-Loupe involves interactive view segmentation and an adapted version of the stereo single point active alignment method (Stereo-SPAAM). We conducted a two-phase multi-user study to evaluate AR-Loupe. The users were able to achieve sub-millimeter accuracy (0.82 mm) on average, which is significantly smaller compared to normal AR guidance (1.49 mm). The mean calibration time was 268.46 s. With the increased size of real objects through optical magnification and the registered augmentation, AR-Loupe can aid users in high-precision tasks with better visual acuity and higher accuracy.
    MeSH term(s) Augmented Reality ; Calibration ; Computer Graphics ; Smart Glasses ; User-Computer Interface
    Language English
    Publishing date 2022-05-26
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2020.3037284
    Database MEDical Literature Analysis and Retrieval System OnLINE
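
    Modelling eye, HMD screen, and loupe as a pinhole camera reduces calibration to estimating a 3x4 projection matrix from 2D-3D alignments, which the SPAAM family solves as a homogeneous least-squares problem. A minimal sketch on synthetic data, assuming the standard single-view DLT formulation rather than the paper's exact Stereo-SPAAM variant:

      import numpy as np

      rng = np.random.default_rng(0)
      P_true = rng.normal(size=(3, 4))                     # unknown projection
      X = np.c_[rng.uniform(-1, 1, (12, 3)), np.ones(12)]  # 3-D points (homog.)
      uvw = (P_true @ X.T).T
      uv = uvw[:, :2] / uvw[:, 2:3]                        # observed 2-D alignments

      # Each alignment gives two homogeneous equations in the 12 entries of P;
      # the SVD null vector recovers P up to scale.
      A = []
      for Xh, (u, v) in zip(X, uv):
          A.append(np.r_[Xh, np.zeros(4), -u * Xh])
          A.append(np.r_[np.zeros(4), Xh, -v * Xh])
      P = np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 4)

      P /= np.linalg.norm(P)
      Q = P_true / np.linalg.norm(P_true)
      print(min(np.abs(P - Q).max(), np.abs(P + Q).max()) < 1e-8)  # True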

  8. Book ; Online: TransNuSeg: A Lightweight Multi-Task Transformer for Nuclei Segmentation

    He, Zhenqi / Unberath, Mathias / Ke, Jing / Shen, Yiqing

    2023  

    Abstract Nuclei appear small in size, yet in real clinical practice the global spatial information and the correlation of color or brightness contrast between nuclei and background are considered crucial for accurate nuclei segmentation. However, the field of automatic nuclei segmentation is dominated by Convolutional Neural Networks (CNNs), while the potential of the recently prevalent Transformers, which are powerful in capturing local-global correlations, has not been fully explored. To this end, we make the first attempt at a pure Transformer framework for nuclei segmentation, called TransNuSeg. Different from prior work, we decouple the challenging nuclei segmentation task into an intrinsic multi-task learning task, where a tri-decoder structure is employed for nuclei instance, nuclei edge, and clustered edge segmentation, respectively. To eliminate the divergent predictions from different branches in previous work, a novel self-distillation loss is introduced to explicitly impose consistency regulation between branches. Moreover, to exploit the high correlation between branches and also reduce the number of parameters, an efficient attention-sharing scheme is proposed that partially shares the self-attention heads amongst the tri-decoders. Finally, a token MLP bottleneck replaces the over-parameterized Transformer bottleneck for a further reduction in model complexity. Experiments on two datasets of different modalities, including MoNuSeg, have shown that our method can outperform state-of-the-art counterparts such as CA2.5-Net by 2-3% Dice with 30% fewer parameters. In conclusion, TransNuSeg confirms the strength of Transformers in the context of nuclei segmentation and can thus serve as an efficient solution for real clinical practice. Code is available at https://github.com/zhenqi-he/transnuseg.

    Comment: Early accepted by MICCAI 2023
    Keywords Electrical Engineering and Systems Science - Image and Video Processing ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2023-07-16
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
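
    The self-distillation consistency idea can be sketched as one branch "teaching" another: edges implied by the instance decoder supervise the edge decoder. The edge-from-mask operator and MSE penalty below are assumed simplifications; the paper's actual loss and tri-decoder wiring differ (see the linked repository).

      import torch
      import torch.nn.functional as F

      def edges_from_mask(mask):
          # Soft edge map: gradient magnitude of the soft instance mask.
          gx = (mask[..., :, 1:] - mask[..., :, :-1]).abs()
          gy = (mask[..., 1:, :] - mask[..., :-1, :]).abs()
          return F.pad(gx, (0, 1)) + F.pad(gy, (0, 0, 0, 1))

      def self_distillation_loss(instance_pred, edge_pred):
          # The detached instance branch acts as the teacher, imposing
          # consistency between the branches' predictions.
          return F.mse_loss(edge_pred, edges_from_mask(instance_pred).detach())

      inst = torch.rand(2, 1, 64, 64)  # stand-in decoder outputs
      edge = torch.rand(2, 1, 64, 64)
      print(self_distillation_loss(inst, edge))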

  9. Article ; Online: Assessment of linear regression of peripapillary optical coherence tomography retinal nerve fiber layer measurements to forecast glaucoma trajectory.

    Bradley, Chris / Hou, Kaihua / Herbert, Patrick / Unberath, Mathias / Hager, Greg / Boland, Michael V / Ramulu, Pradeep / Yohannan, Jithin

    PloS one

    2024  Volume 19, Issue 1, Page(s) e0296674

    Abstract Linear regression of optical coherence tomography measurements of peripapillary retinal nerve fiber layer thickness is often used to detect glaucoma progression and forecast future disease course. However, current measurement frequencies suggest that clinicians often apply linear regression to a relatively small number of measurements (e.g., less than a handful). In this study, we estimate the accuracy of linear regression in predicting the next reliable measurement of average retinal nerve fiber layer thickness using Zeiss Cirrus optical coherence tomography measurements of average retinal nerve fiber layer thickness from a sample of 6,471 eyes with glaucoma or glaucoma-suspect status. Linear regression is compared to two null models: no glaucoma worsening, and worsening due to aging. Linear regression on the first M ≥ 2 measurements was significantly worse than both null models at predicting a reliable M+1st measurement for 2 ≤ M ≤ 6. This range was reduced to 2 ≤ M ≤ 5 when retinal nerve fiber layer thickness measurements were first "corrected" for scan quality. Simulations based on measurement frequencies in our sample (on average 393 ± 190 days between consecutive measurements) show that linear regression outperforms both null models when M ≥ 5 and the goal is to forecast moderate (75th percentile) worsening, and when M ≥ 3 for rapid (90th percentile) worsening. If linear regression is used to assess disease trajectory with a small number of measurements over short time periods (e.g., 1-2 years), as is often the case in clinical practice, the number of optical coherence tomography examinations needs to be increased.
    MeSH term(s) Humans ; Tomography, Optical Coherence/methods ; Linear Models ; Retinal Ganglion Cells ; Glaucoma/diagnostic imaging ; Nerve Fibers ; Intraocular Pressure
    Language English
    Publishing date 2024-01-12
    Publishing country United States
    Document type Journal Article
    ZDB-ID 2267670-3
    ISSN (online) 1932-6203
    DOI 10.1371/journal.pone.0296674
    Database MEDical Literature Analysis and Retrieval System OnLINE
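
    The procedure under evaluation is ordinary least squares on the first M measurements, extrapolated to the date of measurement M+1 and compared against a null model. A minimal sketch with made-up numbers (the study's data are not reproduced here):

      import numpy as np

      days = np.array([0.0, 380, 760, 1150, 1530])     # visit dates (days)
      rnfl = np.array([92.0, 91.1, 90.4, 89.2, 88.5])  # mean RNFL thickness (um)

      M = 4                                        # measurements used for the fit
      slope, intercept = np.polyfit(days[:M], rnfl[:M], deg=1)
      forecast_lr = slope * days[M] + intercept    # linear-regression forecast
      forecast_null = rnfl[M - 1]                  # "no worsening" null model

      truth = rnfl[M]
      print(f"LR error {abs(forecast_lr - truth):.2f} um, "
            f"null error {abs(forecast_null - truth):.2f} um")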

  10. Article ; Online: Cognitive effort detection for tele-robotic surgery via personalized pupil response modeling.

    Büter, Regine / Soberanis-Mukul, Roger D / Shankar, Rohit / Ruiz Puentes, Paola / Ghazi, Ahmed / Wu, Jie Ying / Unberath, Mathias

    International journal of computer assisted radiology and surgery

    2024  

    Abstract Purpose: Gaze tracking and pupillometry are established proxies for cognitive load, giving insights into a user's mental effort. In tele-robotic surgery, knowing a user's cognitive load can inspire novel human-machine interaction designs, fostering contextual surgical assistance systems and personalized training programs. While pupillometry-based methods for estimating cognitive effort have been proposed, their application in surgery is limited by the pupil's sensitivity to brightness changes, which can mask the pupil's response to cognitive load. Thus, methods that consider both pupil and brightness conditions are essential for detecting cognitive effort in unconstrained scenarios.
    Methods: To contend with this challenge, we introduce a personalized pupil response model integrating pupil and brightness-based features. Discrepancies between predicted and measured pupil diameter indicate dilations due to non-brightness-related sources, i.e., cognitive effort. Combined with gaze entropy, it can detect cognitive load using a random forest classifier. To test our model, we perform a user study with the da Vinci Research Kit, where 17 users perform pick-and-place tasks in addition to auditory tasks known to generate cognitive effort responses.
    Results: We compare our method to two baselines (BCPD and CPD), demonstrating favorable performance in varying brightness conditions. Our method achieves an average true positive rate of 0.78, outperforming the baselines (0.57 and 0.64).
    Conclusion: We present a personalized brightness-aware model for cognitive effort detection able to operate under unconstrained brightness conditions, comparing favorably to competing approaches, contributing to the advancement of cognitive effort detection in tele-robotic surgery. Future work will consider alternative learning strategies, handling the difficult positive-unlabeled scenario in user studies, where only some positive and no negative events are reliably known.
    Language English
    Publishing date 2024-04-08
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 2365628-1
    ISSN (online) 1861-6429
    ISSN (print) 1861-6410
    DOI 10.1007/s11548-024-03108-z
    Database MEDical Literature Analysis and Retrieval System OnLINE
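
    A minimal sketch of the described pipeline on synthetic data: a personalised model predicts pupil diameter from brightness, the residual isolates non-brightness-related dilation, and, together with gaze entropy, feeds a random forest. The data generation and feature definitions are assumptions for illustration only.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      brightness = rng.uniform(0, 1, 500)
      load = rng.integers(0, 2, 500)  # 1 = cognitive effort present
      pupil = 4.0 - 1.5 * brightness + 0.4 * load + rng.normal(0, 0.05, 500)

      # Personalised brightness model, fitted on known low-load samples; the
      # residual captures dilation not explained by brightness.
      base = LinearRegression().fit(brightness[load == 0].reshape(-1, 1),
                                    pupil[load == 0])
      residual = pupil - base.predict(brightness.reshape(-1, 1))
      gaze_entropy = 0.5 * load + rng.normal(0, 1, 500)  # stand-in feature

      features = np.c_[residual, gaze_entropy]
      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      clf.fit(features, load)
      print(f"training accuracy: {clf.score(features, load):.2f}")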
