LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-10 of 54

  1. Book ; Online ; Thesis: Anwendung von Künstlicher Intelligenz und Strukturierter Befundung in der Radiologischen Diagnostik

    Sabel, Bastian [Author]

    2021  

    Author's details Bastian Sabel
    Keywords Medizin, Gesundheit ; Medicine, Health
    Subject code sg610
    Language German
    Publisher Universitätsbibliothek der Ludwig-Maximilians-Universität
    Publishing place München
    Document type Book ; Online ; Thesis
    Database Digital theses on the web

  2. Article ; Online: WindowNet: Learnable Windows for Chest X-ray Classification.

    Wollek, Alessandro / Hyska, Sardi / Sabel, Bastian / Ingrisch, Michael / Lasser, Tobias

    Journal of imaging

    2023  Volume 9, Issue 12

    Abstract Public chest X-ray (CXR) data sets are commonly compressed to a lower bit depth to reduce their size, potentially hiding subtle diagnostic features. In contrast, radiologists apply a windowing operation to the uncompressed image to enhance such subtle features. While it has been shown that windowing improves classification performance on computed tomography (CT) images, the impact of such an operation on CXR classification performance remains unclear. In this study, we show that windowing strongly improves the CXR classification performance of machine learning models and propose WindowNet, a model that learns multiple optimal window settings. Our model achieved an average AUC score of 0.812 compared with the 0.759 score of a commonly used architecture without windowing capabilities on the MIMIC data set.
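The windowing operation mentioned in this abstract maps a sub-range of the raw detector intensities onto the displayed gray values. A minimal NumPy sketch of a single fixed (not learned) window; the function name and the center/width values are illustrative, not the settings learned by WindowNet:

```python
import numpy as np

def apply_window(image, center, width):
    """Clip intensities to [center - width/2, center + width/2]
    and rescale the windowed range to [0, 1]."""
    low = center - width / 2.0
    high = center + width / 2.0
    windowed = np.clip(image.astype(np.float32), low, high)
    return (windowed - low) / (high - low)

# Hypothetical 12-bit radiograph (values 0..4095).
img = np.random.randint(0, 4096, size=(224, 224))
out = apply_window(img, center=2048, width=1024)
```

WindowNet itself learns multiple such center/width pairs end to end; this sketch only shows the underlying per-window operation.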
    Language English
    Publishing date 2023-12-06
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2824270-1
    ISSN (online) 2313-433X
    DOI 10.3390/jimaging9120270
    Database MEDical Literature Analysis and Retrieval System OnLINE

  3. Article ; Online: Out-of-distribution detection with in-distribution voting using the medical example of chest x-ray classification.

    Wollek, Alessandro / Willem, Theresa / Ingrisch, Michael / Sabel, Bastian / Lasser, Tobias

    Medical physics

    2023  Volume 51, Issue 4, Page(s) 2721–2732

    Abstract Background: Deep learning models are being applied to more and more use cases with astonishing success stories, but how do they perform in the real world? Models are typically tested on specific cleaned data sets, but when deployed in the real world, the model will encounter unexpected, out-of-distribution (OOD) data.
    Purpose: To investigate the impact of OOD radiographs on existing chest x-ray classification models and to increase their robustness against OOD data.
    Methods: The study employed the commonly used chest x-ray classification model, CheXnet, trained on the chest x-ray 14 data set, and tested its robustness against OOD data using three public radiography data sets: IRMA, Bone Age, and MURA, and the ImageNet data set. To detect OOD data for multi-label classification, we proposed in-distribution voting (IDV). The OOD detection performance is measured across data sets using the area under the receiver operating characteristic curve (AUC) analysis and compared with Mahalanobis-based OOD detection, MaxLogit, MaxEnergy, self-supervised OOD detection (SS OOD), and CutMix.
    Results: Without additional OOD detection, the chest x-ray classifier failed to discard any OOD images, with an AUC of 0.5. The proposed IDV approach trained on ID (chest x-ray 14) and OOD data (IRMA and ImageNet) achieved, on average, 0.999 OOD AUC across the three data sets, surpassing all other OOD detection methods. Mahalanobis-based OOD detection achieved an average OOD detection AUC of 0.982. IDV trained solely with a few thousand ImageNet images had an AUC of 0.913, which was considerably higher than MaxLogit (0.726), MaxEnergy (0.724), SS OOD (0.476), and CutMix (0.376).
    Conclusions: The performance of all tested OOD detection methods did not translate well to radiography data sets, except Mahalanobis-based OOD detection and the proposed IDV method. Consequently, training solely on ID data led to incorrect classification of OOD images as ID, resulting in increased false positive rates. IDV substantially improved the model's ID classification performance, even when trained with data that will not occur in the intended use case or test set (ImageNet), without additional inference overhead or performance decrease in the target classification. The corresponding code is available at https://gitlab.lrz.de/IP/a-knee-cannot-have-lung-disease.
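One plausible reading of the in-distribution voting (IDV) idea described in this abstract: each class-wise output votes "in-distribution" when its probability reaches a per-class threshold, and a sample is flagged OOD only when no class votes. The function name, thresholds, and arrays below are illustrative, not taken from the paper:

```python
import numpy as np

def in_distribution_vote(class_probs, thresholds):
    """Flag a sample as in-distribution (True) when at least one
    class-wise probability reaches its per-class threshold."""
    votes = class_probs >= thresholds  # (n_samples, n_classes) booleans
    return votes.any(axis=1)           # one ID/OOD flag per sample

probs = np.array([[0.92, 0.10, 0.05],   # confident in class 0 -> ID
                  [0.20, 0.15, 0.22]])  # no confident class -> OOD
is_id = in_distribution_vote(probs, thresholds=np.array([0.5, 0.5, 0.5]))
```

Because the voting happens on the classifier's existing multi-label outputs, this scheme adds no inference overhead, consistent with the conclusions above.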
    MeSH term(s) Voting ; X-Rays ; Radiography ; ROC Curve
    Language English
    Publishing date 2023-10-13
    Publishing country United States
    Document type Journal Article
    ZDB-ID 188780-4
    ISSN (online) 2473-4209
    ISSN (print) 0094-2405
    DOI 10.1002/mp.16790
    Database MEDical Literature Analysis and Retrieval System OnLINE

  4. Article ; Online: German CheXpert Chest X-ray Radiology Report Labeler.

    Wollek, Alessandro / Hyska, Sardi / Sedlmeyr, Thomas / Haitzer, Philip / Rueckel, Johannes / Sabel, Bastian O / Ingrisch, Michael / Lasser, Tobias

    RoFo : Fortschritte auf dem Gebiete der Röntgenstrahlen und der Nuklearmedizin

    2024  

    Title translation Deutscher CheXpert-Röntgenthorax-Befundlabeler.
    Abstract Purpose:  The aim of this study was to develop an algorithm to automatically extract annotations from German thoracic radiology reports to train deep learning-based chest X-ray classification models.
    Materials and methods:  An automatic label extraction model for German thoracic radiology reports was designed based on the CheXpert architecture. The algorithm can extract labels for twelve common chest pathologies, the presence of support devices, and "no finding". For iterative improvements and to generate a ground truth, a web-based multi-reader annotation interface was created. With the proposed annotation interface, a radiologist annotated 1086 retrospectively collected radiology reports from 2020-2021 (data set 1). The effect of automatically extracted labels on chest radiograph classification performance was evaluated on an additional, in-house pneumothorax data set (data set 2), containing 6434 chest radiographs with corresponding reports, by comparing a DenseNet-121 model trained on extracted labels from the associated reports, image-based pneumothorax labels, and publicly available data, respectively.
    Results:  Comparing automated to manual labeling on data set 1: "mention extraction" class-wise F1 scores ranged from 0.8 to 0.995, the "negation detection" F1 scores from 0.624 to 0.981, and F1 scores for "uncertainty detection" from 0.353 to 0.725. Extracted pneumothorax labels on data set 2 had a sensitivity of 0.997 [95 % CI: 0.994, 0.999] and specificity of 0.991 [95 % CI: 0.988, 0.994]. The model trained on publicly available data achieved an area under the receiver operating curve (AUC) for pneumothorax classification of 0.728 [95 % CI: 0.694, 0.760], while the models trained on automatically extracted labels and on manual annotations achieved values of 0.858 [95 % CI: 0.832, 0.882] and 0.934 [95 % CI: 0.918, 0.949], respectively.
    Conclusion:  Automatic label extraction from German thoracic radiology reports is a promising substitute for manual labeling. By reducing the time required for data annotation, larger training data sets can be created, resulting in improved overall modeling performance. Our results demonstrated that a pneumothorax classifier trained on automatically extracted labels strongly outperformed the model trained on publicly available data, without the need for additional annotation time and performed competitively compared to manually labeled data.
    Key points · An algorithm for automatic German thoracic radiology report annotation was developed. · Automatic label extraction is a promising substitute for manual labeling. · The classifier trained on extracted labels outperformed the model trained on publicly available data.
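The mention/negation logic of a CheXpert-style rule-based labeler can be illustrated with a toy sketch. The German trigger phrases and the single pathology below are hypothetical stand-ins for the far larger phrase sets the actual labeler uses, and uncertainty detection is omitted:

```python
import re

# Illustrative phrase lists; the real labeler covers twelve pathologies,
# support devices, "no finding", and uncertainty markers.
MENTIONS = {"Pneumothorax": [r"pneumothorax"]}
NEGATIONS = [r"\bkein(e|en)?\b", r"\bohne\b", r"ausgeschlossen"]

def label_report(report):
    """Return {pathology: 1 (positive) or 0 (negated)};
    pathologies without a mention are omitted."""
    labels = {}
    for sentence in re.split(r"[.!?]", report.lower()):
        for pathology, patterns in MENTIONS.items():
            if any(re.search(p, sentence) for p in patterns):
                negated = any(re.search(n, sentence) for n in NEGATIONS)
                labels[pathology] = 0 if negated else 1
    return labels

print(label_report("Kein Pneumothorax."))           # {'Pneumothorax': 0}
print(label_report("Pneumothorax rechts apikal."))  # {'Pneumothorax': 1}
```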
    Language English
    Publishing date 2024-01-31
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 554830-5
    ISSN (online) 1438-9010
    ISSN 0340-1618 ; 0936-6652 ; 1433-5972 ; 1438-9029
    DOI 10.1055/a-2234-8268
    Database MEDical Literature Analysis and Retrieval System OnLINE

  5. Article ; Online: Language model-based labeling of German thoracic radiology reports.

    Wollek, Alessandro / Haitzer, Philip / Sedlmeyr, Thomas / Hyska, Sardi / Rueckel, Johannes / Sabel, Bastian O / Ingrisch, Michael / Lasser, Tobias

    RoFo : Fortschritte auf dem Gebiete der Röntgenstrahlen und der Nuklearmedizin

    2024  

    Title translation Sprachmodellbasiertes Labeling Deutscher Röntgenthoraxbefunde.
    Abstract The aim of this study was to explore the potential of weak supervision in a deep learning-based label prediction model. The goal was to use this model to extract labels from German free-text thoracic radiology reports on chest X-ray images and for training chest X-ray classification models. The proposed label extraction model for German thoracic radiology reports uses a German BERT encoder as a backbone and classifies a report based on the CheXpert labels. For investigating the efficient use of manually annotated data, the model was trained using manual annotations, weak rule-based labels, and both. Rule-based labels were extracted from 66071 retrospectively collected radiology reports from 2017-2021 (DS 0), and 1091 reports from 2020-2021 (DS 1) were manually labeled according to the CheXpert classes. Label extraction performance was evaluated with respect to mention extraction, negation detection, and uncertainty detection by measuring F1 scores. The influence of the label extraction method on chest X-ray classification was evaluated on a pneumothorax data set (DS 2) containing 6434 chest radiographs with associated reports and expert diagnoses of pneumothorax. For this, DenseNet-121 models trained on manual annotations, rule-based and deep learning-based label predictions, and publicly available data were compared. The proposed deep learning-based labeler (DL) performed on average considerably stronger than the rule-based labeler (RB) for all three tasks on DS 1, with F1 scores of 0.938 vs. 0.844 for mention extraction, 0.891 vs. 0.821 for negation detection, and 0.624 vs. 0.518 for uncertainty detection. Pre-training on DS 0 and fine-tuning on DS 1 performed better than training on either DS 0 or DS 1 alone. Chest X-ray pneumothorax classification results (DS 2) were highest when trained with DL labels, with an area under the receiver operating curve (AUC) of 0.939, compared to RB labels with an AUC of 0.858. Training with manual labels performed slightly worse than training with DL labels, with an AUC of 0.934. In contrast, training with a public data set resulted in an AUC of 0.720. Our results show that leveraging a rule-based report labeler for weak supervision leads to improved labeling performance. The pneumothorax classification results demonstrate that our proposed deep learning-based labeler can serve as a substitute for manual labeling, requiring only 1000 manually annotated reports for training.
    Key points · The proposed deep learning-based label extraction model for German thoracic radiology reports performs better than the rule-based model. · Training with limited supervision outperformed training with a small manually labeled data set. · Using predicted labels for pneumothorax classification from chest radiographs performed equally to using manual annotations.
    Citation Wollek A, Haitzer P, Sedlmeyr T et al. Language model-based labeling of German thoracic radiology reports. Fortschr Röntgenstr 2024; DOI 10.1055/a-2287-5054.
    Language English
    Publishing date 2024-04-25
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 554830-5
    ISSN (online) 1438-9010
    ISSN 0340-1618 ; 0936-6652 ; 1433-5972 ; 1438-9029
    DOI 10.1055/a-2287-5054
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Book ; Online: Exploring the Impact of Image Resolution on Chest X-ray Classification Performance

    Wollek, Alessandro / Hyska, Sardi / Sabel, Bastian / Ingrisch, Michael / Lasser, Tobias

    2023  

    Abstract Deep learning models for image classification have often used a resolution of $224\times224$ pixels for computational reasons. This study investigates the effect of image resolution on chest X-ray classification performance, using the ChestX-ray14 dataset. The results show that a higher image resolution, specifically $1024\times1024$ pixels, yields the best overall classification performance, with a slight decline in performance from $256\times256$ to $512\times512$ pixels for most of the pathological classes. Comparison of saliency map-generated bounding boxes revealed that commonly used resolutions are insufficient for finding most pathologies.
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Publishing date 2023-06-09
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  7. Book ; Online: WindowNet

    Wollek, Alessandro / Hyska, Sardi / Sabel, Bastian / Ingrisch, Michael / Lasser, Tobias

    Learnable Windows for Chest X-ray Classification

    2023  

    Abstract Chest X-ray (CXR) images are commonly compressed to a lower resolution and bit depth to reduce their size, potentially altering subtle diagnostic features. Radiologists use windowing operations to enhance image contrast, but the impact of such operations on CXR classification performance is unclear. In this study, we show that windowing can improve CXR classification performance, and propose WindowNet, a model that learns optimal window settings. We first investigate the impact of bit-depth on classification performance and find that a higher bit-depth (12-bit) leads to improved performance. We then evaluate different windowing settings and show that training with a distinct window generally improves pathology-wise classification performance. Finally, we propose and evaluate WindowNet, a model that learns optimal window settings, and show that it significantly improves performance compared to the baseline model without windowing.
    Keywords Electrical Engineering and Systems Science - Image and Video Processing ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2023-06-09
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  8. Article ; Online: Artificial Intelligence to Assess Tracheal Tubes and Central Venous Catheters in Chest Radiographs Using an Algorithmic Approach With Adjustable Positioning Definitions.

    Rueckel, Johannes / Huemmer, Christian / Shahidi, Casra / Buizza, Giulia / Hoppe, Boj Friedrich / Liebig, Thomas / Ricke, Jens / Rudolph, Jan / Sabel, Bastian Oliver

    Investigative radiology

    2023  Volume 59, Issue 4, Page(s) 306–313

    Abstract Purpose: To develop and validate an artificial intelligence algorithm for the positioning assessment of tracheal tubes (TTs) and central venous catheters (CVCs) in supine chest radiographs (SCXRs) by using an algorithm approach allowing for adjustable definitions of intended device positioning.
    Materials and methods: Positioning quality of CVCs and TTs is evaluated by spatially correlating the respective tip positions with anatomical structures. For CVC analysis, a configurable region of interest is defined to approximate the expected region of well-positioned CVC tips from segmentations of anatomical landmarks. The CVC/TT information is estimated by introducing a new multitask neural network architecture for jointly performing type/existence classification, course segmentation, and tip detection. Validation data consisted of 589 SCXRs that have been radiologically annotated for inserted TTs/CVCs, including an experts' categorical positioning assessment (reading 1). In-image positions of algorithm-detected TT/CVC tips could be corrected using a validation software tool (reading 2) that finally allowed for localization accuracy quantification. Algorithmic detection of images with misplaced devices (reading 1 as reference standard) was quantified by receiver operating characteristics.
    Results: Supine chest radiographs were correctly classified according to inserted TTs/CVCs in 100%/98% of the cases, thereby with high accuracy in also spatially localizing the medical device tips: corrections less than 3 mm in >86% (TTs) and 77% (CVCs) of the cases. Chest radiographs with malpositioned devices were detected with area under the curves of >0.98 (TTs), >0.96 (CVCs with accidental vessel turnover), and >0.93 (also suboptimal CVC insertion length considered). The receiver operating characteristics limitations regarding CVC assessment were mainly caused by limitations of the applied CXR position definitions (region of interest derived from anatomical landmarks), not by algorithmic spatial detection inaccuracies.
    Conclusions: The TT and CVC tips were accurately localized in SCXRs by the presented algorithms, but triaging applications for CVC positioning assessment still suffer from the vague definition of optimal CXR positioning. Our algorithm, however, allows for an adjustment of these criteria, theoretically enabling them to meet user-specific or patient subgroups requirements. Besides CVC tip analysis, future work should also include specific course analysis for accidental vessel turnover detection.
    MeSH term(s) Humans ; Central Venous Catheters ; Catheterization, Central Venous/methods ; Artificial Intelligence ; Radiography ; Radiography, Thoracic/methods
    Language English
    Publishing date 2023-09-08
    Publishing country United States
    Document type Journal Article
    ZDB-ID 80345-5
    ISSN (online) 1536-0210
    ISSN (print) 0020-9996
    DOI 10.1097/RLI.0000000000001018
    Database MEDical Literature Analysis and Retrieval System OnLINE

  9. Article ; Online: Implementing Artificial Intelligence for Emergency Radiology Impacts Physicians' Knowledge and Perception: A Prospective Pre- and Post-Analysis.

    Hoppe, Boj Friedrich / Rueckel, Johannes / Dikhtyar, Yevgeniy / Heimer, Maurice / Fink, Nicola / Sabel, Bastian Oliver / Ricke, Jens / Rudolph, Jan / Cyran, Clemens C

    Investigative radiology

    2023  Volume 59, Issue 5, Page(s) 404–412

    Abstract Purpose: The aim of this study was to evaluate the impact of implementing an artificial intelligence (AI) solution for emergency radiology into clinical routine on physicians' perception and knowledge.
    Materials and methods: A prospective interventional survey was performed pre-implementation and 3 months post-implementation of an AI algorithm for fracture detection on radiographs in late 2022. Radiologists and traumatologists were asked about their knowledge and perception of AI on a 7-point Likert scale (-3, "strongly disagree"; +3, "strongly agree"). Self-generated identification codes allowed matching of the same individuals pre- and post-intervention and analysis of paired data with the Wilcoxon signed rank test.
    Results: A total of 47/71 matched participants completed both surveys (66% follow-up rate) and were eligible for analysis (34 radiologists [72%], 13 traumatologists [28%], 15 women [32%]; mean age, 34.8 ± 7.8 years). Post-intervention, there was an increase in agreement that AI "reduced missed findings" (1.28 [pre] vs 1.94 [post], P = 0.003) and made readers "safer" (1.21 vs 1.64, P = 0.048), but not "faster" (0.98 vs 1.21, P = 0.261). There was rising disagreement that AI could "replace the radiological report" (-2.04 vs -2.34, P = 0.038), as well as an increase in self-reported knowledge about "clinical AI," its "chances," and its "risks" (0.40 vs 1.00, 1.21 vs 1.70, and 0.96 vs 1.34; all P's ≤ 0.028). Radiologists used AI results more frequently than traumatologists (P < 0.001) and rated benefits higher (all P's ≤ 0.038), whereas senior physicians were less likely to use AI or endorse its benefits (negative correlation with age, -0.35 to 0.30; all P's ≤ 0.046).
    Conclusions: Implementing AI for emergency radiology into clinical routine has an educative aspect and underlines the concept of AI as a "second reader," to support and not replace physicians.
    MeSH term(s) Female ; Humans ; Adult ; Artificial Intelligence ; Prospective Studies ; Radiology ; Physicians ; Perception
    Language English
    Publishing date 2023-10-17
    Publishing country United States
    Document type Journal Article
    ZDB-ID 80345-5
    ISSN (online) 1536-0210
    ISSN (print) 0020-9996
    DOI 10.1097/RLI.0000000000001034
    Database MEDical Literature Analysis and Retrieval System OnLINE

  10. Article ; Online: Radiological age assessment based on clavicle ossification in CT: enhanced accuracy through deep learning.

    Wesp, Philipp / Schachtner, Balthasar Maria / Jeblick, Katharina / Topalis, Johanna / Weber, Marvin / Fischer, Florian / Penning, Randolph / Ricke, Jens / Ingrisch, Michael / Sabel, Bastian Oliver

    International journal of legal medicine

    2024  

    Abstract Background: Radiological age assessment using reference studies is inherently limited in accuracy due to a finite number of assignable skeletal maturation stages. To overcome this limitation, we present a deep learning approach for continuous age assessment based on clavicle ossification in computed tomography (CT).
    Methods: Thoracic CT scans were retrospectively collected from the picture archiving and communication system. Individuals aged 15.0 to 30.0 years examined in routine clinical practice were included. All scans were automatically cropped around the medial clavicular epiphyseal cartilages. A deep learning model was trained to predict a person's chronological age based on these scans. Performance was evaluated using mean absolute error (MAE). Model performance was compared to an optimistic human reader performance estimate for an established reference study method.
    Results: The deep learning model was trained on 4,400 scans of 1,935 patients (training set: mean age = 24.2 years ± 4.0, 1,132 female) and evaluated on 300 scans of 300 patients with a balanced age and sex distribution (test set: mean age = 22.5 years ± 4.4, 150 female). Model MAE was 1.65 years; the highest absolute error was 6.40 years for females and 7.32 years for males, although these outliers could be attributed to norm variants or pathologic disorders. The human reader estimate had an MAE of 1.84 years, with a highest absolute error of 3.40 years for females and 3.78 years for males.
    Conclusions: We present a deep learning approach for continuous age predictions using CT volumes highlighting the medial clavicular epiphyseal cartilage with performance comparable to the human reader estimate.
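The mean absolute error (MAE) used above to compare model and reader performance can be computed directly; the age values in this sketch are made up for illustration:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between chronological and predicted ages."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Hypothetical chronological vs. predicted ages in years.
mae = mean_absolute_error([22.5, 19.0, 27.3], [21.0, 20.2, 26.1])
```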
    Language English
    Publishing date 2024-01-30
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 1055109-8
    ISSN (online) 1437-1596
    ISSN (print) 0937-9827
    DOI 10.1007/s00414-024-03167-6
    Database MEDical Literature Analysis and Retrieval System OnLINE
