LIVIVO - The Search Portal for Life Sciences


Search results

Results 1-10 of 18


  1. Article ; Online: PCNet: Prior Category Network for CT Universal Segmentation Model.

    Chen, Yixin / Gao, Yajuan / Zhu, Lei / Shao, Wenrui / Lu, Yanye / Han, Hongbin / Xie, Zhaoheng

    IEEE transactions on medical imaging

    2024  Volume PP

    Abstract Accurate segmentation of anatomical structures in Computed Tomography (CT) images is crucial for clinical diagnosis, treatment planning, and disease monitoring. Current deep learning segmentation methods are hindered by factors such as data scale and model size. Inspired by how doctors identify tissues, we propose a novel approach, the Prior Category Network (PCNet), that boosts segmentation performance by leveraging prior knowledge between different categories of anatomical structures. Our PCNet comprises three key components: prior category prompt (PCP), hierarchy category system (HCS), and hierarchy category loss (HCL). PCP utilizes Contrastive Language-Image Pretraining (CLIP), along with attention modules, to systematically define the relationships between anatomical categories as identified by clinicians. HCS guides the segmentation model in distinguishing between specific organs, anatomical structures, and functional systems through hierarchical relationships. HCL serves as a consistency constraint, fortifying the directional guidance provided by HCS to enhance the segmentation model's accuracy and robustness. We conducted extensive experiments to validate the effectiveness of our approach, and the results indicate that PCNet can generate a high-performance, universal model for CT segmentation. The PCNet framework also demonstrates significant transferability on multiple downstream tasks. The ablation experiments show that the methodology employed in constructing the HCS is of critical importance. The prompt and HCS can be accessed at https://github.com/YixinChen-AI/PCNet.
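    The abstract describes HCL only as a consistency constraint between hierarchy levels; its exact form is not given. As a hedged illustration, the sketch below shows one plausible version, penalizing disagreement between a parent category's probability map and the sum of its children's maps. The category names (`kidney`, `bladder`, `urinary_system`) are hypothetical and not taken from the paper.

    ```python
    import numpy as np

    # Hypothetical two-level hierarchy: a functional-system category whose
    # probability map should agree with the union of its member organs.
    CHILDREN = {"urinary_system": ["kidney", "bladder"]}

    def hierarchy_consistency_loss(probs, children=CHILDREN):
        """Mean-squared disagreement between each parent map and the sum
        of its children's maps (one plausible consistency constraint)."""
        loss = 0.0
        for parent, kids in children.items():
            union = sum(probs[k] for k in kids)
            loss += float(((probs[parent] - union) ** 2).mean())
        return loss

    # Two voxels: the parent map equals the children's sum, so loss is zero.
    consistent = {
        "kidney": np.array([0.7, 0.0]),
        "bladder": np.array([0.0, 0.4]),
        "urinary_system": np.array([0.7, 0.4]),
    }
    inconsistent = dict(consistent, urinary_system=np.array([0.1, 0.9]))
    ```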
    Language English
    Publishing date 2024-04-30
    Publishing country United States
    Document type Journal Article
    ZDB-ID 622531-7
    ISSN (online) 1558-254X
    ISSN 0278-0062
    DOI 10.1109/TMI.2024.3395349
    Database MEDical Literature Analysis and Retrieval System OnLINE


  2. Article ; Online: Suppressing label noise in medical image classification using mixup attention and self-supervised learning.

    Gao, Mengdi / Jiang, Hongyang / Hu, Yan / Ren, Qiushi / Xie, Zhaoheng / Liu, Jiang

    Physics in medicine and biology

    2024  Volume 69, Issue 10

    Abstract Deep neural networks (DNNs) have been widely applied in medical image classification and achieve remarkable classification performance. These achievements heavily depend on large-scale, accurately annotated training data. However, label noise is inevitably introduced in medical image annotation, as the labeling process heavily relies on the expertise and experience of annotators. Meanwhile, DNNs suffer from overfitting noisy labels, degrading the performance of models. Therefore, in this work, we innovatively devise a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification. Specifically, we incorporate contrastive learning and intra-group mixup attention strategies into vanilla supervised learning. Contrastive learning for the feature extractor helps to enhance the visual representations of DNNs. The intra-group mixup attention module constructs groups and assigns self-attention weights to group-wise samples, and subsequently interpolates numerous noise-suppressed samples through a weighted mixup operation. We conduct comparative experiments on both synthetic and real-world noisy medical datasets under various noise levels. Rigorous experiments validate that our noise-robust method with contrastive learning and mixup attention can effectively handle label noise, and is superior to state-of-the-art methods. An ablation study also shows that both components contribute to boosting model performance. The proposed method demonstrates its capability of curbing label noise and has certain potential toward real-world clinical applications.
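    The abstract does not give the exact attention formulation, so the following is only a minimal numerical sketch of a weighted intra-group mixup, assuming (hypothetically) that each sample's weight comes from a softmax over its similarity to the group centroid, so that atypical, possibly mislabeled samples contribute less to the mixed sample.

    ```python
    import numpy as np

    def intra_group_mixup(features, labels, temperature=1.0):
        """Mix a group of samples into one noise-suppressed sample.

        features: (n, d) array; labels: (n, c) one-hot array.
        Weights are a softmax over similarity to the group centroid.
        """
        center = features.mean(axis=0)
        sims = features @ center / temperature         # similarity to centroid
        w = np.exp(sims - sims.max())
        w /= w.sum()                                   # attention weights, sum to 1
        mixed_x = (w[:, None] * features).sum(axis=0)  # weighted mixup of inputs
        mixed_y = (w[:, None] * labels).sum(axis=0)    # and of (soft) labels
        return mixed_x, mixed_y

    # toy group: two class-0 samples and one (likely mislabeled) class-1 sample
    x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
    y = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    mx, my = intra_group_mixup(x, y)
    ```

    The mixed soft label `my` leans toward class 0, since the outlier sample receives the smallest weight.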
    MeSH term(s) Supervised Machine Learning ; Image Processing, Computer-Assisted/methods ; Humans ; Signal-To-Noise Ratio ; Neural Networks, Computer ; Deep Learning ; Diagnostic Imaging
    Language English
    Publishing date 2024-05-08
    Publishing country England
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 208857-5
    ISSN (online) 1361-6560
    ISSN 0031-9155
    DOI 10.1088/1361-6560/ad4083
    Database MEDical Literature Analysis and Retrieval System OnLINE


  3. Article ; Online: Unsupervised deep learning framework for data-driven gating in positron emission tomography.

    Li, Tiantian / Xie, Zhaoheng / Qi, Wenyuan / Asma, Evren / Qi, Jinyi

    Medical physics

    2023  Volume 50, Issue 10, Page(s) 6047–6059

    Abstract Background: Physiological motion, such as respiratory motion, has become a limiting factor in the spatial resolution of positron emission tomography (PET) imaging as the resolution of PET detectors continues to improve. Motion-induced misregistration between PET and CT images can also cause attenuation correction artifacts. Respiratory gating can be used to freeze the motion and to reduce motion-induced artifacts.
    Purpose: In this study, we propose a robust data-driven approach using an unsupervised deep clustering network that employs an autoencoder (AE) to extract latent features for respiratory gating.
    Methods: We first divide list-mode PET data into short-time frames. The short-time frame images are reconstructed without attenuation, scatter, or randoms correction to avoid attenuation mismatch artifacts and to reduce image reconstruction time. The deep AE is then trained using reconstructed short-time frame images to extract latent features for respiratory gating. No additional data are required for the AE training. K-means clustering is subsequently used to perform respiratory gating based on the latent features extracted by the deep AE. The effectiveness of our proposed Deep Clustering method was evaluated using physical phantom and real patient datasets. The performance was compared against phase gating based on an external signal (External) and image-based principal component analysis (PCA) with K-means clustering (Image PCA).
    Results: The proposed method produced gated images with higher contrast and sharper myocardium boundaries than those obtained using the External gating method and Image PCA. Quantitatively, the gated images generated by the proposed Deep Clustering method showed larger center of mass (COM) displacement and higher lesion contrast than those obtained using the other two methods.
    Conclusions: The effectiveness of our proposed method was validated using physical phantom and real patient data. The results showed that our proposed framework provides better gating than the conventional External method and Image PCA.
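    As a rough illustration of the pipeline this abstract describes (short-time frames, latent features, then K-means gating), the sketch below replaces the trained autoencoder with a trivial stand-in feature (frame mean intensity) on synthetic data; the actual method learns latent features with a deep AE.

    ```python
    import numpy as np

    def kmeans(z, k, iters=50, seed=0):
        """Plain K-means on latent features z of shape (n, d); returns the
        gate (cluster) index assigned to each frame."""
        rng = np.random.default_rng(seed)
        centers = z[rng.choice(len(z), size=k, replace=False)].copy()
        assign = np.zeros(len(z), dtype=int)
        for _ in range(iters):
            dists = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            assign = dists.argmin(axis=1)
            for j in range(k):
                if (assign == j).any():
                    centers[j] = z[assign == j].mean(axis=0)
        return assign

    # stand-in for the trained AE: synthetic short-time frames whose mean
    # intensity oscillates with the breathing cycle, plus noise
    rng = np.random.default_rng(1)
    phase = np.sin(np.linspace(0.0, 4.0 * np.pi, 40))            # two breaths
    frames = phase[:, None] + 0.05 * rng.standard_normal((40, 16))
    latent = frames.mean(axis=1, keepdims=True)                  # (40, 1) "features"
    gates = kmeans(latent, k=4)                                  # 4 respiratory gates
    ```

    Frames at similar respiratory phases end up in the same gate, which is the property the gating step relies on.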
    Language English
    Publishing date 2023-08-04
    Publishing country United States
    Document type Journal Article
    ZDB-ID 188780-4
    ISSN (online) 2473-4209
    ISSN 0094-2405
    DOI 10.1002/mp.16642
    Database MEDical Literature Analysis and Retrieval System OnLINE


  4. Book ; Online: FP-PET

    Chen, Yixin / Fu, Ourui / Shao, Wenrui / Xie, Zhaoheng

    Large Model, Multiple Loss And Focused Practice

    2023  

    Abstract This study presents FP-PET, a comprehensive approach to medical image segmentation with a focus on CT and PET images. Utilizing a dataset from the AutoPet2023 Challenge, the research employs a variety of machine learning models, including STUNet-large, SwinUNETR, and VNet, to achieve state-of-the-art segmentation performance. The paper introduces an aggregated score that combines multiple evaluation metrics, such as the Dice score, false positive volume (FPV), and false negative volume (FNV), to provide a holistic measure of model effectiveness. The study also discusses the computational challenges and solutions related to model training, which was conducted on high-performance GPUs. Preprocessing and postprocessing techniques, including Gaussian weighting schemes and morphological operations, are explored to further refine the segmentation output. The research offers valuable insights into the challenges and solutions for advanced medical image segmentation.
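    The exact aggregation weights of the score are not given in this abstract; the snippet below only sketches the three underlying metrics (Dice score, FPV, FNV) with their standard definitions on boolean masks.

    ```python
    import numpy as np

    def dice_score(pred, gt):
        """Dice overlap between boolean masks (1.0 when both are empty)."""
        denom = pred.sum() + gt.sum()
        return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

    def fp_volume(pred, gt, voxel_ml=1.0):
        """False positive volume: predicted lesion voxels outside the truth."""
        return np.logical_and(pred, ~gt).sum() * voxel_ml

    def fn_volume(pred, gt, voxel_ml=1.0):
        """False negative volume: true lesion voxels the prediction missed."""
        return np.logical_and(~pred, gt).sum() * voxel_ml

    gt = np.zeros((4, 4), bool);   gt[1:3, 1:4] = True    # 6-voxel lesion
    pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True  # 4 voxels, all inside
    ```

    Here the prediction under-segments: Dice is 0.8, FPV is 0, and FNV is 2 voxel volumes.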
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-09-22
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  5. Book ; Online: Ultrasonic Image's Annotation Removal

    Zhang, Yuanheng / Jiang, Nan / Xie, Zhaoheng / Cao, Junying / Teng, Yueyang

    A Self-supervised Noise2Noise Approach

    2023  

    Abstract Accurately annotated ultrasonic images are vital components of a high-quality medical report. Hospitals often have strict guidelines on the types of annotations that should appear on imaging results. However, manually inspecting these images can be a cumbersome task. While a neural network could potentially automate the process, training such a model typically requires a dataset of paired input and target images, which in turn involves significant human labour. This study introduces an automated approach for detecting annotations in images. This is achieved by treating the annotations as noise, creating a self-supervised pretext task and using a model trained under the Noise2Noise scheme to restore the image to a clean state. We tested a variety of model structures on the denoising task against different types of annotation, including body marker annotation, radial line annotation, etc. Our results demonstrate that most models trained under the Noise2Noise scheme outperformed their counterparts trained with noisy-clean data pairs. The customized U-Net yielded the best results on the body marker annotation dataset, with high scores on segmentation precision and reconstruction similarity. We released our code at https://github.com/GrandArth/UltrasonicImage-N2N-Approach.
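    A minimal sketch of the Noise2Noise-style pretext task this abstract describes: two independently "annotated" versions of the same underlying image serve as input and target, so the denoiser never needs a clean target during training. The single-pixel marker below is a hypothetical placeholder for real annotation overlays.

    ```python
    import numpy as np

    def stamp_annotation(img, rng):
        """Stamp one random bright marker (the 'noise' here) onto a copy."""
        out = img.copy()
        r, c = rng.integers(0, img.shape[0]), rng.integers(0, img.shape[1])
        out[r, c] = 1.0
        return out

    def noise2noise_pair(clean, rng):
        """Two independently annotated versions of the same underlying image;
        a model trained to map one onto the other never sees a clean target."""
        return stamp_annotation(clean, rng), stamp_annotation(clean, rng)

    rng = np.random.default_rng(0)
    clean = np.zeros((8, 8))
    src, tgt = noise2noise_pair(clean, rng)   # training input and target
    ```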

    Comment: 10 pages, 7 figures
    Keywords Electrical Engineering and Systems Science - Image and Video Processing ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2023-07-09
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  6. Article: Predicting ischemic stroke risk from atrial fibrillation based on multi-spectral fundus images using deep learning.

    Li, Hui / Gao, Mengdi / Song, Haiqing / Wu, Xiao / Li, Gang / Cui, Yiwei / Li, Yang / Xie, Zhaoheng / Ren, Qiushi / Zhang, Haitao

    Frontiers in cardiovascular medicine

    2023  Volume 10, Page(s) 1185890

    Abstract Background: Ischemic stroke (IS) is one of the most common serious secondary diseases of atrial fibrillation (AF) within 1 year after its occurrence, both of which have manifestations of ischemia and hypoxia of the small vessels in the early phase of the condition. The fundus is a collection of capillaries, while the retina responds differently to light of different wavelengths. Predicting the risk of IS occurring secondary to AF, based on subtle differences in fundus images of different wavelengths, is yet to be explored. This study was conducted to predict the risk of IS occurring secondary to AF based on multi-spectrum fundus images using deep learning.
    Methods: A total of 150 AF participants who did not suffer IS within 1 year after discharge and 100 IS participants with persistent arrhythmia symptoms or a history of AF diagnosis in the last year (defined as patients who would develop IS within 1 year after AF, based on fundus pathological manifestations generally preceding brain symptoms) were recruited. Fundus images at 548, 605, and 810 nm wavelengths were collected. Three classical deep neural network (DNN) models (Inception V3, ResNet50, SE50) were trained. Sociodemographic and selected routine clinical data were obtained.
    Results: The accuracy of all DNNs with the single-spectral or multi-spectral combination images at the three wavelengths as input reached above 78%. The IS detection performance of DNNs with 605 nm spectral images as input was relatively more stable than with the other wavelengths. The multi-spectral combination models achieved higher area under the curve (AUC) scores than the single-spectral models.
    Conclusions: The probability of IS secondary to AF could be predicted based on multi-spectrum fundus images using deep learning, and combinations of multi-spectrum images improved the performance of DNNs. Acquiring different spectral fundus images is advantageous for the early prevention of cardiovascular and cerebrovascular diseases. The method in this study is a useful preliminary exploration for diseases whose onset time is difficult to predict, such as IS.
    Language English
    Publishing date 2023-08-01
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2781496-8
    ISSN 2297-055X
    DOI 10.3389/fcvm.2023.1185890
    Database MEDical Literature Analysis and Retrieval System OnLINE


  7. Book ; Online: Label-noise-tolerant medical image classification via self-attention and self-supervised learning

    Jiang, Hongyang / Gao, Mengdi / Hu, Yan / Ren, Qiushi / Xie, Zhaoheng / Liu, Jiang

    2023  

    Abstract Deep neural networks (DNNs) have been widely applied in medical image classification and achieve remarkable classification performance. These achievements heavily depend on large-scale, accurately annotated training data. However, label noise is inevitably introduced in medical image annotation, as the labeling process heavily relies on the expertise and experience of annotators. Meanwhile, DNNs suffer from overfitting noisy labels, degrading the performance of models. Therefore, in this work, we innovatively devise a noise-robust training approach to mitigate the adverse effects of noisy labels in medical image classification. Specifically, we incorporate contrastive learning and intra-group attention mixup strategies into vanilla supervised learning. Contrastive learning for the feature extractor helps to enhance the visual representations of DNNs. The intra-group attention mixup module constructs groups and assigns self-attention weights to group-wise samples, and subsequently interpolates numerous noise-suppressed samples through a weighted mixup operation. We conduct comparative experiments on both synthetic and real-world noisy medical datasets under various noise levels. Rigorous experiments validate that our noise-robust method with contrastive learning and attention mixup can effectively handle label noise, and is superior to state-of-the-art methods. An ablation study also shows that both components contribute to boosting model performance. The proposed method demonstrates its capability of curbing label noise and has certain potential toward real-world clinical applications.

    Comment: 11 pages, 8 figures
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Artificial Intelligence
    Subject code 006
    Publishing date 2023-06-16
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  8. Article ; Online: Feasibility of PET-enabled dual-energy CT imaging: First physical phantom and patient results.

    Zhu, Yansong / Li, Siqi / Xie, Zhaoheng / Leung, Edwin K / Bayerlein, Reimund / Omidvari, Negar / Cherry, Simon R / Qi, Jinyi / Badawi, Ramsey D / Spencer, Benjamin A / Wang, Guobao

    ArXiv

    2024  

    Abstract X-ray computed tomography (CT) in PET/CT is commonly operated with a single energy, resulting in a lack of tissue composition information. Dual-energy (DE) spectral CT enables material decomposition by using two different x-ray energies and may be combined with PET for improved multimodality imaging, but would either require a hardware upgrade or increase radiation dose due to the added second x-ray CT scan. The recently proposed PET-enabled DECT method allows dual-energy spectral imaging using a conventional PET/CT scanner without the need for a second x-ray CT scan. A gamma-ray CT (gCT) image at 511 keV can be generated from the existing time-of-flight PET data with the maximum-likelihood attenuation and activity (MLAA) approach and is then combined with the low-energy x-ray CT image to form dual-energy spectral imaging. To improve the image quality of gCT, a kernel MLAA method was further proposed that incorporates the x-ray CT image as prior information. The concept of this PET-enabled DECT has been validated using simulation studies, but not yet with 3D real data. In this work, we developed a general open-source implementation for gCT reconstruction from PET data and used this implementation for the first real-data validation, with both a physical phantom study and a human subject study on a uEXPLORER total-body PET/CT system. These results demonstrate the feasibility of this method for spectral imaging and material decomposition.
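    The material decomposition step this abstract describes (combining the 511 keV gCT image with the low-energy x-ray CT image) reduces, per voxel, to solving a small linear system in a two-material basis. The attenuation coefficients below are illustrative placeholders, not measured values, and the water/bone basis is an assumption for the sketch.

    ```python
    import numpy as np

    # Illustrative (not measured) linear attenuation coefficients in 1/cm for
    # a water/bone basis at a low x-ray energy and at the 511 keV gCT energy.
    A = np.array([[0.20, 0.40],    # low x-ray energy: [water, bone]
                  [0.096, 0.17]])  # 511 keV:          [water, bone]

    def decompose(mu_xray, mu_511):
        """Solve the 2x2 system for basis-material fractions in one voxel."""
        return np.linalg.solve(A, np.array([mu_xray, mu_511]))

    f_true = np.array([0.7, 0.3])   # a voxel of 70% water, 30% bone
    mu = A @ f_true                 # the two "measured" attenuation values
    f_est = decompose(*mu)          # recovers the material fractions
    ```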
    Language English
    Publishing date 2024-04-12
    Publishing country United States
    Document type Preprint
    ISSN 2331-8422
    ISSN (online) 2331-8422
    Database MEDical Literature Analysis and Retrieval System OnLINE


  9. Article ; Online: Anatomically aided PET image reconstruction using deep neural networks.

    Xie, Zhaoheng / Li, Tiantian / Zhang, Xuezhu / Qi, Wenyuan / Asma, Evren / Qi, Jinyi

    Medical physics

    2021  Volume 48, Issue 9, Page(s) 5244–5258

    Abstract Purpose: The developments of PET/CT and PET/MR scanners provide opportunities for improving PET image quality by using anatomical information. In this paper, we propose a novel co-learning three-dimensional (3D) convolutional neural network (CNN) to extract modality-specific features from PET/CT image pairs and integrate complementary features into an iterative reconstruction framework to improve PET image reconstruction.
    Methods: We used a pretrained deep neural network to represent PET images. The network was trained using low-count PET and CT image pairs as inputs and high-count PET images as labels. This network was then incorporated into a constrained maximum likelihood framework to regularize PET image reconstruction. Two different network structures were investigated for the integration of anatomical information from CT images. One was a multichannel CNN, which treated PET and CT volumes as separate channels of the input. The other was a multibranch CNN, which implemented separate encoders for PET and CT images to extract latent features and fed the combined latent features into a decoder. Using computer-based Monte Carlo simulations and two real patient datasets, the proposed method was compared with existing methods, including maximum likelihood expectation maximization (MLEM) reconstruction, a kernel-based reconstruction, and a CNN-based deep penalty method with and without anatomical guidance.
    Results: Reconstructed images showed that the proposed constrained ML reconstruction approach produced higher quality images than the competing methods. The tumors in the lung region had higher contrast in the proposed constrained ML reconstruction than in the CNN-based deep penalty reconstruction. The image quality was further improved by incorporating the anatomical information. Moreover, the liver standard deviation was lower in the proposed approach than in all the competing methods at a matched lesion contrast.
    Conclusions: The supervised co-learning strategy can improve the performance of constrained maximum likelihood reconstruction. Compared with existing techniques, the proposed method produced a better lesion contrast versus background standard deviation trade-off curve, which can potentially improve lesion detection.
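    The two fusion strategies compared in this abstract can be sketched in a few lines; the toy encoder below is only a stand-in for the CNN branches, included to make the tensor shapes concrete.

    ```python
    import numpy as np

    def multichannel_input(pet, ct):
        """Early fusion: PET and CT volumes become two channels of one input."""
        return np.stack([pet, ct], axis=0)

    def multibranch_latent(pet, ct, encode_pet, encode_ct):
        """Late fusion: modality-specific encoders, concatenated latent codes
        (a decoder would then map the combined code back to an image)."""
        return np.concatenate([encode_pet(pet), encode_ct(ct)])

    pet = np.random.default_rng(0).random((4, 4, 4))
    ct = np.random.default_rng(1).random((4, 4, 4))

    x = multichannel_input(pet, ct)               # shape (2, 4, 4, 4)
    toy_encoder = lambda v: v.mean(axis=(1, 2))   # stand-in for a CNN branch
    z = multibranch_latent(pet, ct, toy_encoder, toy_encoder)
    ```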
    MeSH term(s) Humans ; Image Processing, Computer-Assisted ; Neural Networks, Computer ; Positron Emission Tomography Computed Tomography ; Positron-Emission Tomography ; Tomography, X-Ray Computed
    Language English
    Publishing date 2021-07-28
    Publishing country United States
    Document type Journal Article
    ZDB-ID 188780-4
    ISSN (online) 2473-4209
    ISSN 0094-2405
    DOI 10.1002/mp.15051
    Database MEDical Literature Analysis and Retrieval System OnLINE


  10. Article: A Hybrid Imaging Platform (CT/PET/FMI) for Evaluating Tumor Necrosis and Apoptosis in Real-Time.

    Kang, Yulin / Zhai, Xiaohui / Lu, Sifen / Vuletic, Ivan / Wang, Lin / Zhou, Kun / Peng, Zhiqiang / Ren, Qiushi / Xie, Zhaoheng

    Frontiers in oncology

    2022  Volume 12, Page(s) 772392

    Abstract Multimodality imaging is an advanced imaging tool for monitoring tumor behavior and therapy
    Language English
    Publishing date 2022-06-22
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2649216-7
    ISSN 2234-943X
    DOI 10.3389/fonc.2022.772392
    Database MEDical Literature Analysis and Retrieval System OnLINE

