LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 4 of 4

  1. Article ; Online: CXR-Net: A Multitask Deep Learning Network for Explainable and Accurate Diagnosis of COVID-19 Pneumonia From Chest X-Ray Images.

    Zhang, Xin / Han, Liangxiu / Sobeih, Tam / Han, Lianghao / Dempsey, Nina / Lechareas, Symeon / Tridente, Ascanio / Chen, Haoming / White, Stephen / Zhang, Daoqiang

    IEEE journal of biomedical and health informatics

    2023  Volume 27, Issue 2, Page(s) 980–991

    Abstract Accurate and rapid detection of COVID-19 pneumonia is crucial for optimal patient treatment. Chest X-Ray (CXR) is the first-line imaging technique for COVID-19 pneumonia diagnosis as it is fast, cheap and easily accessible. Currently, many deep learning (DL) models have been proposed to detect COVID-19 pneumonia from CXR images. Unfortunately, these deep classifiers lack transparency in interpreting findings, which may limit their applications in clinical practice. The existing explanation methods produce results that are either too noisy or imprecise, and hence are unsuitable for diagnostic purposes. In this work, we propose a novel explainable CXR deep neural network (CXR-Net) for accurate COVID-19 pneumonia detection with an enhanced pixel-level visual explanation using CXR images. An Encoder-Decoder-Encoder architecture is proposed, in which an extra encoder is added after the encoder-decoder structure to ensure the model can be trained on category samples. The method has been evaluated on real-world CXR datasets from both public and private sources, including healthy, bacterial pneumonia, viral pneumonia and COVID-19 pneumonia cases. The results demonstrate that the proposed method achieves satisfactory accuracy and provides fine-resolution activation maps for visual explanation in lung disease detection. Compared to current state-of-the-art visual explanation methods, the proposed method provides more detailed, high-resolution visual explanations for the classification results. It can be deployed in various computing environments, including cloud, CPU and GPU environments, and has great potential to be used in clinical practice for COVID-19 pneumonia diagnosis.
    MeSH term(s) Humans ; COVID-19/diagnostic imaging ; Deep Learning ; X-Rays ; Thorax/diagnostic imaging ; Pneumonia, Viral/diagnostic imaging ; COVID-19 Testing
    Language English
    Publishing date 2023-02-03
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 2695320-1
    ISSN (online) 2168-2208
    ISSN 2168-2194
    DOI 10.1109/JBHI.2022.3220813
    Database MEDical Literature Analysis and Retrieval System OnLINE
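    The abstract above describes an Encoder-Decoder-Encoder multitask design: one encoder extracts features from the CXR image, a decoder upsamples them into a pixel-level activation map that serves as the visual explanation, and a second encoder re-encodes that map for the class decision. The PyTorch code below is only a minimal sketch of this idea; the layer sizes, channel counts and the four-class setting (healthy, bacterial, viral, COVID-19 pneumonia) are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of an encoder-decoder-encoder multitask network (assumed sizes).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class CXRNetSketch(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        # First encoder: CXR image -> low-resolution feature map.
        self.encoder1 = nn.Sequential(
            conv_block(1, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
        )
        # Decoder: features -> full-resolution activation map (the visual explanation).
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(64, 32),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),
            nn.Sigmoid(),
        )
        # Second encoder: activation map -> class prediction, tying the
        # diagnosis to the highlighted pixels.
        self.encoder2 = nn.Sequential(
            conv_block(1, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        features = self.encoder1(x)
        activation_map = self.decoder(features)       # pixel-level explanation
        pooled = self.encoder2(activation_map).flatten(1)
        logits = self.classifier(pooled)
        return logits, activation_map

if __name__ == "__main__":
    model = CXRNetSketch()
    cxr = torch.randn(2, 1, 256, 256)                 # batch of grayscale CXR images
    logits, amap = model(cxr)
    print(logits.shape, amap.shape)                   # (2, 4), (2, 1, 256, 256)
```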

  2. Article: The Self-Supervised Spectral–Spatial Vision Transformer Network for Accurate Prediction of Wheat Nitrogen Status from UAV Imagery

    Zhang, Xin / Han, Liangxiu / Sobeih, Tam / Lappin, Lewis / Lee, Mark A. / Howard, Andrew / Kisdi, Aron

    Remote Sensing. 2022 Mar. 14, v. 14, no. 6

    2022  

    Abstract Nitrogen (N) fertilizer is routinely applied by farmers to increase crop yields. At present, farmers often over-apply N fertilizer in some locations or at certain times because they do not have high-resolution crop N status data. N-use efficiency can be low, with the remaining N lost to the environment, resulting in higher production costs and environmental pollution. Accurate and timely estimation of N status in crops is crucial to improving cropping systems’ economic and environmental sustainability. Destructive approaches based on plant tissue analysis are time-consuming and impractical over large fields. Recent advances in remote sensing and deep learning have shown promise in addressing the aforementioned challenges in a non-destructive way. In this work, we propose a novel deep learning framework: a self-supervised spectral–spatial attention-based vision transformer (SSVT). The proposed SSVT introduces a Spectral Attention Block (SAB) and a Spatial Interaction Block (SIB), which allow for simultaneous learning of both spatial and spectral features from UAV digital aerial imagery, for accurate N status prediction in wheat fields. Moreover, the proposed framework introduces local-to-global self-supervised learning to help train the model from unlabelled data. The proposed SSVT has been compared with five state-of-the-art models, including ResNet, RegNet, EfficientNet, EfficientNetV2, and the original vision transformer, on both testing and independent datasets. The proposed approach achieved high accuracy (0.96) with good generalizability and reproducibility for wheat N status estimation.
    Keywords data collection ; environmental sustainability ; models ; nitrogen ; nitrogen fertilizers ; nutrient use efficiency ; plant tissues ; pollution ; prediction ; remote sensing ; tissue analysis ; vision ; wheat
    Language English
    Dates of publication 2022-03-14
    Publishing place Multidisciplinary Digital Publishing Institute
    Document type Article
    ZDB-ID 2513863-7
    ISSN 2072-4292
    DOI 10.3390/rs14061400
    Database NAL-Catalogue (AGRICOLA)
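    The abstract above names two components: a Spectral Attention Block (SAB) that re-weights spectral bands, and a Spatial Interaction Block (SIB) that lets image patches exchange information. The PyTorch sketch below is a rough approximation of those two ideas only; the band count, patch size, embedding width, attention formulation and the three assumed N-status classes are illustrative choices, not the published SSVT configuration, and the local-to-global self-supervised pre-training is not shown.

```python
# Hedged sketch of spectral attention + spatial (patch) interaction for UAV imagery.
import torch
import torch.nn as nn

class SpectralAttentionBlock(nn.Module):
    """Squeeze-and-excitation-style re-weighting over spectral bands (assumed form)."""
    def __init__(self, bands):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(bands, bands), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, bands, H, W)
        weights = self.gate(x).unsqueeze(-1).unsqueeze(-1)
        return x * weights                       # per-band importance

class SpatialInteractionBlock(nn.Module):
    """Self-attention over non-overlapping image patches (tokens)."""
    def __init__(self, bands, patch=16, dim=64, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(bands, dim, kernel_size=patch, stride=patch)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                        # x: (B, bands, H, W)
        tokens = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + attended)

class SSVTSketch(nn.Module):
    def __init__(self, bands=5, num_classes=3):
        super().__init__()
        self.sab = SpectralAttentionBlock(bands)
        self.sib = SpatialInteractionBlock(bands)
        self.head = nn.Linear(64, num_classes)   # e.g. low / optimal / high N status (assumed)

    def forward(self, x):
        tokens = self.sib(self.sab(x))
        return self.head(tokens.mean(dim=1))     # pool tokens, then classify

if __name__ == "__main__":
    uav_patch = torch.randn(2, 5, 128, 128)      # 5 assumed spectral bands
    print(SSVTSketch()(uav_patch).shape)         # (2, 3)
```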

  3. Book ; Online: CXR-Net

    Zhang, Xin / Han, Liangxiu / Sobeih, Tam / Han, Lianghao / Dempsey, Nina / Lechareas, Symeon / Tridente, Ascanio / Chen, Haoming / White, Stephen

    An Encoder-Decoder-Encoder Multitask Deep Neural Network for Explainable and Accurate Diagnosis of COVID-19 pneumonia with Chest X-ray Images

    2021  

    Abstract Accurate and rapid detection of COVID-19 pneumonia is crucial for optimal patient treatment. Chest X-Ray (CXR) is the first-line imaging test for COVID-19 pneumonia diagnosis as it is fast, cheap and easily accessible. Inspired by the success of deep learning (DL) in computer vision, many DL-models have been proposed to detect COVID-19 pneumonia using CXR images. Unfortunately, these deep classifiers lack transparency in interpreting findings, which may limit their applications in clinical practice. The existing commonly used visual explanation methods are either too noisy or imprecise, with low resolution, and hence are unsuitable for diagnostic purposes. In this work, we propose a novel explainable deep learning framework (CXRNet) for accurate COVID-19 pneumonia detection with an enhanced pixel-level visual explanation from CXR images. The proposed framework is based on a new Encoder-Decoder-Encoder multitask architecture, allowing for both disease classification and visual explanation. The method has been evaluated on real-world CXR datasets from both public and private data sources, including healthy, bacterial pneumonia, viral pneumonia and COVID-19 pneumonia cases. The experimental results demonstrate that the proposed method can achieve a satisfactory level of accuracy and provide fine-resolution classification activation maps for visual explanation in lung disease detection. The average accuracy, precision, recall and F1-score for COVID-19 pneumonia reached 0.879, 0.985, 0.992 and 0.989, respectively. We have also found that using lung-segmented CXR images can help improve the performance of the model. The proposed method can provide more detailed, high-resolution visual explanations for the classification decision than current state-of-the-art visual explanation methods, and has great potential to be used in clinical practice for COVID-19 pneumonia diagnosis.
    Keywords Electrical Engineering and Systems Science - Image and Video Processing ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2021-10-20
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
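    This record is the preprint of the CXR-Net article in result 1, and its abstract stresses the multitask aspect: one output for disease classification and one for the pixel-level explanation. The snippet below sketches what a combined training step could look like for any model that returns (logits, activation_map), such as the CXRNetSketch class shown under result 1. The loss choices, the alpha weighting factor and the use of lung masks as the explanation target are assumptions for illustration, not the authors' training procedure.

```python
# Assumed multitask training step: classification loss + explanation-map loss.
import torch
import torch.nn as nn

def multitask_training_step(model, optimizer, images, labels, masks, alpha=0.5):
    """One illustrative training step for a classification + explanation model.

    images: (B, 1, H, W) chest X-rays
    labels: (B,) integer class ids
    masks:  (B, 1, H, W) target maps in [0, 1] (e.g. lung or lesion masks, assumed)
    """
    model.train()
    optimizer.zero_grad()
    logits, activation_map = model(images)                                 # two task outputs
    cls_loss = nn.functional.cross_entropy(logits, labels)                 # diagnosis task
    map_loss = nn.functional.binary_cross_entropy(activation_map, masks)   # explanation task
    loss = cls_loss + alpha * map_loss                                     # weighted multitask loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

    The weighting between the two terms trades classification accuracy against the sharpness of the explanation maps; the value of alpha here is purely a placeholder.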

  4. Article: A Deep Learning-Based Approach for Automated Yellow Rust Disease Detection from High-Resolution Hyperspectral UAV Images

    Zhang, Xin / Han, Liangxiu / Dong, Yingying / Shi, Yue / Huang, Wenjiang / Han, Lianghao / González-Moreno, Pablo / Ma, Huiqin / Ye, Huichun / Sobeih, Tam

    Remote Sensing. 2019 June 29, v. 11, no. 13

    2019  

    Abstract Yellow rust in winter wheat is a widespread and serious fungal disease, resulting in significant yield losses globally. Effective monitoring and accurate detection of yellow rust are crucial to ensure stable and reliable wheat production and food security. The existing standard methods often rely on manual inspection of disease symptoms in a small crop area by agronomists or trained surveyors. This is costly, time-consuming and prone to error due to the subjectivity of surveyors. Recent advances in unmanned aerial vehicles (UAVs) mounted with hyperspectral image sensors have the potential to address these issues with low cost and high efficiency. This work proposed a new deep convolutional neural network (DCNN)-based approach for automated crop disease detection using very high spatial resolution hyperspectral images captured with UAVs. The proposed model introduced multiple Inception-ResNet layers for feature extraction and was optimized to establish the most suitable depth and width of the network. Benefiting from the ability of convolution layers to handle three-dimensional data, the model used both spatial and spectral information for yellow rust detection. The model was calibrated with hyperspectral imagery collected by UAVs on five different dates across a whole crop cycle over a well-controlled field experiment with healthy and rust-infected wheat plots. Its performance was compared across sampling dates and with random forest, a representative of traditional classification methods in which only spectral information was used. It was found that the method has high performance across the whole growing cycle, particularly at the late stages of disease spread. The overall accuracy of the proposed model (0.85) was higher than that of the random forest classifier (0.77). These results showed that combining both spectral and spatial information is a suitable approach to improving the accuracy of crop disease detection with high-resolution UAV hyperspectral images.
    Keywords agronomists ; automation ; crop production ; disease detection ; field experimentation ; food security ; fungi ; hyperspectral imagery ; models ; monitoring ; remote sensing ; spatial data ; stripe rust ; unmanned aerial vehicles ; winter wheat
    Language English
    Dates of publication 2019-06-29
    Publishing place Multidisciplinary Digital Publishing Institute
    Document type Article
    ZDB-ID 2513863-7
    ISSN 2072-4292
    DOI 10.3390/rs11131554
    Database NAL-Catalogue (AGRICOLA)
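    The abstract above describes a DCNN built from Inception-ResNet-style layers that consumes hyperspectral patches, so spatial context and spectral bands are mixed by the same convolutions. The PyTorch sketch below illustrates that pattern only; the band count (125), branch widths, block depth and the binary healthy-vs.-rust output are assumptions, not the calibrated network from the paper.

```python
# Hedged sketch of an Inception-ResNet-style block over hyperspectral input.
import torch
import torch.nn as nn

class InceptionResBlockSketch(nn.Module):
    """Parallel 1x1 and 3x3 branches merged and added back via a residual connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.Conv2d(channels // 2, channels // 2, kernel_size=3, padding=1),
        )
        self.merge = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        mixed = torch.cat([self.branch1(x), self.branch3(x)], dim=1)
        return self.act(x + self.merge(mixed))      # residual connection

class RustDetectorSketch(nn.Module):
    """Classifies hyperspectral patches as healthy vs. rust-infected (assumed setup)."""
    def __init__(self, bands=125, num_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(bands, 64, kernel_size=3, padding=1)   # spectral bands -> feature channels
        self.blocks = nn.Sequential(*[InceptionResBlockSketch(64) for _ in range(3)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):                           # x: (B, bands, H, W) hyperspectral patch
        return self.head(self.blocks(self.stem(x)))

if __name__ == "__main__":
    patch = torch.randn(2, 125, 64, 64)             # 125 assumed spectral bands
    print(RustDetectorSketch()(patch).shape)        # (2, 2)
```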
