LIVIVO - The Search Portal for Life Sciences



Search results

Hits 1 - 8 of 8 in total


  1. Book ; Online: Modality-aware Transformer for Time series Forecasting

    Emami, Hajar / Dang, Xuan-Hong / Shah, Yousaf / Zerfos, Petros

    2023  


    Abstract Time series forecasting presents a significant challenge, particularly when its accuracy relies on external data sources rather than solely on historical values. This issue is prevalent in the financial sector, where the future behavior of time series is often intricately linked to information derived from various textual reports and a multitude of economic indicators. In practice, the key challenge lies in constructing a reliable time series forecasting model capable of harnessing data from diverse sources and extracting valuable insights to predict the target time series accurately. In this work, we tackle this challenging problem and introduce a novel multimodal transformer-based model named the Modality-aware Transformer. Our model excels in exploring the power of both categorical text and numerical time series to forecast the target time series effectively while providing insights through its neural attention mechanism. To achieve this, we develop feature-level attention layers that encourage the model to focus on the most relevant features within each data modality. By incorporating the proposed feature-level attention, we develop a novel Intra-modal multi-head attention (MHA), Inter-modal MHA and Modality-target MHA in a way that both feature and temporal attentions are incorporated in MHAs. This enables the MHAs to generate temporal attentions with consideration of modality and feature importance which leads to more informative embeddings. The proposed modality-aware structure enables the model to effectively exploit information within each modality as well as foster cross-modal understanding. Our extensive experiments on financial datasets demonstrate that Modality-aware Transformer outperforms existing methods, offering a novel and practical solution to the complex challenges of multi-modality time series forecasting.
    Keywords Computer Science - Machine Learning
    Subject/Category (code) 330
    Publication date 2023-10-02
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences Selection)

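The feature-level attention the abstract above describes can be sketched as a minimal pure-Python toy (all names and shapes are illustrative assumptions, not the authors' implementation): learned relevance scores are softmax-normalized into weights that rescale one modality's features before any temporal attention is applied.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of relevance scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def feature_attention(features, scores):
    # Reweight each feature of one modality by its softmax-normalized
    # relevance, so downstream attention sees the most relevant features.
    return [w * f for w, f in zip(softmax(scores), features)]

# Toy example: the third feature has a much higher relevance score,
# so it dominates the reweighted representation.
out = feature_attention([1.0, 2.0, 3.0], [0.1, 0.1, 5.0])
```

In the paper this weighting feeds intra-modal, inter-modal, and modality-target multi-head attention; the sketch only shows the per-feature reweighting step.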

  2. Article: Attention-Guided Generative Adversarial Network to Address Atypical Anatomy in Synthetic CT Generation.

    Emami, Hajar / Dong, Ming / Glide-Hurst, Carri K

    2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science : IRI 2020 : proceedings : virtual conference, 11-13 August 2020. IEEE International Conference on Information Reuse and Integration (21st : 2...

    2020  Volume 2020, Page(s) 188–193


    Abstract Recently, interest in MR-only treatment planning using synthetic CTs (synCTs) has grown rapidly in radiation therapy. However, developing class solutions for medical images that contain atypical anatomy remains a major limitation. In this paper, we propose a novel spatial attention-guided generative adversarial network (attention-GAN) model to generate accurate synCTs using T1-weighted MRI images as the input to address atypical anatomy. Experimental results on fifteen brain cancer patients show that attention-GAN outperformed existing synCT models and achieved an average MAE of 85.22±12.08, 232.41±60.86, 246.38±42.67 Hounsfield units between synCT and CT-SIM across the entire head, bone and air regions, respectively. Qualitative analysis shows that attention-GAN has the ability to use spatially focused areas to better handle outliers, areas with complex anatomy or post-surgical regions, and thus offer strong potential for supporting near real-time MR-only treatment planning.
    Language English
    Publication date 2020-09-10
    Country of publication United States
    Document type Journal Article
    DOI 10.1109/iri49571.2020.00034
    Data source MEDical Literature Analysis and Retrieval System OnLINE


  3. Book ; Online: Attention-Guided Generative Adversarial Network to Address Atypical Anatomy in Modality Transfer

    Emami, Hajar / Dong, Ming / Glide-Hurst, Carri K.

    2020  


    Abstract Recently, interest in MR-only treatment planning using synthetic CTs (synCTs) has grown rapidly in radiation therapy. However, developing class solutions for medical images that contain atypical anatomy remains a major limitation. In this paper, we propose a novel spatial attention-guided generative adversarial network (attention-GAN) model to generate accurate synCTs using T1-weighted MRI images as the input to address atypical anatomy. Experimental results on fifteen brain cancer patients show that attention-GAN outperformed existing synCT models and achieved an average MAE of 85.22±12.08, 232.41±60.86, 246.38±42.67 Hounsfield units between synCT and CT-SIM across the entire head, bone and air regions, respectively. Qualitative analysis shows that attention-GAN has the ability to use spatially focused areas to better handle outliers, areas with complex anatomy or post-surgical regions, and thus offer strong potential for supporting near real-time MR-only treatment planning.

    Comment: IEEE 21st International Conference on Information Reuse and Integration for Data Science
    Keywords Electrical Engineering and Systems Science - Image and Video Processing ; Computer Science - Computer Vision and Pattern Recognition
    Subject/Category (code) 004
    Publication date 2020-06-26
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences Selection)

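The spatial-attention mechanism described in this paper can be caricatured in a few lines (hypothetical helper names; the actual mask is learned end-to-end inside the GAN): a discriminator activation map is rescaled to [0, 1] and used to reweight image regions.

```python
def attention_mask(activations):
    # Min-max rescale a 2-D activation map to [0, 1] so it can act as a
    # spatial attention mask (a toy stand-in for the learned mask).
    lo = min(min(row) for row in activations)
    hi = max(max(row) for row in activations)
    span = (hi - lo) or 1.0  # avoid division by zero on flat maps
    return [[(v - lo) / span for v in row] for row in activations]

def apply_attention(image, mask):
    # Emphasize the regions the mask marks as most informative.
    return [[p * m for p, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]
```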

  4. Book ; Online: FREA-Unet

    Emami, Hajar / Liu, Qiong / Dong, Ming

    Frequency-aware U-net for Modality Transfer

    2020  


    Abstract While positron emission tomography (PET) imaging has been widely used in the diagnosis of a number of diseases, it has a costly acquisition process that involves radiation exposure to patients. In contrast, magnetic resonance imaging (MRI) is a safer imaging modality that does not expose the patient to radiation. Therefore, a need exists for efficient and automated PET image generation from MRI data. In this paper, we propose a new frequency-aware attention U-net for generating synthetic PET images. Specifically, we incorporate an attention mechanism into the different U-net layers responsible for estimating low/high frequency scales of the image. Our frequency-aware attention U-net computes attention scores for feature maps in the low/high frequency layers and uses them to help the model focus on the most important regions, leading to more realistic output images. Experimental results on 30 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that the proposed model achieves superior performance in PET image synthesis, both qualitative and quantitative, over the current state of the art.
    Keywords Electrical Engineering and Systems Science - Image and Video Processing ; Computer Science - Computer Vision and Pattern Recognition
    Subject/Category (code) 006
    Publication date 2020-12-30
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences Selection)

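The low/high frequency split that FREA-Unet's layers attend to can be illustrated with a crude 1-D decomposition (illustrative only; the model operates on 2-D feature maps with learned attention): a moving average acts as a low-pass filter, and the residual carries the high frequencies.

```python
def box_blur_1d(signal, k=3):
    # Moving average over a window of k samples = crude low-pass filter.
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def split_frequencies(signal):
    # Low component: blurred signal; high component: what the blur removed.
    # The two always sum back to the original signal.
    low = box_blur_1d(signal)
    high = [s - l for s, l in zip(signal, low)]
    return low, high
```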

  5. Book ; Online: SA-GAN

    Emami, Hajar / Dong, Ming / Nejad-Davarani, Siamak / Glide-Hurst, Carri

    Structure-Aware GAN for Organ-Preserving Synthetic CT Generation

    2021  


    Abstract In medical image synthesis, model training could be challenging due to the inconsistencies between images of different modalities even with the same patient, typically caused by internal status/tissue changes, as different modalities are usually obtained at different times. This paper proposes a novel deep learning method, Structure-aware Generative Adversarial Network (SA-GAN), that preserves the shapes and locations of inconsistent structures when generating medical images. SA-GAN is employed to generate synthetic computed tomography (synCT) images from magnetic resonance imaging (MRI) with two parallel streams: the global stream translates the input from the MRI to the CT domain while the local stream automatically segments the inconsistent organs, maintains their locations and shapes in MRI, and translates the organ intensities to CT. Through extensive experiments on a pelvic dataset, we demonstrate that SA-GAN provides clinically acceptable accuracy on both synCTs and organ segmentation and supports MR-only treatment planning in disease sites with internal organ status changes.

    Comment: Accepted to MICCAI 2021
    Keywords Electrical Engineering and Systems Science - Image and Video Processing ; Computer Science - Computer Vision and Pattern Recognition
    Subject/Category (code) 004
    Publication date 2021-05-14
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences Selection)

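The two-stream idea in the abstract above — a global MRI-to-CT translation stream plus a local stream for segmented, inconsistent organs — can be sketched as a simple mask-based fusion (a hypothetical reduction; SA-GAN learns both streams jointly):

```python
def fuse_streams(global_out, local_out, organ_mask):
    # Inside segmented organ regions (mask == 1), take the local stream's
    # output; elsewhere keep the global translation.
    return [[l if m else g for g, l, m in zip(g_row, l_row, m_row)]
            for g_row, l_row, m_row in zip(global_out, local_out, organ_mask)]
```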

  6. Article ; Online: Performance of deep learning synthetic CTs for MR-only brain radiation therapy.

    Liu, Xiaoning / Emami, Hajar / Nejad-Davarani, Siamak P / Morris, Eric / Schultz, Lonni / Dong, Ming / K Glide-Hurst, Carri

    Journal of applied clinical medical physics

    2021  Volume 22, Issue 1, Page(s) 308–317


    Abstract Purpose: To evaluate the dosimetric and image-guided radiation therapy (IGRT) performance of a novel generative adversarial network (GAN) generated synthetic CT (synCT) in the brain and compare its performance for clinical use including conventional brain radiotherapy, cranial stereotactic radiosurgery (SRS), planar, and volumetric IGRT.
    Methods and materials: SynCT images for 12 brain cancer patients (6 SRS, 6 conventional) were generated from T1-weighted postgadolinium magnetic resonance (MR) images by applying a GAN model with a residual network (ResNet) generator and a convolutional neural network (CNN) with 5 convolutional layers as the discriminator that classified input images as real or synthetic. Following rigid registration, clinical structures and treatment plans derived from simulation CT (simCT) images were transferred to synCTs. Dose was recalculated for 15 simCT/synCT plan pairs using fixed monitor units. Two-dimensional (2D) gamma analysis (2%/2 mm, 1%/1 mm) was performed to compare dose distributions at isocenter. Dose-volume histogram (DVH) metrics (D
    Results: Average gamma passing rates at 1%/1 mm and 2%/2 mm were 99.0 ± 1.5% and 99.9 ± 0.2%, respectively. Excellent agreement in DVH metrics was observed (mean difference ≤0.10 ± 0.04 Gy for targets, 0.13 ± 0.04 Gy for OARs). The population-averaged mean difference in CBCT-synCT registrations was <0.2 mm and 0.1 degree different from simCT-based registrations. The mean difference between kV-synCT DRR and kV-simCT DRR registrations was <0.5 mm with no statistically significant differences observed (P > 0.05). An outlier with a large resection cavity exhibited the worst-case scenario.
    Conclusion: Brain GAN synCTs demonstrated excellent performance for dosimetric and IGRT endpoints, offering potential use in high precision brain cancer therapy.
    MeSH term(s) Brain/diagnostic imaging ; Brain/surgery ; Deep Learning ; Humans ; Radiotherapy Dosage ; Radiotherapy Planning, Computer-Assisted ; Radiotherapy, Image-Guided
    Language English
    Publication date 2021-01-07
    Country of publication United States
    Document type Journal Article
    ZDB-ID 2010347-5
    ISSN (online) 1526-9914
    DOI 10.1002/acm2.13139
    Data source MEDical Literature Analysis and Retrieval System OnLINE

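The 2%/2 mm gamma analysis reported in this study compares a reference and an evaluated dose distribution point by point; a simplified 1-D version (global dose normalization, exhaustive search; the function name and the 1-D restriction are assumptions for illustration) looks like this:

```python
import math

def gamma_pass_rate(ref, evl, spacing_mm, dose_pct, dta_mm):
    # For each reference point, find the evaluated point minimizing the
    # combined dose-difference / distance-to-agreement deviation; the
    # point passes if that minimum gamma is <= 1.
    dose_tol = dose_pct / 100.0 * max(ref)  # global dose normalization
    passed = 0
    for i, r in enumerate(ref):
        gamma = min(
            math.sqrt(((e - r) / dose_tol) ** 2
                      + ((j - i) * spacing_mm / dta_mm) ** 2)
            for j, e in enumerate(evl)
        )
        if gamma <= 1.0:
            passed += 1
    return passed / len(ref)
```

Identical distributions give a 100% passing rate; the study's 99.9 ± 0.2% at 2%/2 mm indicates near-identical dose distributions.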

  7. Book ; Online: SPA-GAN

    Emami, Hajar / Aliabadi, Majid Moradi / Dong, Ming / Chinnam, Ratna Babu

    Spatial Attention GAN for Image-to-Image Translation

    2019  


    Abstract Image-to-image translation learns a mapping between images from a source domain and images from a target domain. In this paper, we introduce the attention mechanism directly into the generative adversarial network (GAN) architecture and propose a novel spatial attention GAN model (SPA-GAN) for image-to-image translation tasks. SPA-GAN computes the attention in its discriminator and uses it to help the generator focus more on the most discriminative regions between the source and target domains, leading to more realistic output images. We also find it helpful to introduce an additional feature map loss in SPA-GAN training to preserve domain-specific features during translation. Compared with existing attention-guided GAN models, SPA-GAN is a lightweight model that does not need additional attention networks or supervision. Qualitative and quantitative comparison against state-of-the-art methods on benchmark datasets demonstrates the superior performance of SPA-GAN.

    Comment: IEEE Transactions on Multimedia, Digital Object Identifier: 10.1109/TMM.2020.2975961
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject/Category (code) 004
    Publication date 2019-08-19
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences Selection)


  8. Article ; Online: Generating synthetic CTs from magnetic resonance images using generative adversarial networks.

    Emami, Hajar / Dong, Ming / Nejad-Davarani, Siamak P / Glide-Hurst, Carri K

    Medical physics

    2018  


    Abstract Purpose: While MR-only treatment planning using synthetic CTs (synCTs) offers potential for streamlining clinical workflow, a need exists for an efficient and automated synCT generation in the brain to facilitate near real-time MR-only planning. This work describes a novel method for generating brain synCTs based on generative adversarial networks (GANs), a deep learning model that trains two competing networks simultaneously, and compares it to a deep convolutional neural network (CNN).
    Methods: Post-Gadolinium T1-Weighted and CT-SIM images from fifteen brain cancer patients were retrospectively analyzed. The GAN model was developed to generate synCTs using T1-weighted MRI images as the input using a residual network (ResNet) as the generator. The discriminator is a CNN with five convolutional layers that classified the input image as real or synthetic. Fivefold cross-validation was performed to validate our model. GAN performance was compared to CNN based on mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics between the synCT and CT images.
    Results: GAN training took ~11 h with a new case testing time of 5.7 ± 0.6 s. For GAN, MAEs between synCT and CT-SIM were 89.3 ± 10.3 Hounsfield units (HU) and 41.9 ± 8.6 HU across the entire FOV and tissues, respectively. However, MAE in the bone and air was, on average, ~240-255 HU. By comparison, the CNN model had an average full FOV MAE of 102.4 ± 11.1 HU. For GAN, the mean PSNR was 26.6 ± 1.2 and SSIM was 0.83 ± 0.03. GAN synCTs preserved details better than CNN, and regions of abnormal anatomy were well represented on GAN synCTs.
    Conclusions: We developed and validated a GAN model using a single T1-weighted MR image as the input that generates robust, high quality synCTs in seconds. Our method offers strong potential for supporting near real-time MR-only treatment planning in the brain.
    Language English
    Publication date 2018-06-14
    Country of publication United States
    Document type Journal Article
    ZDB-ID 188780-4
    ISSN (online) 2473-4209
    ISSN (print) 0094-2405
    DOI 10.1002/mp.13047
    Data source MEDical Literature Analysis and Retrieval System OnLINE

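Two of the image-quality metrics used throughout these synCT papers, MAE (in Hounsfield units) and PSNR, are straightforward to compute; a minimal sketch on flattened pixel lists (helper names are illustrative):

```python
import math

def mae(a, b):
    # Mean absolute error, e.g. in Hounsfield units between synCT and CT.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak):
    # Peak signal-to-noise ratio in dB for a given dynamic range `peak`.
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```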
