LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 6279

  1. Book ; Online: GAN-RXA

    Zhao, Tianyi / Sarkar, Shamik / Krijestorac, Enes / Cabric, Danijela

    A Practical Scalable Solution to Receiver-Agnostic Transmitter Fingerprinting

    2023  

    Abstract: ... feature-extractor. We also propose two deep-learning approaches (SD-RXA and GAN-RXA) in this first stage ... without calibration. Moreover, GAN-RXA can further increase the closed-set classification accuracy by 5.0%, and ...

    Abstract Radio frequency fingerprinting has been proposed for device identification. However, experimental studies have also demonstrated its sensitivity to deployment changes. Recent works have addressed channel impacts by developing robust algorithms accounting for time and location variability, but the impacts of receiver impairments on transmitter fingerprints are yet to be solved. In this work, we investigate the receiver-agnostic transmitter fingerprinting problem and propose a novel two-stage supervised learning framework (RXA) to address it. In the first stage, our approach calibrates a receiver-agnostic transmitter feature-extractor. We also propose two deep-learning approaches (SD-RXA and GAN-RXA) in this first stage to improve the receiver-agnostic property of the RXA framework. In the second stage, the calibrated feature-extractor is utilized to train a transmitter classifier with only one receiver. We evaluate the proposed approaches on the transmitter identification problem using a large-scale WiFi dataset. We show that when a trained transmitter classifier is deployed on new receivers, the RXA framework can improve the classification accuracy by 19.5%, and the outlier detection rate by 10.0%, compared to a naive approach without calibration. Moreover, GAN-RXA can further increase the closed-set classification accuracy by 5.0%, and the outlier detection rate by 7.5%, compared to the RXA approach.
    Keywords Electrical Engineering and Systems Science - Signal Processing
    Subject code 006
    Publishing date 2023-03-24
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  2. Article ; Online: DTR-GAN

    Aolin Yang / Tiejun Yang / Xiang Zhao / Xin Zhang / Yanghui Yan / Chunxia Jiao

    Applied Sciences, Vol 14, Iss 1, p 95

    An Unsupervised Bidirectional Translation Generative Adversarial Network for MRI-CT Registration

    2023  

    Abstract: ... we develop an unsupervised multimodal image registration method named DTR-GAN. Firstly, we design ... GAN obtains a competitive performance compared to other methods in MRI-CT registration. Compared ... with DFR, DTR-GAN has not only obtained performance improvements of 2.35% and 2.08% in the dice similarity ...

    Abstract Medical image registration is a fundamental and indispensable element of medical image analysis, which can establish spatial consistency among corresponding anatomical structures across various medical images. Since images with different modalities exhibit different features, it remains a challenge to find their exact correspondence. Most current methods based on image-to-image translation cannot fully leverage the available information, which affects the subsequent registration performance. To solve this problem, we develop an unsupervised multimodal image registration method named DTR-GAN. Firstly, we design a multimodal registration framework via a bidirectional translation network to transform the multimodal image registration into a unimodal registration, which can effectively use the complementary information of different modalities. Then, to enhance the quality of the transformed images in the translation network, we design a multiscale encoder–decoder network that effectively captures both local and global features in images. Finally, we propose a mixed similarity loss to encourage the warped image to be closer to the target image in deep features. We extensively evaluate our method on abdominal MRI-CT image registration tasks against advanced unsupervised multimodal image registration approaches. The results indicate that DTR-GAN obtains competitive performance compared to other methods in MRI-CT registration. Compared with DFR, DTR-GAN not only obtained performance improvements of 2.35% and 2.08% in the dice similarity coefficient (DSC) for MRI-CT and CT-MRI registration on the Learn2Reg dataset but also decreased the average symmetric surface distance (ASD) by 0.33 mm and 0.12 mm.
    Keywords multimodal image registration ; image-to-image translation ; unsupervised ; deep learning ; Technology ; T ; Engineering (General). Civil engineering (General) ; TA1-2040 ; Biology (General) ; QH301-705.5 ; Physics ; QC1-999 ; Chemistry ; QD1-999
    Subject code 006
    Language English
    Publishing date 2023-12-01T00:00:00Z
    Publisher MDPI AG
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  3. Article ; Online: SA-GAN

    Jiayi Zhao / Yong Ma / Fu Chen / Erping Shang / Wutao Yao / Shuyan Zhang / Jin Yang

    Remote Sensing, Vol 15, Iss 1391, p 1391

    A Second Order Attention Generator Adversarial Network with Region Aware Strategy for Real Satellite Images Super Resolution Reconstruction

    2023  

    Abstract: ... generator adversarial attention network (SA-GAN) model based on real-world remote sensing images is proposed ...

    Abstract High-resolution (HR) remote sensing images have important applications in many scenarios, and improving the resolution of remote sensing images via algorithms is one of the key research fields. However, current super-resolution (SR) algorithms, which are trained on synthetic datasets, tend to have poor performance on real-world low-resolution (LR) images. Moreover, due to the inherent complexity of real-world remote sensing images, current models are prone to color distortion, blurred edges, and unrealistic artifacts. To address these issues, real-SR datasets using the Gao Fen (GF) satellite images at different spatial resolutions have been established to simulate real degradation situations; moreover, a second-order attention generator adversarial network (SA-GAN) model based on real-world remote sensing images is proposed to implement the SR task. In the generator network, a second-order channel attention mechanism and a region-level non-local module are used to fully utilize the a priori information in low-resolution (LR) images, as well as adopting a region-aware loss to suppress artifact generation. Experiments on test data demonstrate that the model delivers good performance on quantitative metrics, and the visual quality outperforms that of previous approaches. The Fréchet inception distance (FID) score and the learned perceptual image patch similarity (LPIPS) value using the proposed method are improved by 17.67% and 6.61%, respectively. Migration experiments in real scenarios also demonstrate the effectiveness and robustness of the method.
    Keywords super-resolution ; region aware ; second-order channel attention ; Gao Fen satellite ; region-level non-local ; Science ; Q
    Subject code 006
    Language English
    Publishing date 2023-03-01T00:00:00Z
    Publisher MDPI AG
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  4. Article ; Online: In-Domain GAN Inversion for Faithful Reconstruction and Editability.

    Zhu, Jiapeng / Shen, Yujun / Xu, Yinghao / Zhao, Deli / Chen, Qifeng / Zhou, Bolei

    IEEE transactions on pattern analysis and machine intelligence

    2024  Volume 46, Issue 5, Page(s) 2607–2621

    Abstract: ... that can adequately recover the input image to edit, which is also known as GAN inversion. To invert a GAN model ... fills in this gap by proposing in-domain GAN inversion, which consists of a domain-guided encoder and ... trained GAN model. In this way, we manage to sufficiently reuse the knowledge learned by GANs ...

    Abstract Generative Adversarial Networks (GANs) have significantly advanced image synthesis by mapping randomly sampled latent codes to high-fidelity synthesized images. However, applying well-trained GANs to real image editing remains challenging. A common solution is to find an approximate latent code that can adequately recover the input image to edit, which is also known as GAN inversion. To invert a GAN model, prior works typically focus on reconstructing the target image at the pixel level, yet few studies are conducted on whether the inverted result can well support manipulation at the semantic level. This work fills in this gap by proposing in-domain GAN inversion, which consists of a domain-guided encoder and a domain-regularized optimizer, to regularize the inverted code in the native latent space of the pre-trained GAN model. In this way, we manage to sufficiently reuse the knowledge learned by GANs for image reconstruction, facilitating a wide range of editing applications without any retraining. We further conduct comprehensive analyses of the effects of the encoder structure, the starting inversion point, and the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property. Such a trade-off sheds light on how a GAN model represents an image with various semantics encoded in the learned latent distribution.
    Language English
    Publishing date 2024-04-03
    Publishing country United States
    Document type Journal Article
    ISSN 1939-3539
    ISSN (online) 1939-3539
    DOI 10.1109/TPAMI.2023.3310872
    Database MEDical Literature Analysis and Retrieval System OnLINE

  5. Article ; Online: Few-Shot Face Stylization via GAN Prior Distillation.

    Zhao, Ruoyu / Zhu, Mingrui / Wang, Nannan / Gao, Xinbo

    IEEE transactions on neural networks and learning systems

    2024  Volume PP

    Abstract: ... results. In this article, we propose GAN Prior Distillation (GPD) to enable effective few-shot face ... stylization. GPD contains two models: a teacher network with GAN Prior and a student network that fulfills end ...

    Abstract Face stylization has made notable progress in recent years. However, when trained on limited data, the performance of existing approaches declines significantly. Although some studies have attempted to tackle this problem, they either fail to achieve a true few-shot setting (fewer than 10 samples) or obtain only suboptimal results. In this article, we propose GAN Prior Distillation (GPD) to enable effective few-shot face stylization. GPD contains two models: a teacher network with a GAN prior and a student network that fulfills end-to-end translation. Specifically, we adapt the teacher network, trained on large-scale data in the source domain, to the target domain using a handful of samples, where it can learn the target domain's knowledge. Then, we can achieve few-shot augmentation by generating source-domain and target-domain images simultaneously with the same latent codes. We propose an anchor-based knowledge distillation module that can fully use the difference between the training and the augmented data to distill the knowledge of the teacher network into the student network. The trained student network achieves excellent generalization performance with the absorption of additional knowledge. Qualitative and quantitative experiments demonstrate that our method achieves superior results to state-of-the-art approaches in a few-shot setting.
    Language English
    Publishing date 2024-03-27
    Publishing country United States
    Document type Journal Article
    ISSN 2162-2388
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2024.3377609
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Article ; Online: CTAB-GAN+: enhancing tabular data synthesis.

    Zhao, Zilong / Kunar, Aditya / Birke, Robert / Van der Scheer, Hiek / Chen, Lydia Y

    Frontiers in big data

    2024  Volume 6, Page(s) 1296508

    Abstract: ... conditional tabular GAN. CTAB-GAN+ improves upon state-of-the-art by (i) adding downstream losses ... to conditional GAN for higher utility synthetic data in both classification and regression domains; (ii) using ... CTAB-GAN+ on statistical similarity and machine learning utility against state-of-the-art tabular GANs ...

    Abstract The usage of synthetic data is gaining momentum, in part due to the unavailability of original data owing to privacy and legal considerations, and in part due to its utility as an augmentation of authentic data. Generative adversarial networks (GANs), a paragon of generative models, initially for images and subsequently for tabular data, have contributed many of the state-of-the-art synthesizers. As GANs improve, the synthesized data increasingly resemble the real data, risking privacy leakage. Differential privacy (DP) provides theoretical guarantees on privacy loss but degrades data utility. Striking the best trade-off remains a challenging research question. In this study, we propose CTAB-GAN+, a novel conditional tabular GAN. CTAB-GAN+ improves upon the state-of-the-art by (i) adding downstream losses to the conditional GAN for higher-utility synthetic data in both classification and regression domains; (ii) using Wasserstein loss with gradient penalty for better training convergence; (iii) introducing novel encoders targeting mixed continuous-categorical variables and variables with unbalanced or skewed data; and (iv) training with DP stochastic gradient descent to impose strict privacy guarantees. We extensively evaluate CTAB-GAN+ on statistical similarity and machine learning utility against state-of-the-art tabular GANs. The results show that CTAB-GAN+ synthesizes privacy-preserving data with at least 21.9% higher machine learning utility (i.e., F1-score) across multiple datasets and learning tasks under a given privacy budget.
    Language English
    Publishing date 2024-01-08
    Publishing country Switzerland
    Document type Journal Article
    ISSN 2624-909X
    ISSN (online) 2624-909X
    DOI 10.3389/fdata.2023.1296508
    Database MEDical Literature Analysis and Retrieval System OnLINE

  7. Book ; Online: FCL-GAN

    Zhao, Suiyi / Zhang, Zhao / Hong, Richang / Xu, Mingliang / Yang, Yi / Wang, Meng

    A Lightweight and Real-Time Baseline for Unsupervised Blind Image Deblurring

    2022  

    Abstract: ... Contrastive Loss Constrained Lightweight CycleGAN (shortly, FCL-GAN), with attractive properties, i.e., no ... datasets demonstrate the effectiveness of our FCL-GAN in terms of performance, model size and reference ... and Meng Wang, "FCL-GAN: A Lightweight and Real-Time Baseline for Unsupervised Blind Image Deblurring ...

    Abstract Blind image deblurring (BID) remains a challenging and significant task. Benefiting from the strong fitting ability of deep learning, paired data-driven supervised BID methods have made great progress. However, paired data are usually synthesized by hand, and realistic blurs are more complex than synthetic ones, which makes supervised methods inept at modeling realistic blurs and hinders their real-world applications. As such, unsupervised deep BID methods without paired data offer certain advantages, but current methods still suffer from some drawbacks, e.g., bulky model size, long inference time, and strict image resolution and domain requirements. In this paper, we propose a lightweight and real-time unsupervised BID baseline, termed Frequency-domain Contrastive Loss Constrained Lightweight CycleGAN (shortly, FCL-GAN), with attractive properties, i.e., no image domain limitation, no image resolution limitation, 25x lighter than SOTA, and 5x faster than SOTA. To guarantee the lightweight property and performance superiority, two new collaboration units called the lightweight domain conversion unit (LDCU) and the parameter-free frequency-domain contrastive unit (PFCU) are designed. LDCU mainly implements inter-domain conversion in a lightweight manner. PFCU further explores the similarity measure, external difference, and internal connection between the blurred-domain and sharp-domain images in the frequency domain, without involving extra parameters. Extensive experiments on several image datasets demonstrate the effectiveness of our FCL-GAN in terms of performance, model size, and inference time.

    Comment: Please cite this work as: Suiyi Zhao, Zhao Zhang, Richang Hong, Mingliang Xu, Yi Yang and Meng Wang, "FCL-GAN: A Lightweight and Real-Time Baseline for Unsupervised Blind Image Deblurring," In: Proceedings of the 30th ACM International Conference on Multimedia (ACM MM), Lisbon, Portugal, June 2022
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2022-04-16
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  8. Book ; Online: TBI-GAN

    Zhao, Xiangyu / Zang, Di / Wang, Sheng / Shen, Zhenrong / Xuan, Kai / Wei, Zeyu / Wang, Zhe / Zheng, Ruizhe / Wu, Xuehai / Li, Zheren / Wang, Qian / Qi, Zengxin / Zhang, Lichi

    An Adversarial Learning Approach for Data Synthesis on Traumatic Brain Segmentation

    2022  

    Abstract: ... inpainting model named TBI-GAN to synthesize TBI MR scans with paired brain label maps. The main strength ... of our TBI-GAN method is that it can generate TBI images and corresponding label maps simultaneously ... enhance the capacity of data augmentation. Experimental results show that the proposed TBI-GAN method ...

    Abstract Brain network analysis for traumatic brain injury (TBI) patients is critical for consciousness-level assessment and prognosis evaluation, which require the segmentation of certain consciousness-related brain regions. However, it is difficult to construct a TBI segmentation model, as manually annotated MR scans of TBI patients are hard to collect. Data augmentation techniques can be applied to alleviate the issue of data scarcity. However, conventional data augmentation strategies such as spatial and intensity transformation are unable to mimic the deformation and lesions in traumatic brains, which limits the performance of the subsequent segmentation task. To address these issues, we propose a novel medical image inpainting model named TBI-GAN to synthesize TBI MR scans with paired brain label maps. The main strength of our TBI-GAN method is that it can generate TBI images and corresponding label maps simultaneously, which has not been achieved by previous inpainting methods for medical images. We first generate the inpainted image under the guidance of edge information in a coarse-to-fine manner, and then the synthesized intensity image is used as the prior for label inpainting. Furthermore, we introduce a registration-based template augmentation pipeline to increase the diversity of the synthesized image pairs and enhance the capacity of data augmentation. Experimental results show that the proposed TBI-GAN method can produce sufficient synthesized TBI images with high quality and valid label maps, which can greatly improve 2D and 3D traumatic brain segmentation performance compared with the alternatives.
    Keywords Electrical Engineering and Systems Science - Image and Video Processing ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2022-08-11
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  9. Book ; Online: CTAB-GAN+

    Zhao, Zilong / Kunar, Aditya / Birke, Robert / Chen, Lydia Y.

    Enhancing Tabular Data Synthesis

    2022  

    Abstract: ... Networks (GAN). As GANs improve the synthesized data increasingly resemble the real data risking to leak ... a novel conditional tabular GAN. CTAB-GAN+ improves upon state-of-the-art by (i) adding downstream losses ... CTAB-GAN+ on data similarity and analysis utility against state-of-the-art tabular GANs. The results ...

    Abstract While data sharing is crucial for knowledge development, privacy concerns and strict regulation (e.g., the European General Data Protection Regulation (GDPR)) limit its full effectiveness. Synthetic tabular data emerges as an alternative that enables data sharing while fulfilling regulatory and privacy constraints. State-of-the-art tabular data synthesizers draw methodologies from generative adversarial networks (GANs). As GANs improve, the synthesized data increasingly resemble the real data, risking privacy leakage. Differential privacy (DP) provides theoretical guarantees on privacy loss but degrades data utility. Striking the best trade-off remains a challenging research question. We propose CTAB-GAN+, a novel conditional tabular GAN. CTAB-GAN+ improves upon the state-of-the-art by (i) adding downstream losses to conditional GANs for higher-utility synthetic data in both classification and regression domains; (ii) using Wasserstein loss with gradient penalty for better training convergence; (iii) introducing novel encoders targeting mixed continuous-categorical variables and variables with unbalanced or skewed data; and (iv) training with DP stochastic gradient descent to impose strict privacy guarantees. We extensively evaluate CTAB-GAN+ on data similarity and analysis utility against state-of-the-art tabular GANs. The results show that CTAB-GAN+ synthesizes privacy-preserving data with at least 48.16% higher utility across multiple datasets and learning tasks under different privacy budgets.

    Comment: arXiv admin note: substantial text overlap with arXiv:2102.08369, arXiv:2108.10064
    Keywords Computer Science - Machine Learning
    Publishing date 2022-04-01
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  10. Book ; Online: PASTA-GAN++

    Xie, Zhenyu / Huang, Zaiyu / Zhao, Fuwei / Dong, Haoye / Kampffmeyer, Michael / Dong, Xin / Zhu, Feida / Liang, Xiaodan

    A Versatile Framework for High-Resolution Unpaired Virtual Try-on

    2022  

    Abstract: ... we propose a characteristic-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN++ (PASTA ... GAN++), to achieve a versatile system for high-resolution unpaired virtual try-on. Specifically ... our PASTA-GAN++ consists of an innovative patch-routed disentanglement module to decouple the intact garment ...

    Abstract Image-based virtual try-on is one of the most promising applications of human-centric image generation due to its tremendous real-world potential. In this work, we take a step forward to explore versatile virtual try-on solutions, which we argue should possess three main properties, namely, they should support unsupervised training, arbitrary garment categories, and controllable garment editing. To this end, we propose a characteristic-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN++ (PASTA-GAN++), to achieve a versatile system for high-resolution unpaired virtual try-on. Specifically, our PASTA-GAN++ consists of an innovative patch-routed disentanglement module to decouple the intact garment into normalized patches, which is capable of retaining garment style information while eliminating the garment spatial information, thus alleviating the overfitting issue during unsupervised training. Furthermore, PASTA-GAN++ introduces a patch-based garment representation and a patch-guided parsing synthesis block, allowing it to handle arbitrary garment categories and support local garment editing. Finally, to obtain try-on results with realistic texture details, PASTA-GAN++ incorporates a novel spatially-adaptive residual module to inject the coarse warped garment feature into the generator. Extensive experiments on our newly collected UnPaired virtual Try-on (UPT) dataset demonstrate the superiority of PASTA-GAN++ over existing SOTAs and its ability to perform controllable garment editing.

    Comment: arXiv admin note: substantial text overlap with arXiv:2111.10544
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2022-07-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
