LIVIVO - The Search Portal for Life Sciences


Search results

Results 1–10 of 67


  1. Article ; Online: CMFAN: Cross-Modal Feature Alignment Network for Few-Shot Single-View 3D Reconstruction.

    Lai, Lvlong / Chen, Jian / Zhang, Zehong / Lin, Guosheng / Wu, Qingyao

    IEEE transactions on neural networks and learning systems

    2024  Volume PP

    Abstract Few-shot single-view 3D reconstruction learns to reconstruct the novel category objects based on a query image and a few support shapes. However, since the query image and the support shapes are of different modalities, there is an inherent feature misalignment problem damaging the reconstruction. Previous works in the literature do not consider this problem. To this end, we propose the cross-modal feature alignment network (CMFAN) with two novel techniques. One is a strategy for model pretraining, namely, cross-modal contrastive learning (CMCL), where the 2D images and 3D shapes of the same objects compose the positives, and those from different objects form the negatives. With CMCL, the model learns to embed the 2D and 3D modalities of the same object into a tight area in the feature space and push away those from different objects, thus effectively aligning the global cross-modal features. The other is cross-modal feature fusion (CMFF), which further aligns and fuses the local features. Specifically, it first re-represents the local features with the cross-attention operation, making the local features share more information. Then, CMFF generates a descriptor for the support features and attaches it to each local feature vector of the query image with dense concatenation. Moreover, CMFF can be applied to multilevel local features and brings further advantages. We conduct extensive experiments to evaluate the effectiveness of our designs, and CMFAN sets new state-of-the-art performance in all of the 1-/10-/25-shot tasks of the ShapeNet and ModelNet datasets.
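The cross-modal contrastive objective the abstract describes can be sketched as a standard InfoNCE loss over paired 2D-image and 3D-shape embeddings: row i of each matrix embeds the same object (a positive pair), every other combination is a negative. This is a minimal NumPy illustration of the general technique, not the paper's implementation; `cmcl_loss` and its signature are hypothetical names.

```python
# Hypothetical sketch of cross-modal contrastive learning (CMCL):
# paired 2D-image and 3D-shape embeddings of the same object are
# positives; all other pairs in the batch are negatives (InfoNCE).
import numpy as np

def cmcl_loss(img_emb: np.ndarray, shape_emb: np.ndarray, tau: float = 0.1) -> float:
    """img_emb, shape_emb: (N, D) L2-normalised embeddings, row i paired with row i."""
    # Cosine-similarity logits between every image and every shape.
    logits = img_emb @ shape_emb.T / tau                       # (N, N)
    # Softmax cross-entropy with the diagonal (matching pair) as the target.
    logits = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

# Toy usage: correctly paired embeddings give a much lower loss
# than deliberately mismatched ones.
rng = np.random.default_rng(0)
e = rng.normal(size=(8, 16))
e /= np.linalg.norm(e, axis=1, keepdims=True)
loss_aligned = cmcl_loss(e, e)                      # positives identical to anchors
loss_random = cmcl_loss(e, np.roll(e, 1, axis=0))   # mismatched pairs
```

Minimising this loss pulls the two modalities of one object together and pushes different objects apart, which is the global alignment effect the abstract attributes to CMCL.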
    Language English
    Publishing date 2024-04-09
    Publishing country United States
    Document type Journal Article
    ISSN 2162-2388
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2024.3383039
    Database MEDical Literature Analysis and Retrieval System OnLINE


  2. Article ; Online: iEnhancer-DCSA: identifying enhancers via dual-scale convolution and spatial attention.

    Wang, Wenjun / Wu, Qingyao / Li, Chunshan

    BMC genomics

    2023  Volume 24, Issue 1, Page(s) 393

    Abstract Background: Due to the dynamic nature of enhancers, identifying enhancers and determining their strength are major bioinformatics challenges. With the development of deep learning, several models have facilitated enhancer detection in recent years. However, existing studies either neglect information from motifs of different lengths or treat the features at all spatial locations equally. How to effectively use multi-scale motif information while ignoring irrelevant information is a question worthy of serious consideration. In this paper, we propose an accurate and stable predictor, iEnhancer-DCSA, mainly composed of dual-scale fusion and spatial attention, which automatically extracts features of different-length motifs and selectively focuses on the important features.
    Results: Our experimental results demonstrate that iEnhancer-DCSA is remarkably superior to existing state-of-the-art methods on the test dataset. Especially, the accuracy and MCC of enhancer identification are improved by 3.45% and 9.41%, respectively. Meanwhile, the accuracy and MCC of enhancer classification are improved by 7.65% and 18.1%, respectively. Furthermore, we conduct ablation studies to demonstrate the effectiveness of dual-scale fusion and spatial attention.
    Conclusions: iEnhancer-DCSA will be a valuable computational tool in identifying and classifying enhancers, especially for those not included in the training dataset.
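The dual-scale idea above can be illustrated by running two 1-D convolutions with different kernel widths over a one-hot DNA sequence and concatenating the pooled feature maps, so both short and long motifs are captured. This is a hedged NumPy sketch of the general technique; the function names, filter counts, and kernel widths are illustrative, not the paper's architecture.

```python
# Sketch: dual-scale 1-D convolution over one-hot DNA (illustrative only).
import numpy as np

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """(L, 4) one-hot encoding of a DNA string."""
    m = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        m[i, BASES.index(b)] = 1.0
    return m

def conv1d(x: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """Valid 1-D convolution: x (L, 4), kernels (F, k, 4) -> (L-k+1, F)."""
    f, k, _ = kernels.shape
    L = x.shape[0]
    out = np.empty((L - k + 1, f))
    for i in range(L - k + 1):
        window = x[i:i + k]                                  # (k, 4)
        out[i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return out

def dual_scale_features(seq: str, small: np.ndarray, large: np.ndarray) -> np.ndarray:
    """Max-pool each scale over positions and concatenate -> (F_small + F_large,)."""
    x = one_hot(seq)
    return np.concatenate([conv1d(x, small).max(axis=0),
                           conv1d(x, large).max(axis=0)])

rng = np.random.default_rng(1)
small_k = rng.normal(size=(8, 3, 4))    # 8 filters of width 3 (short motifs)
large_k = rng.normal(size=(8, 7, 4))    # 8 filters of width 7 (longer motifs)
feat = dual_scale_features("ACGTACGTACGT", small_k, large_k)
```

A spatial-attention module, as in the paper, would reweight positions before pooling instead of taking a plain maximum.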
    MeSH term(s) Enhancer Elements, Genetic ; Computational Biology/methods
    Language English
    Publishing date 2023-07-13
    Publishing country England
    Document type Journal Article
    ZDB-ID 2041499-7
    ISSN 1471-2164
    ISSN (online) 1471-2164
    DOI 10.1186/s12864-023-09468-1
    Database MEDical Literature Analysis and Retrieval System OnLINE


  3. Article ; Online: High-valence metal sites induced by heterostructure engineering for promoting 5-hydroxymethylfurfural electrooxidation and hydrogen generation.

    Shang, Ningzhao / Li, Wenjiong / Wu, Qingyao / Li, Huafan / Wang, Hongchao / Wang, Chun / Bai, Guoyi

    Journal of colloid and interface science

    2024  Volume 659, Page(s) 621–628

    Abstract The electrocatalytic 5-hydroxymethylfurfural (HMF) oxidation reaction coupling with hydrogen evolution reaction (HER) serves as a promising strategy to generate both high-value-added products and clean energy, which is limited by the poor catalytic efficiency of bifunctional electrocatalysts and unclear electrocatalytic mechanism for HMF oxidation reaction. Herein, we fabricate a bifunctional NiSe
    Language English
    Publishing date 2024-01-06
    Publishing country United States
    Document type Journal Article
    ZDB-ID 241597-5
    ISSN 1095-7103 ; 0021-9797
    ISSN (online) 1095-7103
    ISSN 0021-9797
    DOI 10.1016/j.jcis.2024.01.040
    Database MEDical Literature Analysis and Retrieval System OnLINE


  4. Book ; Online: Spatial-Semantic Collaborative Cropping for User Generated Content

    Su, Yukun / Cao, Yiwen / Deng, Jingliang / Rao, Fengyun / Wu, Qingyao

    2024  

    Abstract A large amount of User Generated Content (UGC) is uploaded to the Internet daily and displayed to people worldwide through the client side (e.g., mobile and PC). This requires cropping algorithms to produce an aesthetic thumbnail within a specific aspect ratio on different devices. However, existing image cropping works mainly focus on landmark or landscape images, and fail to model the relations among multiple objects with complex backgrounds in UGC. Besides, previous methods merely consider the aesthetics of the cropped images while ignoring the content integrity, which is crucial for UGC cropping. In this paper, we propose a Spatial-Semantic Collaborative cropping network (S2CNet) for arbitrary user generated content, accompanied by a new cropping benchmark. Specifically, we first mine the visual genes of the potential objects. Then, the suggested adaptive attention graph recasts this task as a procedure of information association over visual nodes. The underlying spatial and semantic relations are ultimately centralized to the crop candidate through differentiable message passing, which helps our network efficiently preserve both the aesthetics and the content integrity. Extensive experiments on the proposed UGCrop5K and other public datasets demonstrate the superiority of our approach over state-of-the-art counterparts. Our project is available at https://github.com/suyukun666/S2CNet.
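"Information association over visual nodes" through differentiable message passing can be sketched in its simplest form: each node (an object or crop-candidate feature vector) repeatedly aggregates the other nodes' features, weighted by similarity-based attention. This is a generic illustration under assumed shapes, not S2CNet's actual graph module.

```python
# Sketch of attention-style message passing over visual nodes
# (generic technique; not the paper's adaptive attention graph).
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def message_pass(nodes: np.ndarray, rounds: int = 2) -> np.ndarray:
    """nodes: (N, D). Each round mixes every node with a
    similarity-weighted sum of all nodes (residual update)."""
    h = nodes
    for _ in range(rounds):
        attn = softmax(h @ h.T / np.sqrt(h.shape[1]))  # (N, N), rows sum to 1
        h = h + attn @ h                               # aggregate messages
    return h

rng = np.random.default_rng(2)
h0 = rng.normal(size=(5, 8))    # 5 hypothetical visual nodes, 8-dim features
h2 = message_pass(h0)
```

Because every step is differentiable, gradients from a downstream cropping loss can flow back through the aggregation, which is what makes end-to-end training of such a graph possible.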
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 700
    Publishing date 2024-01-15
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  5. Article ; Online: Fast Manifold Ranking with Local Bipartite Graph.

    Chen, Xiaojun / Ye, Yuzhong / Wu, Qingyao / Nie, Feiping

    IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

    2021  Volume PP

    Abstract During the past decades, manifold ranking has been widely applied to content-based image retrieval and has shown excellent performance. However, manifold ranking is computationally expensive in both graph construction and ranking learning. Much effort has been devoted to improving its performance by introducing approximation techniques. In this paper, we propose a fast manifold ranking method, namely Local Bipartite Manifold Ranking (LBMR). Given a set of images, we first extract multiple regions from each image to form a large image descriptor matrix, and then use the anchor-based strategy to construct a local bipartite graph, in which a regional k-means (RKM) is proposed to obtain high-quality anchors. We propose an iterative method to directly solve the manifold ranking problem from the local bipartite graph, which monotonically decreases the objective function value in each iteration until the algorithm converges. Experimental results on several real-world image datasets demonstrate the effectiveness and efficiency of our proposed method.
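For context, the classical manifold-ranking iteration this line of work accelerates is f ← αSf + (1−α)y, where S is the symmetrically normalised affinity matrix and y the query indicator; it converges because αS has spectral radius below 1. The sketch below shows that standard formulation on a dense graph, not the paper's bipartite variant.

```python
# Standard manifold ranking (Zhou-style iteration) on a small graph;
# a baseline sketch, not the paper's local bipartite algorithm.
import numpy as np

def manifold_rank(W: np.ndarray, y: np.ndarray, alpha: float = 0.9,
                  iters: int = 200) -> np.ndarray:
    """W: (N, N) symmetric non-negative affinities; y: (N,) query indicator."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt          # normalised affinity matrix
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1 - alpha) * y
    return f

# Two clusters joined by a weak bridge; querying node 0 should rank
# its own cluster above the other one.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.01                     # weak inter-cluster edge
y = np.zeros(6); y[0] = 1.0
scores = manifold_rank(W, y)
```

The anchor-based bipartite construction in the paper replaces the dense N×N affinity matrix with a much smaller image-to-anchor graph, which is where the speed-up comes from.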
    Language English
    Publishing date 2021-07-15
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0042
    ISSN (online) 1941-0042
    DOI 10.1109/TIP.2021.3096082
    Database MEDical Literature Analysis and Retrieval System OnLINE


  6. Book ; Online: SARA: Controllable Makeup Transfer with Spatial Alignment and Region-Adaptive Normalization

    Zhong, Xiaojing / Huang, Xinyi / Wu, Zhonghua / Lin, Guosheng / Wu, Qingyao

    2023

    Abstract Makeup transfer is a process of transferring the makeup style from a reference image to the source images, while preserving the source images' identities. This technique is highly desirable and finds many applications. However, existing methods lack fine-level control of the makeup style, making it challenging to achieve high-quality results when dealing with large spatial misalignments. To address this problem, we propose a novel Spatial Alignment and Region-Adaptive normalization method (SARA) in this paper. Our method generates detailed makeup transfer results that can handle large spatial misalignments and achieve part-specific and shade-controllable makeup transfer. Specifically, SARA comprises three modules: Firstly, a spatial alignment module that preserves the spatial context of makeup and provides a target semantic map for guiding the shape-independent style codes. Secondly, a region-adaptive normalization module that decouples shape and makeup style using per-region encoding and normalization, which facilitates the elimination of spatial misalignments. Lastly, a makeup fusion module blends identity features and makeup style by injecting learned scale and bias parameters. Experimental results show that our SARA method outperforms existing methods and achieves state-of-the-art performance on two public datasets.
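Region-adaptive normalisation, in the spirit described above, normalises each semantic region of a feature map with its own statistics and then applies a per-region scale and bias drawn from the reference makeup. The following is an assumed, simplified sketch; the function name, the dict-based parameter passing, and the shapes are illustrative, not the paper's API.

```python
# Sketch: per-region normalisation with region-specific scale/bias
# (illustrative stand-in for region-adaptive normalisation).
import numpy as np

def region_adaptive_norm(feat: np.ndarray, regions: np.ndarray,
                         scales: dict, biases: dict, eps: float = 1e-5) -> np.ndarray:
    """feat: (H, W, C); regions: (H, W) int labels; scales/biases: label -> (C,)."""
    out = np.empty_like(feat)
    for r in np.unique(regions):
        mask = regions == r
        x = feat[mask]                                   # (n_r, C) pixels of region r
        mu, sigma = x.mean(axis=0), x.std(axis=0)
        out[mask] = (x - mu) / (sigma + eps) * scales[r] + biases[r]
    return out

rng = np.random.default_rng(3)
feat = rng.normal(size=(4, 4, 3))
regions = np.array([[0, 0, 1, 1]] * 4)                   # e.g. lips = 0, skin = 1
scales = {0: np.full(3, 2.0), 1: np.full(3, 0.5)}        # per-region style scale
biases = {0: np.zeros(3), 1: np.ones(3)}                 # per-region style bias
styled = region_adaptive_norm(feat, regions, scales, biases)
```

Because normalisation strips each region's own statistics before the style parameters are applied, shape and makeup style are decoupled, which is the property the abstract highlights.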
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2023-11-28
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  7. Book ; Online: DI-Net: Decomposed Implicit Garment Transfer Network for Digital Clothed 3D Human

    Zhong, Xiaojing / Su, Yukun / Wu, Zhonghua / Lin, Guosheng / Wu, Qingyao

    2023

    Abstract 3D virtual try-on enjoys many potential applications and hence has attracted wide attention. However, it remains a challenging task that has not been adequately solved. Existing 2D virtual try-on methods cannot be directly extended to 3D since they lack the ability to perceive the depth of each pixel. Besides, 3D virtual try-on approaches are mostly built on a fixed topological structure and with heavy computation. To deal with these problems, we propose a Decomposed Implicit garment transfer network (DI-Net), which can effortlessly reconstruct a 3D human mesh with the new try-on result and preserve the texture from an arbitrary perspective. Specifically, DI-Net consists of two modules: 1) a complementary warping module that warps the reference image to have the same pose as the source image through dense correspondence learning and sparse flow learning; and 2) a geometry-aware decomposed transfer module that decomposes the garment transfer into image-layout-based transfer and texture-based transfer, achieving surface and texture reconstruction by constructing pixel-aligned implicit functions. Experimental results show the effectiveness and superiority of our method in the 3D virtual try-on task, yielding higher-quality results than other existing methods.
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2023-11-28
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  8. Article ; Online: CycleSegNet: Object Co-Segmentation With Cycle Refinement and Region Correspondence.

    Zhang, Chi / Li, Guankai / Lin, Guosheng / Wu, Qingyao / Yao, Rui

    IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

    2021  Volume 30, Page(s) 5652–5664

    Abstract Image co-segmentation is an active computer vision task that aims to segment the common objects from a set of images. Recently, researchers have designed various learning-based algorithms to undertake the co-segmentation task. The main difficulty in this task is how to effectively transfer information between images to make conditional predictions. In this paper, we present CycleSegNet, a novel framework for the co-segmentation task. Our network design has two key components: a region correspondence module, which is the basic operation for exchanging information between local image regions, and a cycle refinement module, which utilizes ConvLSTMs to progressively update image representations and exchange information in a cyclic and iterative manner. Extensive experiments demonstrate that our proposed method significantly outperforms the state-of-the-art methods on four popular benchmark datasets - the PASCAL VOC, MSRC, Internet, and iCoseg datasets - by 2.6%, 7.7%, 2.2%, and 2.9%, respectively.
    Language English
    Publishing date 2021-06-18
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0042
    ISSN (online) 1941-0042
    DOI 10.1109/TIP.2021.3087401
    Database MEDical Literature Analysis and Retrieval System OnLINE


  9. Article ; Online: Towards effective deep transfer via attentive feature alignment.

    Xie, Zheng / Wen, Zhiquan / Wang, Yaowei / Wu, Qingyao / Tan, Mingkui

    Neural networks : the official journal of the International Neural Network Society

    2021  Volume 138, Page(s) 98–109

    Abstract Training a deep convolutional network from scratch requires a large amount of labeled data, which however may not be available for many practical tasks. To alleviate the data burden, a practical approach is to adapt a pre-trained model learned on a large source domain to the target domain, but the performance can be limited when the source and target domain data distributions have large differences. Some recent works attempt to alleviate this issue by imposing feature alignment over the intermediate feature maps between the source and target networks. However, for a source model, many of the channels/spatial features of each layer can be irrelevant to the target task; thus, directly applying feature alignment may not achieve promising performance. In this paper, we propose an Attentive Feature Alignment (AFA) method for effective domain knowledge transfer by identifying and attending to the relevant channels and spatial features between two domains. To this end, we devise two learnable attentive modules at both the channel and spatial levels. We then sequentially perform attentive spatial- and channel-level feature alignments between the source and target networks, in which the target model and the attentive modules are learned simultaneously. Moreover, we theoretically analyze the generalization performance of our method, which confirms its superiority to existing methods. Extensive experiments on both image classification and face recognition demonstrate the effectiveness of our method. The source code and the pre-trained models are available at https://github.com/xiezheng-cs/AFA.
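The core idea, weighting the alignment loss so that task-irrelevant channels contribute little, can be sketched as an attention-weighted feature-matching loss. The attention below is a plain softmax over a parameter vector, a simplified stand-in for the paper's learned channel- and spatial-level modules; all names are illustrative.

```python
# Sketch: channel-attention-weighted feature alignment loss
# (simplified stand-in for learned attentive modules).
import numpy as np

def attentive_align_loss(f_src: np.ndarray, f_tgt: np.ndarray,
                         channel_logits: np.ndarray) -> float:
    """f_src, f_tgt: (C, H, W) intermediate features; channel_logits: (C,)."""
    w = np.exp(channel_logits - channel_logits.max())
    w = w / w.sum()                                          # channel attention weights
    per_channel = ((f_src - f_tgt) ** 2).mean(axis=(1, 2))   # (C,) misalignment
    return float((w * per_channel).sum())

rng = np.random.default_rng(4)
src = rng.normal(size=(4, 5, 5))
tgt = src.copy()
tgt[0] += 10.0                              # channel 0 differs strongly
uniform = np.zeros(4)                       # plain (unweighted) alignment
down0 = np.array([-10.0, 0.0, 0.0, 0.0])    # attention suppresses channel 0
loss_uniform = attentive_align_loss(src, tgt, uniform)
loss_attended = attentive_align_loss(src, tgt, down0)
```

If channel 0 were irrelevant to the target task, the attended loss would stop penalising the target network for diverging on it, which is the behaviour the abstract argues plain feature alignment lacks.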
    MeSH term(s) Machine Learning ; Software/standards
    Language English
    Publishing date 2021-02-10
    Publishing country United States
    Document type Journal Article
    ZDB-ID 740542-x
    ISSN 1879-2782 ; 0893-6080
    ISSN (online) 1879-2782
    ISSN 0893-6080
    DOI 10.1016/j.neunet.2021.01.022
    Database MEDical Literature Analysis and Retrieval System OnLINE


  10. Article ; Online: Semisupervised Feature Selection via Structured Manifold Learning.

    Chen, Xiaojun / Chen, Renjie / Wu, Qingyao / Nie, Feiping / Yang, Min / Mao, Rui

    IEEE transactions on cybernetics

    2022  Volume 52, Issue 7, Page(s) 5756–5766

    Abstract Recently, semisupervised feature selection has gained more attention in many real applications due to the high cost of obtaining labeled data. However, existing methods cannot solve the "multimodality" problem that samples in some classes lie in several separate clusters. To solve the multimodality problem, this article proposes a new feature selection method for semisupervised tasks, namely, semisupervised structured manifold learning (SSML). The new method learns a new structured graph which consists of more clusters than the known classes. Meanwhile, we propose to exploit the submanifold in both labeled and unlabeled data by using the nearest neighbors of each object among both labeled and unlabeled objects. An iterative optimization algorithm is proposed to solve the new model. A series of experiments was conducted on both synthetic and real-world datasets, and the experimental results verify the ability of the new method to solve the multimodality problem and its superior performance compared with the state-of-the-art methods.
    MeSH term(s) Algorithms ; Cluster Analysis ; Learning ; Supervised Machine Learning
    Language English
    Publishing date 2022-07-04
    Publishing country United States
    Document type Journal Article
    ISSN 2168-2275
    ISSN (online) 2168-2275
    DOI 10.1109/TCYB.2021.3052847
    Database MEDical Literature Analysis and Retrieval System OnLINE
