LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–4 of 4

  1. Article ; Online: Anodic Reconstructed p

    Liang, Jiehui / Liu, Peixin / Xie, Shaohua / Liu, Qianhu / Wang, Junkun / Guo, Jiansen / Wu, Haoyang / Wang, Wenliang / Li, Guoqiang

    Small (Weinheim an der Bergstrasse, Germany)

    2024  , Page(s) e2400096

    Abstract The extremely poor solution stability and massive carrier recombination of III-V semiconductor nanomaterials have seriously hindered efficient and stable hydrogen production. In this work, an anodic reconstruction strategy based on group III-V active semiconductors is proposed for the first time, yielding a 19-fold photo-gain. Most importantly, the device after anodic reconstruction shows superior stability in a prolonged photoelectrochemical (PEC) test over 8100 s, during which the final photocurrent density does not decrease but instead increases by 63.15%. Combining experiments with DFT calculations, the anodic reconstruction mechanism is elucidated: through the oxidation of indium clusters and the migration of arsenic atoms, the reconstruction formed p
    Language English
    Publishing date 2024-03-22
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 2168935-0
    ISSN (online) 1613-6829
    ISSN (print) 1613-6810
    DOI 10.1002/smll.202400096
    Database MEDical Literature Analysis and Retrieval System OnLINE

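As a reading aid for the figures of merit quoted in this abstract, here is a small Python sketch relating photo-gain to the stability-test photocurrent change. Only the 19-fold gain and the 63.15% rise over the 8100 s test come from the record; the baseline photocurrent density is a placeholder assumption.

```python
# Illustrative arithmetic only: the 19-fold gain and the 63.15% rise over
# the 8100 s test are taken from the abstract; j_pristine is a made-up
# placeholder baseline, not a measured value.

def photo_gain(j_after: float, j_before: float) -> float:
    """Ratio of photocurrent densities after vs. before reconstruction."""
    return j_after / j_before

def relative_change(j_final: float, j_start: float) -> float:
    """Fractional change in photocurrent density over a stability test."""
    return (j_final - j_start) / j_start

j_pristine = 0.10                          # mA cm^-2, assumed
j_reconstructed = 19 * j_pristine          # reported 19-fold photo-gain
j_final = j_reconstructed * (1 + 0.6315)   # reported 63.15% increase

print(f"photo-gain: {photo_gain(j_reconstructed, j_pristine):.0f}x")
print(f"change over 8100 s: {relative_change(j_final, j_reconstructed):+.2%}")
```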

  2. Article ; Online: MASS: Modality-collaborative semi-supervised segmentation by exploiting cross-modal consistency from unpaired CT and MRI images.

    Chen, Xiaoyu / Zhou, Hong-Yu / Liu, Feng / Guo, Jiansen / Wang, Liansheng / Yu, Yizhou

    Medical image analysis

    2022  Volume 80, Page(s) 102506

    Abstract Training deep segmentation models for medical images often requires a large amount of labeled data. To tackle this issue, semi-supervised segmentation has been employed to produce satisfactory delineation results at affordable labeling cost. However, traditional semi-supervised segmentation methods fail to exploit unpaired multi-modal data, which are widely available in today's clinical routine. In this paper, we address this point by proposing Modality-collAborative Semi-Supervised segmentation (i.e., MASS), which utilizes modality-independent knowledge learned from unpaired CT and MRI scans. To exploit such knowledge, MASS uses cross-modal consistency to regularize deep segmentation models in both the semantic and anatomical spaces, from which it learns intra- and inter-modal correspondences to warp atlas labels for making predictions. To better capture inter-modal correspondence, we propose, from a feature-alignment perspective, a contrastive similarity loss that regularizes the latent space of both modalities in order to learn generalized and robust modality-independent representations. Compared to semi-supervised and multi-modal segmentation counterparts, the proposed MASS brings nearly 6% improvement under extremely limited supervision. [A generic sketch of such a contrastive objective follows this record.]
    MeSH term(s) Deep Learning ; Humans ; Magnetic Resonance Imaging ; Tomography, X-Ray Computed
    Language English
    Publishing date 2022-06-05
    Publishing country Netherlands
    Document type Journal Article
    ZDB-ID 1356436-5
    ISSN (online) 1361-8423 ; 1361-8431
    ISSN (print) 1361-8415
    DOI 10.1016/j.media.2022.102506
    Database MEDical Literature Analysis and Retrieval System OnLINE

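The contrastive similarity loss mentioned in this abstract is not spelled out in the record. Below is a minimal PyTorch sketch of a generic cross-modal InfoNCE-style objective of that flavor, assuming pseudo-paired CT/MRI embeddings (in MASS, the pairing of unpaired scans would come from the learned correspondences). The function name, shapes, and temperature are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(z_ct: torch.Tensor,
                                 z_mri: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss pulling matched CT/MRI embeddings together.

    z_ct, z_mri: (N, D) feature batches; row i of each tensor is assumed
    to describe the same anatomical structure (a pseudo-pair).
    """
    z_ct = F.normalize(z_ct, dim=1)    # unit-length embeddings
    z_mri = F.normalize(z_mri, dim=1)
    logits = z_ct @ z_mri.t() / temperature    # (N, N) cosine similarities
    targets = torch.arange(z_ct.size(0), device=z_ct.device)
    # Symmetric cross-entropy: CT row i should match MRI column i, and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

Minimizing this loss aligns the two modalities' latent spaces, which is the stated purpose of the paper's feature-alignment regularizer; the exact formulation in MASS may differ.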

  3. Article ; Online: nnFormer: Volumetric Medical Image Segmentation via a 3D Transformer.

    Zhou, Hong-Yu / Guo, Jiansen / Zhang, Yinghao / Han, Xiaoguang / Yu, Lequan / Wang, Liansheng / Yu, Yizhou

    IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

    2023  Volume 32, Page(s) 4036–4045

    Abstract Transformer, the model of choice for natural language processing, has drawn scant attention from the medical imaging community. Given the ability to exploit long-term dependencies, transformers promise to help typical convolutional neural networks learn more contextualized visual representations. However, most recently proposed transformer-based segmentation approaches simply treat transformers as assistive modules that help encode global context into convolutional representations. To address this issue, we introduce nnFormer (i.e., not-another transFormer), a 3D transformer for volumetric medical image segmentation. nnFormer not only exploits the combination of interleaved convolution and self-attention operations, but also introduces local and global volume-based self-attention mechanisms to learn volume representations. Moreover, nnFormer uses skip attention to replace the traditional concatenation/summation operations in the skip connections of U-Net-like architectures. Experiments show that nnFormer outperforms previous transformer-based counterparts by large margins on three public datasets. Compared to nnUNet, the most widely recognized convnet-based 3D medical segmentation model, nnFormer produces significantly lower HD95 and is much more computationally efficient. Furthermore, we show that nnFormer and nnUNet are highly complementary to each other in model ensembling. Code and models for nnFormer are available at https://git.io/JSf3i. [A hypothetical sketch of the skip-attention idea follows this record.]
    Language English
    Publishing date 2023-07-19
    Publishing country United States
    Document type Journal Article
    ISSN (online) 1941-0042
    DOI 10.1109/TIP.2023.3293771
    Database MEDical Literature Analysis and Retrieval System OnLINE

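As a rough illustration of the skip-attention idea described above (cross-attention in place of the usual concatenation or summation in a U-Net skip connection), here is a hypothetical PyTorch sketch. The module name, shapes, and head count are assumptions; the authors' actual implementation is at https://git.io/JSf3i.

```python
import torch
import torch.nn as nn

class SkipAttention(nn.Module):
    """Hypothetical attention-based skip connection: decoder tokens query
    the matching encoder tokens instead of being concatenated with them.
    (Names and shapes are assumptions, not nnFormer's actual code.)"""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, dec: torch.Tensor, enc: torch.Tensor) -> torch.Tensor:
        # dec, enc: (B, D*H*W, C) token sequences flattened from 3D feature maps
        fused, _ = self.attn(query=dec, key=enc, value=enc)
        return self.norm(dec + fused)   # residual connection, then normalize

# Usage with assumed sizes: a (2, 64, 8, 8, 8) feature map flattened to 512 tokens.
B, C, D, H, W = 2, 64, 8, 8, 8
dec = torch.randn(B, D * H * W, C)
enc = torch.randn(B, D * H * W, C)
out = SkipAttention(dim=C)(dec, enc)    # -> (2, 512, 64)
```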

  4. Book ; Online: nnFormer

    Zhou, Hong-Yu / Guo, Jiansen / Zhang, Yinghao / Yu, Lequan / Wang, Liansheng / Yu, Yizhou

    Interleaved Transformer for Volumetric Segmentation

    2021  

    Abstract Transformers, the default model of choice in natural language processing, have drawn scant attention from the medical imaging community. Given the ability to exploit long-term dependencies, transformers promise to help typical convolutional neural networks (convnets) overcome their inherent shortcomings of spatial inductive bias. However, most recently proposed transformer-based segmentation approaches simply treat transformers as assistive modules that help encode global context into convolutional representations, without investigating how to optimally combine self-attention (i.e., the core of transformers) with convolution. To address this issue, in this paper we introduce nnFormer (i.e., Not-aNother transFormer), a powerful segmentation model with an interleaved architecture based on an empirical combination of self-attention and convolution. In practice, nnFormer learns volumetric representations from 3D local volumes. Compared to a naive voxel-level self-attention implementation, such volume-based operations reduce the computational complexity by approximately 98% and 99.5% on the Synapse and ACDC datasets, respectively. In comparison to prior-art network configurations, nnFormer achieves substantial improvements over previous transformer-based methods on the two commonly used datasets, Synapse and ACDC. For instance, nnFormer outperforms Swin-UNet by over 7 percent on Synapse. Even when compared to nnUNet, currently the best-performing fully convolutional medical segmentation network, nnFormer still provides slightly better performance on Synapse and ACDC. [A back-of-envelope check of the quoted complexity reduction follows this record.]

    Comment: Codes and models are available at https://github.com/282857341/nnFormer
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2021-09-07
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

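The quoted complexity reduction can be sanity-checked with back-of-envelope arithmetic: global voxel-level self-attention over N tokens costs O(N^2), while attention restricted to local windows of M tokens costs O(N*M), a saving of 1 - M/N. The Python sketch below uses assumed volume and window sizes, not the paper's configuration, yet lands near the reported ~98% figure.

```python
# Back-of-envelope cost comparison (sizes are assumptions, not the paper's):
# global voxel-level self-attention scales as O(N^2) in the token count N,
# while attention restricted to local windows of M tokens scales as O(N * M).

def attention_cost_global(n_tokens: int) -> int:
    return n_tokens ** 2

def attention_cost_windowed(n_tokens: int, window: int) -> int:
    return n_tokens * window

n = 16 * 16 * 16   # tokens in an assumed downsampled 3D feature map (4096)
m = 4 * 4 * 4      # tokens per assumed local window (64)

saving = 1 - attention_cost_windowed(n, m) / attention_cost_global(n)
print(f"cost reduction: {saving:.1%}")   # ~98.4% for these sizes
```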
