LIVIVO - The Search Portal for Life Sciences

Search results

Result 1 - 1 of total 1

Article: Focal cross transformer: multi-view brain tumor segmentation model based on cross window and focal self-attention.

Zongren, Li / Silamu, Wushouer / Shurui, Feng / Guanghui, Yan

Frontiers in Neuroscience

2023, Volume 17, Page(s) 1192867

Abstract: Introduction: Recently, the Transformer model and its variants have achieved great success in computer vision, surpassing the performance of convolutional neural networks (CNNs). The key to this success is the acquisition of short-term and long-term visual dependencies through self-attention mechanisms, which can efficiently learn global and long-range semantic interactions. However, Transformers come with certain challenges: the computational cost of the global self-attention mechanism grows quadratically with the number of image tokens, which hinders the application of Transformers to high-resolution images.
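As a rough accounting of that quadratic cost (the notation and derivation below are mine, not the paper's): for an $H \times W$ feature map with $N = HW$ tokens of dimension $d$, global self-attention costs on the order of
\[
\mathcal{O}(N^2 d) = \mathcal{O}(H^2 W^2 d),
\]
whereas attention restricted to stripes of width $s_w$ along one axis, with half of the heads using horizontal stripes and half using vertical ones, costs roughly
\[
\mathcal{O}\bigl(N \, s_w (H + W) \, d\bigr),
\]
so the per-token attention window shrinks from $HW$ tokens to about $s_w (H + W)/2$ tokens.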
Methods: In view of this, this paper proposes a multi-view brain tumor segmentation model based on cross windows and focal self-attention, a novel mechanism that enlarges the receptive field through parallel cross windows and improves global dependency modeling through local fine-grained and global coarse-grained interactions. First, the receptive field is enlarged by computing self-attention over the horizontal and vertical stripes of the cross window in parallel, achieving strong modeling capability while limiting the computational cost; a sketch of this idea follows below. Second, focal self-attention over local fine-grained and global coarse-grained interactions enables the model to capture short-term and long-term visual dependencies efficiently.
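To make the cross-window idea concrete, here is a minimal PyTorch-style sketch of stripe self-attention: half of the attention heads operate within horizontal stripes of height stripe_width spanning the full feature-map width, the other half within vertical stripes spanning the full height, so each token's receptive field forms a cross. This is a sketch under my own assumptions, not the authors' implementation; the class name CrossStripeAttention and the parameter stripe_width are illustrative, and the focal fine/coarse-grained branch is omitted.

import torch
import torch.nn as nn


class CrossStripeAttention(nn.Module):
    """Half of the heads attend within horizontal stripes of height `stripe_width`
    spanning the full width; the other half attend within vertical stripes
    spanning the full height, so each token's receptive field is a cross."""

    def __init__(self, dim: int, num_heads: int = 4, stripe_width: int = 2):
        super().__init__()
        assert num_heads % 2 == 0, "heads are split between H- and V-stripes"
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.sw = stripe_width
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    @staticmethod
    def _attend(q, k, v, scale):
        # q, k, v: (windows, tokens_per_window, heads, head_dim)
        q, k, v = (t.transpose(1, 2) for t in (q, k, v))  # (win, heads, tok, d)
        attn = (q @ k.transpose(-2, -1)) * scale
        return (attn.softmax(-1) @ v).transpose(1, 2)     # (win, tok, heads, d)

    def forward(self, x, H: int, W: int):
        # x: (B, H*W, C) tokens of an H x W feature map; H and W divisible by sw
        B, N, C = x.shape
        h, d, sw = self.num_heads // 2, self.head_dim, self.sw
        scale = d ** -0.5
        q, k, v = self.qkv(x).reshape(B, H, W, 3, self.num_heads, d).unbind(3)

        # Horizontal stripes (first half of the heads): windows of shape sw x W.
        def to_h(t):
            return t[..., :h, :].reshape(B * H // sw, sw * W, h, d)

        out_h = self._attend(to_h(q), to_h(k), to_h(v), scale).reshape(B, H, W, h, d)

        # Vertical stripes (second half of the heads): windows of shape H x sw.
        def to_v(t):
            return (t[..., h:, :].reshape(B, H, W // sw, sw, h, d)
                    .permute(0, 2, 1, 3, 4, 5).reshape(B * W // sw, H * sw, h, d))

        out_v = self._attend(to_v(q), to_v(k), to_v(v), scale)
        out_v = (out_v.reshape(B, W // sw, H, sw, h, d)
                 .permute(0, 2, 1, 3, 4, 5).reshape(B, H, W, h, d))

        # Recombine the two head groups and project back to the model dimension.
        out = torch.cat([out_h, out_v], dim=3).reshape(B, N, C)
        return self.proj(out)


# Smoke test on a 16x16 feature map with 32 channels.
if __name__ == "__main__":
    attn = CrossStripeAttention(dim=32, num_heads=4, stripe_width=2)
    x = torch.randn(2, 16 * 16, 32)
    print(attn(x, 16, 16).shape)  # torch.Size([2, 256, 32])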
Results: Finally, the model achieves the following performance on the BraTS2021 validation set: Dice similarity scores of 87.28%, 87.35%, and 93.28%, and 95% Hausdorff distances of 4.58 mm, 5.26 mm, and 3.78 mm for the enhancing tumor, tumor core, and whole tumor, respectively.
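For reference, these are the standard BraTS evaluation metrics (the definitions below are the conventional ones, not quoted from the paper): the Dice score measures volumetric overlap between the predicted segmentation $P$ and the ground truth $G$ (higher is better), and $\mathrm{HD}_{95}$ is the 95th percentile of the boundary-to-boundary distances between the two segmentations (lower is better):
\[
\mathrm{Dice}(P, G) = \frac{2\,|P \cap G|}{|P| + |G|}, \qquad
\mathrm{HD}_{95}(P, G) = \operatorname{perc}_{95}\Bigl(\bigl\{\min_{g \in \partial G} \lVert p - g \rVert\bigr\}_{p \in \partial P} \cup \bigl\{\min_{p \in \partial P} \lVert g - p \rVert\bigr\}_{g \in \partial G}\Bigr).
\]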
Discussion: In summary, the model proposed in this paper has achieved excellent performance while limiting the computational cost.
Language English
Publishing date 2023-05-12
Publishing country Switzerland
Document type Journal Article
ZDB-ID 2411902-7
ISSN (print) 1662-4548
ISSN (online) 1662-453X
DOI 10.3389/fnins.2023.1192867
Database MEDLINE (MEDical Literature Analysis and Retrieval System OnLINE)
