LIVIVO - The Search Portal for Life Sciences


Search results

Results 1-10 of 71


  1. Book ; Online: Fine-tuned vs. Prompt-tuned Supervised Representations: Which Better Account for Brain Language Representations?

    Sun, Jingyuan / Moens, Marie-Francine

    2023  

    Abstract To decipher the algorithm underlying the human brain's language representation, previous work probed brain responses to language input with pre-trained artificial neural network (ANN) models fine-tuned on NLU tasks. However, full fine-tuning generally updates the entire parametric space and distorts pre-trained features, cognitively inconsistent with the brain's robust multi-task learning ability. Prompt-tuning, in contrast, protects pre-trained weights and learns task-specific embeddings to fit a task. Could prompt-tuning generate representations that better account for the brain's language representations than fine-tuning? If so, what kind of NLU task leads a pre-trained model to better decode the information represented in the human brain? We investigate these questions by comparing prompt-tuned and fine-tuned representations in neural decoding, that is predicting the linguistic stimulus from the brain activities evoked by the stimulus. We find that on none of the 10 NLU tasks, full fine-tuning significantly outperforms prompt-tuning in neural decoding, implicating that a more brain-consistent tuning method yields representations that better correlate with brain data. Moreover, we identify that tasks dealing with fine-grained concept meaning yield representations that better decode brain activation patterns than other tasks, especially the syntactic chunking task. This indicates that our brain encodes more fine-grained concept information than shallow syntactic information when representing languages.

    Comment: IJCAI 2023
    Keywords Computer Science - Artificial Intelligence ; Computer Science - Computation and Language
    Subject code 006
    Publishing date 2023-10-03
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  2. Article ; Online: Causal Factor Disentanglement for Few-Shot Domain Adaptation in Video Prediction.

    Cornille, Nathan / Laenen, Katrien / Sun, Jingyuan / Moens, Marie-Francine

    Entropy (Basel, Switzerland)

    2023  Volume 25, Issue 11

    Abstract An important challenge in machine learning is performing with accuracy when few training samples are available from the target distribution. If a large number of training samples from a related distribution are available, transfer learning can be used to improve the performance. This paper investigates how to do transfer learning more effectively if the source and target distributions are related through a Sparse Mechanism Shift for the application of next-frame prediction. We create Sparse Mechanism Shift-TempoRal Intervened Sequences (SMS-TRIS), a benchmark to evaluate transfer learning for next-frame prediction derived from the TRIS datasets. We then propose to exploit the Sparse Mechanism Shift property of the distribution shift by disentangling the model parameters with regard to the true causal mechanisms underlying the data. We use the Causal Identifiability from TempoRal Intervened Sequences (CITRIS) model to achieve this disentanglement via causal representation learning. We show that encouraging disentanglement with the CITRIS extensions can improve performance, but their effectiveness varies depending on the dataset and backbone used. We find that it is effective only when encouraging disentanglement actually succeeds in increasing disentanglement. We also show that an alternative method designed for domain adaptation does not help, indicating the challenging nature of the SMS-TRIS benchmark.
    Language English
    Publishing date 2023-11-17
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2014734-X
    ISSN (online) 1099-4300
    DOI 10.3390/e25111554
    Database MEDical Literature Analysis and Retrieval System OnLINE


  3. Book ; Online: Decoding Realistic Images from Brain Activity with Contrastive Self-supervision and Latent Diffusion

    Sun, Jingyuan / Li, Mingxiao / Moens, Marie-Francine

    2023  

    Abstract Reconstructing visual stimuli from human brain activities provides a promising opportunity to advance our understanding of the brain's visual system and its connection with computer vision models. Although deep generative models have been employed for this task, the challenge of generating high-quality images with accurate semantics persists due to the intricate underlying representations of brain signals and the limited availability of parallel data. In this paper, we propose a two-phase framework named Contrast and Diffuse (CnD) to decode realistic images from functional magnetic resonance imaging (fMRI) recordings. In the first phase, we acquire representations of fMRI data through self-supervised contrastive learning. In the second phase, the encoded fMRI representations condition the diffusion model to reconstruct visual stimulus through our proposed concept-aware conditioning method. Experimental results show that CnD reconstructs highly plausible images on challenging benchmarks. We also provide a quantitative interpretation of the connection between the latent diffusion model (LDM) components and the human brain's visual system. In summary, we present an effective approach for reconstructing visual stimuli based on human brain activity and offer a novel framework to understand the relationship between the diffusion model and the human brain visual system.

    Comment: 8 pages,5 figures
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2023-09-30
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  4. Book ; Online: Tuning In to Neural Encoding: Linking Human Brain and Artificial Supervised Representations of Language

    Sun, Jingyuan / Zhang, Xiaohan / Moens, Marie-Francine

    2023  

    Abstract To understand the algorithm that supports the human brain's language representation, previous research has attempted to predict neural responses to linguistic stimuli using embeddings generated by artificial neural networks (ANNs), a process known as neural encoding. However, most of these studies have focused on probing neural representations of Germanic languages, such as English, with unsupervised ANNs. In this paper, we propose to bridge the gap between human brain and supervised ANN representations of the Chinese language. Specifically, we investigate how task tuning influences a pretrained Transformer for neural encoding and which tasks lead to the best encoding performances. We generate supervised representations on eight Natural Language Understanding (NLU) tasks using prompt-tuning, a technique that is seldom explored in neural encoding for language. We demonstrate that prompt-tuning yields representations that better predict neural responses to Chinese stimuli than traditional fine-tuning on four tasks. Furthermore, we discover that tasks that require a fine-grained processing of concepts and entities lead to representations that are most predictive of brain activation patterns. Additionally, we reveal that the proportion of tuned parameters highly influences the neural encoding performance of fine-tuned models. Overall, our experimental findings could help us better understand the relationship between supervised artificial and brain language representations.

    Comment: ECAI 2023
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence
    Subject code 401
    Publishing date 2023-10-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  5. Article ; Online: Author Correction: An fMRI Dataset for Concept Representation with Semantic Feature Annotations.

    Wang, Shaonan / Zhang, Yunhao / Zhang, Xiaohan / Sun, Jingyuan / Lin, Nan / Zhang, Jiajun / Zong, Chengqing

    Scientific data

    2023  Volume 10, Issue 1, Page(s) 561

    Language English
    Publishing date 2023-08-23
    Publishing country England
    Document type Published Erratum
    ZDB-ID 2775191-0
    ISSN (online) 2052-4463
    DOI 10.1038/s41597-023-02480-w
    Database MEDical Literature Analysis and Retrieval System OnLINE


  6. Article: Transcriptome and metabolite analyses indicated the underlying molecular responses of Asian ginseng (

    Xia, Jinglin / Liu, Ning / Han, Junyou / Sun, Jingyuan / Xu, Tianyi / Liu, Shouan

    Frontiers in plant science

    2023  Volume 14, Page(s) 1182685

    Abstract Panax ginseng
    Language English
    Publishing date 2023-07-10
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2613694-6
    ISSN 1664-462X
    DOI 10.3389/fpls.2023.1182685
    Database MEDical Literature Analysis and Retrieval System OnLINE


  7. Article ; Online: Single-cell Transcriptomic Analysis Reveals an Immunosuppressive Network Between POSTN CAFs and ACKR1 ECs in TKI-resistant Lung Cancer.

    Wang, Zhiyi / Yan, Ning / Sheng, Hailong / Xiao, Yazhi / Sun, Jingyuan / Cao, Chuanhui

    Cancer genomics & proteomics

    2023  Volume 21, Issue 1, Page(s) 65–78

    Abstract Background/aim: Tyrosine kinase inhibitor (TKI) therapy, a principal treatment for advanced non-small cell lung cancer (NSCLC), frequently encounters the development of drug resistance. The tumor microenvironment (TME) plays a critical role in the progression of NSCLC, yet the relationship between endothelial cells (ECs) and cancer-associated fibroblasts (CAFs) subpopulations in TKI treatment resistance remains largely unexplored.
    Materials and methods: The BioProject database PRJNA591860 project was used to analyze scRNA-seq data including 49 advanced-stage NSCLC samples across three different time points: pre-targeted therapy (naïve), post-partial response (PR) to targeted therapy, and post-progressive disease (PD) stage. The data involved clustering stromal cells into multiple CAFs and ECs subpopulations. The abundance changes and functions of each cluster during TKI treatment were investigated by KEGG and GO analysis. Additionally, we identified specific transcription factors and metabolic pathways via DoRothEA and scMetabolism. Moreover, cell-cell communications between PD and PR stages were compared by CellChat.
    Results: ECs and CAFs were clustered and annotated using 49 scRNA-seq samples. We identified seven ECs subpopulations, with OIT3 ECs showing enrichment in the PR phase with a drug-resistance phenotype, and ACKR1 ECs being prevalent in the PD phase with enhanced cell adhesion. Similarly, CAFs were clustered into 7 subpopulations. PLA2G2A CAFs were predominant in PR, whereas POSTN CAFs were prevalent in PD, characterized by an immunomodulatory phenotype and increased collagen secretion. CellChat analysis showed that ACKR1 ECs strongly interacted with macrophage through the CD39 pathway and POSTN CAFs secreted Tenascin-C (TNC) to promote the progression of epithelial cells, primarily malignant ones, in PD.
    Conclusion: This study reveals that POSTN CAFs and ACKR1 ECs are associated with resistance to TKI treatment, based on single-cell sequencing.
    MeSH term(s) Humans ; Cancer-Associated Fibroblasts/metabolism ; Carcinoma, Non-Small-Cell Lung/drug therapy ; Carcinoma, Non-Small-Cell Lung/genetics ; Carcinoma, Non-Small-Cell Lung/metabolism ; Cell Adhesion Molecules/metabolism ; Endothelial Cells/metabolism ; Endothelial Cells/pathology ; Gene Expression Profiling ; Lung Neoplasms/drug therapy ; Lung Neoplasms/genetics ; Lung Neoplasms/pathology ; Tumor Microenvironment/genetics
    Chemical Substances Cell Adhesion Molecules ; POSTN protein, human ; ACKR1 protein, human
    Language English
    Publishing date 2023-12-27
    Publishing country Greece
    Document type Journal Article
    ZDB-ID 2144517-5
    ISSN (online) 1790-6245
    ISSN 1109-6535
    DOI 10.21873/cgp.20430
    Database MEDical Literature Analysis and Retrieval System OnLINE


  8. Article ; Online: Neural Encoding and Decoding With Distributed Sentence Representations.

    Sun, Jingyuan / Wang, Shaonan / Zhang, Jiajun / Zong, Chengqing

    IEEE transactions on neural networks and learning systems

    2021  Volume 32, Issue 2, Page(s) 589–603

    Abstract Building computational models to account for the cortical representation of language plays an important role in understanding the human linguistic system. Recent progress in distributed semantic models (DSMs), especially transformer-based methods, has driven advances in many language understanding tasks, making DSM a promising methodology to probe brain language processing. DSMs have been shown to reliably explain cortical responses to word stimuli. However, characterizing the brain activities for sentence processing is much less exhaustively explored with DSMs, especially the deep neural network-based methods. What is the relationship between cortical sentence representations against DSMs? What linguistic features that a DSM catches better explain its correlation with the brain activities aroused by sentence stimuli? Could distributed sentence representations help to reveal the semantic selectivity of different brain areas? We address these questions through the lens of neural encoding and decoding, fueled by the latest developments in natural language representation learning. We begin by evaluating the ability of a wide range of 12 DSMs to predict and decipher the functional magnetic resonance imaging (fMRI) images from humans reading sentences. Most models deliver high accuracy in the left middle temporal gyrus (LMTG) and left occipital complex (LOC). Notably, encoders trained with transformer-based DSMs consistently outperform other unsupervised structured models and all the unstructured baselines. With probing and ablation tasks, we further find that differences in the performance of the DSMs in modeling brain activities can be at least partially explained by the granularity of their semantic representations. We also illustrate the DSM's selectivity for concept categories and show that the topics are represented by spatially overlapping and distributed cortical patterns. Our results corroborate and extend previous findings in understanding the relation between DSMs and neural activation patterns and contribute to building solid brain-machine interfaces with deep neural network representations.
    MeSH term(s) Algorithms ; Brain/diagnostic imaging ; Brain-Computer Interfaces ; Cerebral Cortex/anatomy & histology ; Cerebral Cortex/physiology ; Computer Simulation ; Deep Learning ; Humans ; Image Processing, Computer-Assisted ; Language ; Linguistics ; Magnetic Resonance Imaging ; Natural Language Processing ; Neural Networks, Computer ; Occipital Lobe/diagnostic imaging ; Reading ; Reproducibility of Results ; Semantics ; Temporal Lobe/diagnostic imaging
    Language English
    Publishing date 2021-02-04
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2020.3027595
    Database MEDical Literature Analysis and Retrieval System OnLINE


  9. Article: Molecular Dynamics Simulation of the Nascent Polyethylene Crystallization in Confined Space: Nucleation and Lamella Orientation

    Chen, Siyu / Chen, Wei / Ren, Ying / Sun, Jingyuan / Wang, Jingdai / Yang, Yongrong

    Macromolecules. 2022 Aug. 15, v. 55, no. 17

    2022  

    Abstract Different nascent structures during polyethylene growth can impact the final polymer properties. So far, the evolution of the aggregation structure of nascent polyethylene is still unknown. We have developed a nascent polyethylene model and investigated the effect of confined space on the nucleation and crystallization process. The in situ polymerization characteristics were modeled through the chain-end fixation, temperature gradient, and sidewall setting. We observed a compromise in competition between high undercooling and heterogeneous nucleation in reducing the nucleation free energy, reflecting two nucleation mechanisms for different sidewall settings. Moreover, the contact of sidewall and lamella growth front with different directions had a diverse effect on the development of lamella. The contact of the side face can redress the lamella tilt angle and make its orientation along the Z-axis, while contact of the thickness face is proved to inhibit the increase of the stem length and reduce the thickness of the lamella. The results shed light on the crystallization process of nascent polyethylene and help to explain the role of the polyhedral oligomeric silsesquioxane modification at a microscopic level.
    Keywords Gibbs free energy ; crystallization ; evolution ; face ; models ; molecular dynamics ; polyethylene ; polymerization ; silsesquioxanes ; temperature
    Language English
    Dates of publication 2022-08-15
    Size p. 7368-7379.
    Publishing place American Chemical Society
    Document type Article
    ZDB-ID 1491942-4
    ISSN (online) 1520-5835
    ISSN 0024-9297
    DOI 10.1021/acs.macromol.2c01098
    Database NAL-Catalogue (AGRICOLA)


  10. Article: Structural Design and Performance of a Jet-Impinging Type Microbubble Generator

    Shuai, Yun / Wang, Xinyan / Huang, Zhengliang / Yang, Yao / Sun, Jingyuan / Wang, Jingdai / Yang, Yongrong

    Industrial & engineering chemistry process design and development. 2022 Mar. 18, v. 61, no. 12

    2022  

    Abstract A jet-impinging type microbubble generator is designed, including a liquid nozzle and a gas nozzle with the disc-baffle closely arranged along the same axis. When the disc-baffle diameter is greater than the jet width at the gas nozzle position, the impingement zone and radial wall-jet region are formed, thereby promoting the generation and dispersion of microbubbles. As the dissipation rate in the impingement zone increases, the mean bubble diameter decreases, and the number fraction of microbubbles increases. The bubble radial dispersion width is dependent on the radial wall-jet velocity and the time for the bubble to follow the radial wall-jet. Reducing the surface tension is beneficial to the generation of microbubbles but has little effect on the state or width of the microbubble dispersion. Empirical correlations for predicting the d₃₂ and dispersion width are proposed, and the relative error is less than 20%. The results show that the jet-impinging type microbubble generator has better energy efficiency than most bubble generating systems reported in the literature.
    Keywords energy efficiency ; liquids ; microbubbles ; process design ; surface tension
    Language English
    Dates of publication 2022-03-18
    Size p. 4445-4459.
    Publishing place American Chemical Society
    Document type Article
    ZDB-ID 1484436-9
    ISSN (online) 1520-5045
    ISSN 0888-5885
    DOI 10.1021/acs.iecr.1c04499
    Database NAL-Catalogue (AGRICOLA)

