LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-10 of 145

  1. Article ; Online: β-Hydroxy-β-methylbutyrate (HMB) leads to phospholipase D2 (PLD2) activation and alters circadian rhythms in myotubes.

    Cohen-Or, Meytal / Chapnik, Nava / Froy, Oren

    Food & function

    2024  Volume 15, Issue 8, Page(s) 4389–4398

    Abstract β-Hydroxy-β-methylbutyrate (HMB) is a breakdown product of leucine that promotes muscle growth. Although some studies indicate that HMB activates AKT and mTOR, others show activation of the downstream effectors P70S6K and S6 independent of mTOR. Our aim was to study the metabolic effect of HMB around the circadian clock in order to determine more accurately the signaling pathway involved. C2C12 myotubes were treated with HMB, and clock, metabolic and myogenic markers were measured around the clock. HMB-treated C2C12 myotubes showed no activation of AKT and mTOR, but did show activation of P70S6K and S6. Activation of P70S6K and S6 was also found when myotubes were treated with HMB combined with metformin, an indirect mTOR inhibitor, or rapamycin, a direct mTOR inhibitor. The activation of P70S6K and S6 independent of AKT and mTOR was accompanied by increased activation of phospholipase D2 (PLD2). In addition, HMB led to high-amplitude and phase-advanced circadian rhythms. In conclusion, HMB induces myogenesis in C2C12 myotubes by activating P70S6K and S6.
    MeSH term(s) Valerates/pharmacology ; Animals ; Muscle Fibers, Skeletal/drug effects ; Muscle Fibers, Skeletal/metabolism ; Mice ; Phospholipase D/metabolism ; Circadian Rhythm/drug effects ; Cell Line ; Ribosomal Protein S6 Kinases, 70-kDa/metabolism ; TOR Serine-Threonine Kinases/metabolism ; Signal Transduction/drug effects ; Proto-Oncogene Proteins c-akt/metabolism ; Muscle Development/drug effects
    Chemical Substances Valerates ; beta-hydroxyisovaleric acid (3F752311CD) ; Phospholipase D (EC 3.1.4.4) ; phospholipase D2 (EC 3.1.4.-) ; Ribosomal Protein S6 Kinases, 70-kDa (EC 2.7.11.1) ; TOR Serine-Threonine Kinases (EC 2.7.11.1) ; Proto-Oncogene Proteins c-akt (EC 2.7.11.1)
    Language English
    Publishing date 2024-04-22
    Publishing country England
    Document type Journal Article
    ZDB-ID 2612033-1
    ISSN (online) 2042-650X
    ISSN (print) 2042-6496
    DOI 10.1039/d3fo04174c
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Article ; Online: Using deep neural networks to disentangle visual and semantic information in human perception and memory.

    Shoham, Adva / Grosbard, Idan Daniel / Patashnik, Or / Cohen-Or, Daniel / Yovel, Galit

    Nature human behaviour

    2024  Volume 8, Issue 4, Page(s) 702–717

    Abstract Mental representations of familiar categories are composed of visual and semantic information. Disentangling the contributions of visual and semantic information in humans is challenging because they are intermixed in mental representations. Deep neural networks trained on images, on text, or on paired images and text now enable us to disentangle human mental representations into their visual, visual-semantic and semantic components. Here we used these deep neural networks to uncover the content of human mental representations of familiar faces and objects when they are viewed or recalled from memory. The results show a larger visual than semantic contribution when images are viewed, and a reversed pattern when they are recalled. We further reveal a previously unknown unique contribution of an integrated visual-semantic representation in both perception and memory. We propose a new framework in which visual and semantic information contribute independently and interactively to mental representations in perception and memory.
    (An illustrative model-comparison sketch follows this record.)
    MeSH term(s) Humans ; Semantics ; Female ; Male ; Mental Recall/physiology ; Neural Networks, Computer ; Visual Perception/physiology ; Adult ; Young Adult ; Recognition, Psychology/physiology ; Facial Recognition/physiology ; Memory/physiology
    Language English
    Publishing date 2024-02-08
    Publishing country England
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ISSN (online) 2397-3374
    DOI 10.1038/s41562-024-01816-9
    Database MEDical Literature Analysis and Retrieval System OnLINE
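
    The decomposition approach summarized above lends itself to a short illustration. The following sketch, a toy under stated assumptions rather than the authors' code, correlates human dissimilarity judgments with dissimilarities computed from three kinds of model embeddings; every array here is a random placeholder standing in for real ratings and features.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(embeddings):
        # representational dissimilarity: pairwise cosine distances between items
        return pdist(embeddings, metric="cosine")

    rng = np.random.default_rng(0)
    n_items = 40
    human_rdm = rng.random(n_items * (n_items - 1) // 2)  # placeholder ratings
    visual = rng.normal(size=(n_items, 512))    # image-trained model features
    semantic = rng.normal(size=(n_items, 300))  # text-trained model features
    vis_sem = rng.normal(size=(n_items, 512))   # joint image-text features

    for name, emb in [("visual", visual), ("semantic", semantic),
                      ("visual-semantic", vis_sem)]:
        rho, _ = spearmanr(human_rdm, rdm(emb))
        print(f"{name:16s} vs. human judgments: rho = {rho:+.3f}")

    In practice the placeholders would be replaced with actual human similarity ratings and with features extracted from pretrained vision, language, and vision-language models, which is the comparison the paper performs.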

  3. Book ; Online: Delta Denoising Score

    Hertz, Amir / Aberman, Kfir / Cohen-Or, Daniel

    2023  

    Abstract We introduce Delta Denoising Score (DDS), a novel scoring function for text-based image editing that guides minimal modifications of an input image towards the content described in a target prompt. DDS leverages the rich generative prior of text-to-image diffusion models and can be used as a loss term in an optimization problem to steer an image towards a desired direction dictated by a text. DDS utilizes the Score Distillation Sampling (SDS) mechanism for the purpose of image editing. We show that using only SDS often produces non-detailed and blurry outputs due to noisy gradients. To address this issue, DDS uses a prompt that matches the input image to identify and remove undesired erroneous directions of SDS. Our key premise is that SDS should be zero when calculated on pairs of matched prompts and images, meaning that if the score is non-zero, its gradients can be attributed to the erroneous component of SDS. Our analysis demonstrates the competence of DDS for text-based image-to-image translation. We further show that DDS can be used to train an effective zero-shot image translation model. Experimental results indicate that DDS outperforms existing methods in terms of stability and quality, highlighting its potential for real-world applications in text-based image editing.
    (An illustrative sketch of the DDS update follows this record.)

    Comment: Project page: https://delta-denoising-score.github.io/
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Graphics ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-04-14
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
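
    Because the abstract states the rule precisely (the score on a matched prompt-image pair should be zero, so any non-zero residue is the erroneous SDS component), a compact sketch is possible. The code below is one illustrative reading of that rule, not the authors' implementation; eps_model, the prompt embeddings, and the simple noise schedule are hypothetical stand-ins.

    import torch

    def add_noise(x, noise, t, num_steps=1000):
        # standard DDPM forward process with a simple linear alpha-bar schedule
        alpha_bar = 1.0 - t.float() / num_steps
        return alpha_bar.sqrt() * x + (1.0 - alpha_bar).sqrt() * noise

    @torch.no_grad()
    def dds_step(image, src_prompt_emb, tgt_prompt_emb, eps_model, lr=0.1):
        t = torch.randint(1, 1000, (1,))
        noise = torch.randn_like(image)
        noisy = add_noise(image, noise, t)
        eps_tgt = eps_model(noisy, t, tgt_prompt_emb)  # SDS direction, target prompt
        eps_src = eps_model(noisy, t, src_prompt_emb)  # SDS direction, matched prompt
        # the matched-prompt branch carries the same noisy, erroneous component
        # as the target branch, so the difference isolates the edit direction
        return image - lr * (eps_tgt - eps_src)

    The design point is the subtraction: both branches share the noisy SDS gradient, so subtracting the matched-prompt term leaves only the "delta" that moves the image toward the target text.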

  4. Book ; Online: Facial Reenactment Through a Personalized Generator

    Elazary, Ariel / Nitzan, Yotam / Cohen-Or, Daniel

    2023  

    Abstract In recent years, the role of image generative models in facial reenactment has been steadily increasing. Such models are usually subject-agnostic and trained on domain-wide datasets. The appearance of the reenacted individual is learned from a single image, and hence the full breadth of the individual's appearance is not captured, leading these methods to resort to unfaithful hallucination. Thanks to recent advancements, it is now possible to train a personalized generative model tailored specifically to a given individual. In this paper, we propose a novel method for facial reenactment using a personalized generator. We train the generator using frames from a short, yet varied, self-scan video captured using a simple commodity camera. Images synthesized by the personalized generator are guaranteed to preserve identity. The premise of our work is that the task of reenactment is thus reduced to accurately mimicking head poses and expressions. To this end, we locate the desired frames in the latent space of the personalized generator using carefully designed latent optimization. Through extensive evaluation, we demonstrate state-of-the-art performance for facial reenactment. Furthermore, we show that since our reenactment takes place in a semantic latent space, it can be semantically edited and stylized in post-processing.
    (An illustrative latent-optimization sketch follows this record.)

    Comment: Project webpage: https://arielazary.github.io/PGR/
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Graphics ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-07-12
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
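
    The loop implied by the abstract, locating latent codes that mimic the driving pose and expression, can be sketched as a plain latent optimization. Here G (the personalized generator), pose_net (a pose/expression estimator) and the hyperparameters are assumptions for illustration; the paper's carefully designed latent optimization is more involved.

    import torch
    import torch.nn.functional as F

    def reenact_frame(G, pose_net, w_init, driving_frame, steps=200, lr=0.01):
        # search the personalized generator's latent space for a code whose
        # rendering matches the driving frame's pose and expression
        w = w_init.clone().requires_grad_(True)
        opt = torch.optim.Adam([w], lr=lr)
        target = pose_net(driving_frame).detach()  # pose/expression descriptor
        for _ in range(steps):
            opt.zero_grad()
            frame = G(w)            # identity preserved by the generator itself
            loss = F.mse_loss(pose_net(frame), target)
            loss.backward()
            opt.step()
        return G(w).detach()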

  5. Article ; Online: Sticky Links: Encoding Quantitative Data of Graph Edges.

    Lu, Min / Zeng, Xiangfang / Lanir, Joel / Sun, Xiaoqin / Li, Guozheng / Cohen-Or, Daniel / Huang, Hui

    IEEE transactions on visualization and computer graphics

    2024  Volume PP

    Abstract Visually encoding quantitative information associated with graph links is an important problem in graph visualization. A conventional approach is to vary the thickness of lines to encode the strength of connections in node-link diagrams. In this paper, we present Sticky Links, a novel visual encoding method that draws graph links with stickiness. Taking the metaphor of links joined by glue, sticky links represent connection strength using spiky shapes, ranging from two broken spikes for weak connections to connected lines for strong connections. We conducted a controlled user study to compare the efficiency and aesthetic appeal of stickiness with conventional thickness encoding. Our results show that stickiness enables more effective and expressive quantitative encoding while maintaining the perception of node connectivity. Participants also found sticky links to be more aesthetic and less visually cluttering than conventional thickness encoding. Overall, our findings suggest that sticky links offer a promising alternative to conventional methods for encoding quantitative information in graphs.
    (A toy rendering of this encoding follows this record.)
    Language English
    Publishing date 2024-04-22
    Publishing country United States
    Document type Journal Article
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2024.3388562
    Database MEDical Literature Analysis and Retrieval System OnLINE
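
    The stickiness encoding is simple enough to mock up. The toy below is our own reading of the metaphor, not the paper's renderer: each link is drawn as two spikes whose gap closes, and whose stroke thickens, as connection strength grows.

    import numpy as np
    import matplotlib.pyplot as plt

    def sticky_link(ax, p, q, strength):
        # strength in [0, 1]: 0 leaves two broken spikes, 1 closes them
        # into a continuous line, echoing the glue metaphor
        p, q = np.asarray(p, float), np.asarray(q, float)
        gap = 1.0 - strength
        tip_a = p + (q - p) * (0.5 - gap / 2)   # spike reaching out from p
        tip_b = p + (q - p) * (0.5 + gap / 2)   # spike reaching out from q
        width = 0.5 + 3.0 * strength            # thicker when stronger
        ax.plot(*zip(p, tip_a), lw=width, color="tab:blue", solid_capstyle="round")
        ax.plot(*zip(q, tip_b), lw=width, color="tab:blue", solid_capstyle="round")

    fig, ax = plt.subplots()
    for i, s in enumerate([0.2, 0.5, 0.8, 1.0]):
        sticky_link(ax, (0.0, i), (4.0, i), s)
        ax.text(4.2, i, f"strength {s}")
    ax.set_xlim(-0.5, 6.0)
    ax.set_axis_off()
    plt.show()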

  6. Article ; Online: Rhythm is a Dancer: Music-Driven Motion Synthesis With Global Structure.

    Aristidou, Andreas / Yiannakidis, Anastasios / Aberman, Kfir / Cohen-Or, Daniel / Shamir, Ariel / Chrysanthou, Yiorgos

    IEEE transactions on visualization and computer graphics

    2023  Volume 29, Issue 8, Page(s) 3519–3534

    Abstract Synthesizing human motion with a global structure, such as a choreography, is a challenging task. Existing methods tend to concentrate on local smooth pose transitions and neglect the global context or the theme of the motion. In this work, we present a music-driven motion synthesis framework that generates long-term sequences of human motions which are synchronized with the input beats and jointly form a global structure that respects a specific dance genre. In addition, our framework enables generation of diverse motions that are controlled by the content of the music, and not only by the beat. Our music-driven dance synthesis framework is a hierarchical system that consists of three levels: pose, motif, and choreography. The pose level consists of an LSTM component that generates temporally coherent sequences of poses. The motif level guides sets of consecutive poses to form a movement that belongs to a specific distribution, using a novel motion perceptual loss. The choreography level selects the order of the performed movements and drives the system to follow the global structure of a dance genre. Our results demonstrate the effectiveness of our music-driven framework in generating natural and consistent movements for various dance types, with control over the content of the synthesized motions, while respecting the overall structure of the dance.
    MeSH term(s) Humans ; Music ; Auditory Perception ; Computer Graphics ; Movement ; Dancing
    Language English
    Publishing date 2023-06-29
    Publishing country United States
    Document type Journal Article
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2022.3163676
    Database MEDical Literature Analysis and Retrieval System OnLINE

  7. Book ; Online: Generating Non-Stationary Textures using Self-Rectification

    Zhou, Yang / Xiao, Rongjun / Lischinski, Dani / Cohen-Or, Daniel / Huang, Hui

    2024  

    Abstract This paper addresses the challenge of example-based non-stationary texture synthesis. We introduce a novel two-step approach wherein users first modify a reference texture using standard image editing tools, yielding an initial rough target for the synthesis. Subsequently, our proposed method, termed "self-rectification", automatically refines this target into a coherent, seamless texture, while faithfully preserving the distinct visual characteristics of the reference exemplar. Our method leverages a pre-trained diffusion network, and uses self-attention mechanisms, to gradually align the synthesized texture with the reference, ensuring the retention of the structures in the provided target. Through experimental validation, our approach exhibits exceptional proficiency in handling non-stationary textures, demonstrating significant advancements in texture synthesis when compared to existing state-of-the-art techniques. Code is available at https://github.com/xiaorongjun000/Self-Rectification
    (A structural sketch of this two-step loop follows this record.)

    Comment: Project page: https://github.com/xiaorongjun000/Self-Rectification
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Graphics ; Computer Science - Machine Learning
    Subject code 004
    Publishing date 2024-01-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
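
    Structurally, the two-step procedure in the abstract can be written down in a few lines. Everything below is a hypothetical interface sketched for illustration (the diffusion object and all of its methods are stand-ins); the released code at the project link above is the authoritative implementation.

    def self_rectify(rough_target, reference, diffusion, num_steps=50):
        # step 1: the user has already edited the reference into a rough
        # target; invert it into the diffusion model's latent trajectory
        latents = diffusion.invert(rough_target, num_steps)
        # record the reference texture's self-attention keys/values once
        ref_kv = diffusion.record_self_attention(reference)
        # step 2: denoise while injecting the reference K/V, so queries from
        # the target attend to exemplar patches and inherit their structure
        for t in reversed(range(num_steps)):
            latents = diffusion.denoise_step(latents, t, attn_kv=ref_kv)
        return diffusion.decode(latents)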

  8. Book ; Online: A Neural Space-Time Representation for Text-to-Image Personalization

    Alaluf, Yuval / Richardson, Elad / Metzer, Gal / Cohen-Or, Daniel

    2023  

    Abstract A key aspect of text-to-image personalization methods is the manner in which the target concept is represented within the generative process. This choice greatly affects the visual fidelity, downstream editability, and disk space needed to store the learned concept. In this paper, we explore a new text-conditioning space that is dependent on both the denoising process timestep (time) and the denoising U-Net layers (space) and showcase its compelling properties. A single concept in the space-time representation is composed of hundreds of vectors, one for each combination of time and space, making this space challenging to optimize directly. Instead, we propose to implicitly represent a concept in this space by optimizing a small neural mapper that receives the current time and space parameters and outputs the matching token embedding. In doing so, the entire personalized concept is represented by the parameters of the learned mapper, resulting in a compact, yet expressive, representation. Similarly to other personalization methods, the output of our neural mapper resides in the input space of the text encoder. We observe that one can significantly improve the convergence and visual fidelity of the concept by introducing a textual bypass, where our neural mapper additionally outputs a residual that is added to the output of the text encoder. Finally, we show how one can impose an importance-based ordering over our implicit representation, providing users control over the reconstruction and editability of the learned concept using a single trained model. We demonstrate the effectiveness of our approach over a range of concepts and prompts, showing our method's ability to generate high-quality and controllable compositions without fine-tuning any parameters of the generative model itself.
    (A minimal sketch of such a mapper follows this record.)

    Comment: Project page available at https://neuraltextualinversion.github.io/NeTI/
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2023-05-24
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
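
    The neural mapper described above has a small, well-defined signature: it maps a denoising timestep and a U-Net layer index to a token embedding plus a textual-bypass residual. The sketch below is a minimal guess at such a module; the dimensions and architecture are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class SpaceTimeMapper(nn.Module):
        def __init__(self, num_layers=16, token_dim=768, hidden=128):
            super().__init__()
            self.layer_emb = nn.Embedding(num_layers, hidden)
            self.net = nn.Sequential(
                nn.Linear(hidden + 1, hidden), nn.SiLU(),
                nn.Linear(hidden, 2 * token_dim),
            )

        def forward(self, t, layer_idx):
            # t: float timestep tensor of shape (B,); layer_idx: long tensor (B,)
            h = torch.cat([self.layer_emb(layer_idx),
                           t.float().view(-1, 1)], dim=-1)
            token, bypass = self.net(h).chunk(2, dim=-1)
            # token feeds the text encoder's input space; bypass is the
            # residual added to the text encoder's output
            return token, bypass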

  9. Book ; Online: Concept Decomposition for Visual Exploration and Inspiration

    Vinker, Yael / Voynov, Andrey / Cohen-Or, Daniel / Shamir, Ariel

    2023  

    Abstract A creative idea is often born from transforming, combining, and modifying ideas from existing visual examples capturing various concepts. However, one cannot simply copy the concept as a whole, and inspiration is achieved by examining certain aspects of the concept. Hence, it is often necessary to separate a concept into different aspects to provide new perspectives. In this paper, we propose a method to decompose a visual concept, represented as a set of images, into different visual aspects encoded in a hierarchical tree structure. We utilize large vision-language models and their rich latent space for concept decomposition and generation. Each node in the tree represents a sub-concept using a learned vector embedding injected into the latent space of a pretrained text-to-image model. We use a set of regularizations to guide the optimization of the embedding vectors encoded in the nodes to follow the hierarchical structure of the tree. Our method allows the user to explore and discover new concepts derived from the original one. The tree provides the possibility of endless visual sampling at each node, allowing the user to explore the hidden sub-concepts of the object of interest. The learned aspects in each node can be combined within and across trees to create new visual ideas, and can be used in natural language sentences to apply such aspects to new designs.

    Comment: https://inspirationtree.github.io/inspirationtree/
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2023-05-29
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  10. Book ; Online: P+: Extended Textual Conditioning in Text-to-Image Generation

    Voynov, Andrey / Chu, Qinghao / Cohen-Or, Daniel / Aberman, Kfir

    2023  

    Abstract We introduce an Extended Textual Conditioning space in text-to-image models, referred to as $P+$. This space consists of multiple textual conditions, derived from per-layer prompts, each corresponding to a layer of the denoising U-net of the diffusion model. We show that the extended space provides greater disentangling and control over image synthesis. We further introduce Extended Textual Inversion (XTI), where the images are inverted into $P+$, and represented by per-layer tokens. We show that XTI is more expressive and precise, and converges faster than the original Textual Inversion (TI) space. The extended inversion method does not involve any noticeable trade-off between reconstruction and editability and induces more regular inversions. We conduct a series of extensive experiments to analyze and understand the properties of the new space, and to showcase the effectiveness of our method for personalizing text-to-image models. Furthermore, we utilize the unique properties of this space to achieve previously unattainable results in object-style mixing using text-to-image models. Project page: https://prompt-plus.github.io
    (An illustrative per-layer conditioning sketch follows this record.)
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Computation and Language ; Computer Science - Graphics ; Computer Science - Machine Learning
    Subject code 004
    Publishing date 2023-03-16
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
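
    The P+ construction, one textual condition per denoising U-Net layer, can be indicated in a few lines. The sketch below assumes hypothetical unet_layers that accept a context argument; it shows the shape of Extended Textual Inversion, not the authors' code.

    import torch
    import torch.nn as nn

    num_layers, token_dim = 16, 768
    # the P+ space: one learnable token per cross-attention layer, instead of
    # a single token shared across the whole U-Net (plain Textual Inversion)
    per_layer_tokens = nn.ParameterList(
        [nn.Parameter(torch.randn(1, token_dim) * 0.01)
         for _ in range(num_layers)]
    )

    def conditioned_forward(unet_layers, latents, t, base_prompt_emb):
        # each layer attends to the base prompt plus its own inverted token,
        # so different layers can carry different aspects of the concept
        for layer, tok in zip(unet_layers, per_layer_tokens):
            context = torch.cat([base_prompt_emb, tok.unsqueeze(0)], dim=1)
            latents = layer(latents, t, context=context)
        return latents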
