LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-9 of 9

  1. Article ; Online: Time Since Dose and Dietary Vitamin A Intake Affect Tracer Mixing in the 13C-Retinol Isotope Dilution Test in Male Rats.

    Sheftel, Jesse / Smith, Jordan B / Tanumihardjo, Sherry A

    The Journal of Nutrition

    2022  Volume 152, Issue 6, Page(s) 1582–1591

    Abstract Background: Retinol isotope dilution (RID) estimates total liver vitamin A reserves (TLRs), the gold-standard vitamin A (VA) biomarker. RID equation assumptions are based on limited data.
    Objectives: We measured the impact of tracer choice, mixing period, and VA intake on tracer mixing [ratio of tracer enrichment in serum to that in liver stores (S)] in VA-deficient, -adequate, and hypervitaminotic rats.
    Methods: Study 1 was a 3 × 2 × 3 design (18 groups, n = 5/group). Male Sprague-Dawley rats (21 d old) received 50, 100, or 3500 nmol VA/d for 21 d, were administered 52 nmol 13C2- or 13C10-retinyl acetate orally, and killed 5, 10, or 15 d later. Unlabeled VA (50 nmol/d) was given on days 11-14. Study 2 used 100 nmol VA/d for 21 d with 3 groups (n = 6-7): 52 nmol 13C2- or 13C10-retinyl acetate and 100 nmol VA/d throughout 14-d mixing, or 13C2-retinyl acetate without VA. Repeated-measures, 1-factor, and 3-factor ANOVAs were used for analysis.
    Results: Mean ± SD TLRs (μmol/g liver) reflected intake: 0.11 ± 0.04 (50 nmol VA/d), 0.16 ± 0.04 (100 nmol VA/d), and 5.07 ± 1.58 (3500 nmol VA/d) in Study 1 and 0.24 ± 0.08 (100 nmol VA/d) in Study 2. In Study 1, mean ± SD S was 1.65 ± 0.26 (5 d), 1.16 ± 0.09 (10 d), and 0.92 ± 0.08 (15 d). The interactions tracer*VA intake and time*VA intake were significant between days 10 and 15 (P < 0.05). In Study 2, mean ± SD S was 1.07 ± 0.02 without VA during mixing, and 0.81 ± 0.04 (13C2) and 0.79 ± 0.03 (13C10) with VA intake throughout. Estimated:measured TLRs varied by VA intake and time in Study 1 but not between groups in Study 2.
    Conclusions: The 13C-content effect on RID through S is inconsistent. S is highly variable at 5 d, contraindicating early-time point RID. VA intake effects on S vary with timing and quantity. Assuming S = 0.8 at 14 d with consistent VA intake in human studies is likely appropriate.
    MeSH term(s) Animals ; Carbon Isotopes ; Liver ; Male ; Rats ; Rats, Sprague-Dawley ; Vitamin A ; Vitamin A Deficiency
    Chemical Substances Carbon Isotopes ; Vitamin A (11103-57-4) ; Carbon-13 (FDJ0A8596D)
    Language English
    Publishing date 2022-03-08
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 218373-0
    ISSN (online) 1541-6100
    ISSN (print) 0022-3166
    DOI 10.1093/jn/nxac051
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
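
    The mixing ratio S that this abstract turns on can be sketched arithmetically. The following is an illustrative Python sketch under a pure-dilution assumption: it omits the absorption and retention correction factors of the published RID equations, and all names and numbers are hypothetical apart from the 52-nmol dose and the S values quoted above.

```python
def mixing_ratio(serum_enrichment, liver_enrichment):
    """S, as defined above: tracer enrichment in serum / in liver stores."""
    return serum_enrichment / liver_enrichment

def estimated_liver_reserves(dose_nmol, serum_enrichment, assumed_s=0.8):
    """Pure-dilution sketch: liver enrichment ~ dose / reserves, and
    serum enrichment ~ S * liver enrichment, so
    reserves ~ dose * S / serum enrichment."""
    return dose_nmol * assumed_s / serum_enrichment

# If the true S is 1.07 (as in Study 2 without VA during mixing) but the
# equation assumes 0.8, the estimate is biased by a factor of 0.8 / 1.07:
dose = 52.0              # nmol of 13C-retinyl acetate tracer
true_reserves = 5000.0   # nmol, hypothetical
serum_e = mixing_ratio(1.07, 1.0) * dose / true_reserves
print(round(estimated_liver_reserves(dose, serum_e)))  # 3738, i.e. ~25% low
```

    Consistent with the abstract's conclusion, a mis-assumed S biases the reserve estimate in direct proportion to assumed S / true S.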

  2. Book ; Online: MuSFA

    Wang, Ju-Chiang / Smith, Jordan B. L. / Hung, Yun-Ning

    Improving Music Structural Function Analysis with Partially Labeled Data

    2022  

    Abstract Music structure analysis (MSA) systems aim to segment a song recording into non-overlapping sections with useful labels. Previous MSA systems typically predict abstract labels in a post-processing step and require the full context of the song. By contrast, we recently proposed a supervised framework, called "Music Structural Function Analysis" (MuSFA), that models and predicts meaningful labels like 'verse' and 'chorus' directly from audio, without requiring the full context of a song. However, the performance of this system depends on the amount and quality of training data. In this paper, we propose to repurpose a public dataset, HookTheory Lead Sheet Dataset (HLSD), to improve the performance. HLSD contains over 18K excerpts of music sections originally collected for studying automatic melody harmonization. We treat each excerpt as a partially labeled song and provide a label mapping, so that HLSD can be used together with other public datasets, such as SALAMI, RWC, and Isophonics. In cross-dataset evaluations, we find that including HLSD in training can improve state-of-the-art boundary detection and section labeling scores by ~3% and ~1% respectively.

    Comment: ISMIR2022, LBD paper
    Keywords Computer Science - Sound ; Electrical Engineering and Systems Science - Audio and Speech Processing
    Subject code 780
    Publishing date 2022-11-28
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
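
    The label-mapping idea in this abstract can be illustrated with a toy sketch. The raw labels and the mapping below are invented for illustration and are not the paper's actual mapping:

```python
# Hypothetical mapping from raw HLSD-style section names to a shared
# vocabulary usable alongside SALAMI, RWC, and Isophonics annotations.
LABEL_MAP = {
    "Verse": "verse", "Verse 1": "verse",
    "Chorus": "chorus", "Chorus Lead-Out": "chorus",
    "Intro": "intro", "Outro": "outro", "Bridge": "bridge",
}

def normalize(raw_label):
    # Labels outside the shared vocabulary stay unlabeled, which is how a
    # partially labeled excerpt can still contribute to training.
    return LABEL_MAP.get(raw_label, None)

print(normalize("Chorus Lead-Out"))     # chorus
print(normalize("Instrumental Break"))  # None
```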

  3. Book ; Online: To catch a chorus, verse, intro, or anything else

    Wang, Ju-Chiang / Hung, Yun-Ning / Smith, Jordan B. L.

    Analyzing a song with structural functions

    2022  

    Abstract Conventional music structure analysis algorithms aim to divide a song into segments and to group them with abstract labels (e.g., 'A', 'B', and 'C'). However, explicitly identifying the function of each segment (e.g., 'verse' or 'chorus') is rarely attempted, but has many applications. We introduce a multi-task deep learning framework to model these structural semantic labels directly from audio by estimating "verseness," "chorusness," and so forth, as a function of time. We propose a 7-class taxonomy (i.e., intro, verse, chorus, bridge, outro, instrumental, and silence) and provide rules to consolidate annotations from four disparate datasets. We also propose to use a spectral-temporal Transformer-based model, called SpecTNT, which can be trained with an additional connectionist temporal localization (CTL) loss. In cross-dataset evaluations using four public datasets, we demonstrate the effectiveness of the SpecTNT model and CTL loss, and obtain strong results overall: the proposed system outperforms state-of-the-art chorus-detection and boundary-detection methods at detecting choruses and boundaries, respectively.

    Comment: This manuscript is accepted by ICASSP 2022
    Keywords Electrical Engineering and Systems Science - Audio and Speech Processing ; Computer Science - Sound
    Subject code 006
    Publishing date 2022-05-29
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
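
    The 7-class taxonomy lends itself to a small consolidation sketch. The synonym rules below are assumptions for illustration, not the paper's published consolidation rules:

```python
TAXONOMY = {"intro", "verse", "chorus", "bridge", "outro",
            "instrumental", "silence"}

# Assumed examples of consolidation rules for labels from disparate datasets.
SYNONYMS = {"refrain": "chorus", "solo": "instrumental",
            "fade-out": "outro", "break": "instrumental"}

def consolidate(label):
    label = label.strip().lower()
    if label in TAXONOMY:
        return label
    return SYNONYMS.get(label, "instrumental")  # assumed catch-all class

print(consolidate("Refrain"))  # chorus
print(consolidate("SILENCE"))  # silence
```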

  4. Article ; Online: Anthocyanin and Lycopene Contents Do Not Affect β-Carotene Bioefficacy from Multicolored Carrots (Daucus carota L.) in Male Mongolian Gerbils.

    Kaeppler, Mikayla S / Smith, Jordan B / Davis, Christopher R / Simon, Philipp W / Tanumihardjo, Sherry A

    The Journal of Nutrition

    2022  Volume 153, Issue 1, Page(s) 76–87

    Abstract Background: Anthocyanins and carotenoids are phytochemicals that may benefit health through provitamin A carotenoid (PAC), antioxidant, and anti-inflammatory activities. These bioactives may mitigate chronic diseases. Consumption of multiple phytochemicals may impact bioactivity in synergistic or antagonistic manners.
    Objectives: Two studies in weanling male Mongolian gerbils assessed the relative bioefficacy of β-carotene equivalents (BCEs) to vitamin A (VA) with simultaneous consumption of the non-PAC lycopene or anthocyanins from multicolored carrots.
    Methods: After 3-wk VA depletion, 5-6 gerbils were killed as baseline groups. The remaining gerbils were divided into 4 carrot treatment groups; the positive control group received retinyl acetate and the negative control group was given vehicle soybean oil (n = 10/group; n = 60/study). In the lycopene study, gerbils consumed feed varying in lycopene sourced from red carrots. In the anthocyanin study, gerbils consumed feed varying in anthocyanin content sourced from purple-red carrots, and positive controls received lycopene. Treatment feeds had equalized BCEs: 5.59 ± 0.96 μg/g (lycopene study) and 7.02 ± 0.39 μg/g (anthocyanin study). Controls consumed feeds without pigments. Serum, liver, and lung samples were analyzed for retinol and carotenoid concentrations using HPLC. Data were analyzed by ANOVA and Tukey's studentized range test.
    Results: In the lycopene study, liver VA did not differ between groups (0.11 ± 0.07 μmol/g) indicating no effect of varying lycopene content. In the anthocyanin study, liver VA concentrations in the medium-to-high (0.22 ± 0.14 μmol/g) and medium-to-low anthocyanin (0.25 ± 0.07 μmol/g) groups were higher than the negative control (0.11 ± 0.07 μmol/g) (P < 0.05). All treatment groups maintained baseline VA concentrations (0.23 ± 0.06 μmol/g). Combining studies, serum retinol had 12% sensitivity to predict VA deficiency, defined as 0.7 μmol/L.
    Conclusions: These gerbil studies suggested that simultaneous consumption of carotenoids and anthocyanins does not impact relative BCE bioefficacy. Breeding carrots for enhanced pigments to improve dietary intake should continue.
    MeSH term(s) Animals ; Male ; beta Carotene ; Vitamin A ; Daucus carota/chemistry ; Anthocyanins/pharmacology ; Lycopene ; Gerbillinae ; Carotenoids
    Chemical Substances beta Carotene (01YAE03M7J) ; Vitamin A (11103-57-4) ; Anthocyanins ; Lycopene (SB0N2N0WV6) ; Carotenoids (36-88-4)
    Language English
    Publishing date 2022-12-20
    Publishing country United States
    Document type Journal Article ; Research Support, N.I.H., Extramural ; Research Support, U.S. Gov't, Non-P.H.S. ; Research Support, Non-U.S. Gov't
    ZDB-ID 218373-0
    ISSN (online) 1541-6100
    ISSN (print) 0022-3166
    DOI 10.1016/j.tjnut.2022.10.010
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
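
    The 12% sensitivity figure quoted in the Results is a standard screening statistic. Here is a generic sketch with made-up numbers (not the study's data) of how such a sensitivity is computed against the 0.7-μmol/L serum-retinol cutoff:

```python
def sensitivity(serum_retinol, truly_deficient, cutoff=0.7):
    """Fraction of truly deficient subjects flagged by the serum cutoff."""
    flagged = [s < cutoff for s in serum_retinol]
    true_pos = sum(f and d for f, d in zip(flagged, truly_deficient))
    return true_pos / sum(truly_deficient)

serum = [0.6, 0.9, 1.0, 0.8]           # umol/L, hypothetical animals
deficient = [True, True, True, False]  # judged from liver VA, hypothetical
print(sensitivity(serum, deficient))   # 0.333...: only 1 of 3 flagged
```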

  5. Book ; Online: Modeling the Rhythm from Lyrics for Melody Generation of Pop Song

    Zhang, Daiyu / Wang, Ju-Chiang / Kosta, Katerina / Smith, Jordan B. L. / Zhou, Shicen

    2023  

    Abstract Creating a pop song melody according to pre-written lyrics is a typical practice for composers. A computational model of how lyrics are set as melodies is important for automatic composition systems, but an end-to-end lyric-to-melody model would require enormous amounts of paired training data. To mitigate the data constraints, we adopt a two-stage approach, dividing the task into lyric-to-rhythm and rhythm-to-melody modules. However, the lyric-to-rhythm task is still challenging due to its multimodality. In this paper, we propose a novel lyric-to-rhythm framework that includes part-of-speech tags to achieve better text setting, and a Transformer architecture designed to model long-term syllable-to-note associations. For the rhythm-to-melody task, we adapt a proven chord-conditioned melody Transformer, which has achieved state-of-the-art results. Experiments for Chinese lyric-to-melody generation show that the proposed framework is able to model key characteristics of rhythm and pitch distributions in the dataset, and in a subjective evaluation, the melodies generated by our system were rated as similar to or better than those of a state-of-the-art alternative.

    Comment: Published in ISMIR 2022
    Keywords Electrical Engineering and Systems Science - Audio and Speech Processing ; Computer Science - Sound
    Subject code 780
    Publishing date 2023-01-03
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  6. Book ; Online: Neural Loop Combiner

    Chen, Bo-Yu / Smith, Jordan B. L. / Yang, Yi-Hsuan

    Neural Network Models for Assessing the Compatibility of Loops

    2020  

    Abstract Music producers who use loops may have access to thousands in loop libraries, but finding ones that are compatible is a time-consuming process; we hope to reduce this burden with automation. State-of-the-art systems for estimating compatibility, such as AutoMashUpper, are mostly rule-based and could be improved on with machine learning. To train a model, we need a large set of loops with ground truth compatibility values. No such dataset exists, so we extract loops from existing music to obtain positive examples of compatible loops, and propose and compare various strategies for choosing negative examples. For reproducibility, we curate data from the Free Music Archive. Using this data, we investigate two types of model architectures for estimating the compatibility of loops: one based on a Siamese network, and the other a pure convolutional neural network (CNN). We conducted a user study in which participants rated the quality of the combinations suggested by each model, and found the CNN to outperform the Siamese network. Both model-based approaches outperformed the rule-based one. We have open-sourced the code for building the models and the dataset.

    Comment: Accepted to the 21st International Society for Music Information Retrieval Conference (ISMIR 2020)
    Keywords Computer Science - Sound ; Computer Science - Information Retrieval ; Computer Science - Machine Learning ; Electrical Engineering and Systems Science - Audio and Speech Processing
    Subject code 006
    Publishing date 2020-08-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
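
    The positive/negative construction described in this abstract can be sketched as follows. The random cross-song strategy shown is just one plausible negative-sampling strategy of the several the paper compares, and all names are illustrative:

```python
import random

def make_pairs(loops_by_song, n_negatives_per_song=2, seed=0):
    """Positives: loops extracted from the same song (they co-occurred).
    Negatives (one assumed strategy): random loops from other songs."""
    rng = random.Random(seed)
    songs = list(loops_by_song)
    pairs = []
    for song, loops in loops_by_song.items():
        for a, b in zip(loops, loops[1:]):
            pairs.append((a, b, 1))
        for _ in range(n_negatives_per_song):
            other = rng.choice([s for s in songs if s != song])
            pairs.append((rng.choice(loops), rng.choice(loops_by_song[other]), 0))
    return pairs

data = {"songA": ["A_drums", "A_bass"], "songB": ["B_vox", "B_keys"]}
pairs = make_pairs(data)
print(sum(label for *_, label in pairs), len(pairs))  # 2 positives, 6 pairs
```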

  7. Book ; Online: Supervised Metric Learning for Music Structure Features

    Wang, Ju-Chiang / Smith, Jordan B. L. / Lu, Wei-Tsung / Song, Xuchen

    2021  

    Abstract Music structure analysis (MSA) methods traditionally search for musically meaningful patterns in audio: homogeneity, repetition, novelty, and segment-length regularity. Hand-crafted audio features such as MFCCs or chromagrams are often used to elicit these patterns. However, with more annotations of section labels (e.g., verse, chorus, and bridge) becoming available, one can use supervised feature learning to make these patterns even clearer and improve MSA performance. To this end, we take a supervised metric learning approach: we train a deep neural network to output embeddings that are near each other for two spectrogram inputs if both have the same section type (according to an annotation), and otherwise far apart. We propose a batch sampling scheme to ensure the labels in a training pair are interpreted meaningfully. The trained model extracts features that can be used in existing MSA algorithms. In evaluations with three datasets (HarmonixSet, SALAMI, and RWC), we demonstrate that using the proposed features can improve a traditional MSA algorithm significantly in both intra- and cross-dataset scenarios.

    Comment: This paper was accepted and presented at ISMIR 2021
    Keywords Electrical Engineering and Systems Science - Audio and Speech Processing ; Computer Science - Sound
    Subject code 006
    Publishing date 2021-10-17
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
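
    The embedding objective described in this abstract is a metric-learning loss. A minimal contrastive-loss sketch (the paper's exact loss, batch sampling, and margin may differ) looks like:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_label, margin=1.0):
    """Pull embeddings of same-section spectrograms together; push
    different-section pairs at least `margin` apart."""
    d = np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b))
    if same_label:
        return float(d ** 2)
    return float(max(0.0, margin - d) ** 2)

chorus_1, chorus_2, verse = [0.0, 0.1], [0.0, 0.2], [0.0, 0.5]
print(contrastive_loss(chorus_1, chorus_2, same_label=True))  # small: ~0.01
print(contrastive_loss(chorus_1, verse, same_label=False))    # penalized: ~0.36
```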

  8. Book ; Online: Modeling the Compatibility of Stem Tracks to Generate Music Mashups

    Huang, Jiawen / Wang, Ju-Chiang / Smith, Jordan B. L. / Song, Xuchen / Wang, Yuxuan

    2021  

    Abstract A music mashup combines audio elements from two or more songs to create a new work. To reduce the time and effort required to make them, researchers have developed algorithms that predict the compatibility of audio elements. Prior work has focused on mixing unaltered excerpts, but advances in source separation enable the creation of mashups from isolated stems (e.g., vocals, drums, bass, etc.). In this work, we take advantage of separated stems not just for creating mashups, but for training a model that predicts the mutual compatibility of groups of excerpts, using self-supervised and semi-supervised methods. Specifically, we first produce a random mashup creation pipeline that combines stem tracks obtained via source separation, with key and tempo automatically adjusted to match, since these are prerequisites for high-quality mashups. To train a model to predict compatibility, we use stem tracks obtained from the same song as positive examples, and random combinations of stems with key and/or tempo unadjusted as negative examples. To improve the model and use more data, we also train on "average" examples: random combinations with matching key and tempo, where we treat them as unlabeled data as their true compatibility is unknown. To determine whether the combined signal or the set of stem signals is more indicative of the quality of the result, we experiment with two model architectures and train them using a semi-supervised learning technique. Finally, we conduct objective and subjective evaluations of the system, comparing it to a standard rule-based system.

    Comment: This is a preprint of the paper accepted by AAAI-21. Please cite the version included in the Proceedings of the 35th AAAI Conference on Artificial Intelligence
    Keywords Computer Science - Sound ; Computer Science - Artificial Intelligence ; Electrical Engineering and Systems Science - Audio and Speech Processing
    Subject code 780
    Publishing date 2021-03-25
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
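
    The positive/negative/unlabeled scheme described in this abstract can be sketched directly. The Stem fields and the exact matching test are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Stem:
    song_id: int
    key: int      # pitch class 0-11 after automatic adjustment
    tempo: float  # BPM after automatic adjustment

def label_combination(stems):
    same_song = len({s.song_id for s in stems}) == 1
    matched = (len({s.key for s in stems}) == 1 and
               len({s.tempo for s in stems}) == 1)
    if same_song:
        return "positive"   # stems that truly belong together
    if matched:
        return "unlabeled"  # plausible "average" mashup, compatibility unknown
    return "negative"       # clashing key and/or tempo

vocal = Stem(1, 0, 120.0)
print(label_combination([vocal, Stem(1, 0, 120.0)]))  # positive
print(label_combination([vocal, Stem(2, 5, 95.0)]))   # negative
print(label_combination([vocal, Stem(3, 0, 120.0)]))  # unlabeled
```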

  9. Book ; Online: Supervised Chorus Detection for Popular Music Using Convolutional Neural Network and Multi-task Learning

    Wang, Ju-Chiang / Smith, Jordan B. L. / Chen, Jitong / Song, Xuchen / Wang, Yuxuan

    2021  

    Abstract This paper presents a novel supervised approach to detecting the chorus segments in popular music. Traditional approaches to this task are mostly unsupervised, with pipelines designed to target some quality that is assumed to define "chorusness," which usually means seeking the loudest or most frequently repeated sections. We propose to use a convolutional neural network with a multi-task learning objective, which simultaneously fits two temporal activation curves: one indicating "chorusness" as a function of time, and the other the location of the boundaries. We also propose a post-processing method that jointly takes into account the chorus and boundary predictions to produce binary output. In experiments using three datasets, we compare our system to a set of public implementations of other segmentation and chorus-detection algorithms, and find our approach performs significantly better.

    Comment: This version is a preprint of a paper accepted at ICASSP 2021. Please cite the publication in the Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing
    Keywords Electrical Engineering and Systems Science - Audio and Speech Processing ; Computer Science - Artificial Intelligence ; Computer Science - Sound
    Subject code 006
    Publishing date 2021-03-26
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
