LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 389

  1. Article: Roy Schafer: a beginning.

    Schwartz, Henry P

    The Psychoanalytic quarterly

    2013  Volume 82, Issue 1, Page(s) 1–7

    Abstract The author provides a biographical overview of Schafer's life, culled from his published work and focused primarily on his professional development. This biography is used to demonstrate some of Schafer's central theoretical insights on narrativity and language, and reveals the consistency of his thinking over his long career. A brief discussion of his writing on King Lear provides a bridge between theoretical and biographical material.
    MeSH term(s) History, 20th Century ; Humans ; Philosophy/history ; Psychoanalytic Interpretation ; Psychoanalytic Theory ; Psychology/history ; United States
    Language English
    Publishing date 2013-01
    Publishing country United States
    Document type Biography ; Historical Article
    ZDB-ID 207522-2
    ISSN (online) 2167-4086
    ISSN 0033-2828
    DOI 10.1002/j.2167-4086.2013.00001.x
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Book ; Online: Labour in a Single Shot : Critical Perspectives on Antje Ehmann and Harun Farocki's Global Video Project

    Grundmann, Roy / Schwartz, Peter / Williams, Gregory

    2022  

    Keywords Electronic, holographic & video art ; Film theory & criticism ; Documentary films ; Media studies ; Video, documentary, labour, global, art
    Size 1 electronic resource (428 pages)
    Publisher Amsterdam University Press
    Publishing place Amsterdam
    Document type Book ; Online
    Note English ; Open Access
    HBZ-ID HT021231649
    ISBN 9789463722421 ; 9463722424
    Database ZB MED Catalogue: Medicine, Health, Nutrition, Environment, Agriculture

  3. Article ; Online: Practical Guide to Experimental and Quasi-Experimental Research in Surgical Education.

    Phitayakorn, Roy / Schwartz, Todd A / Doherty, Gerard M

    JAMA surgery

    2024  

    Language English
    Publishing date 2024-01-03
    Publishing country United States
    Document type Journal Article
    ZDB-ID 2701841-6
    ISSN (online) 2168-6262
    ISSN 2168-6254
    DOI 10.1001/jamasurg.2023.6693
    Database MEDical Literature Analysis and Retrieval System OnLINE

  4. Article: Patient-Reported Outcome Measures of Psychosocial Quality of Life in Oropharyngeal Cancer Patients: A Scoping Review.

    Silver, Jennifer A / Schwartz, Russell / Roy, Catherine F / Sadeghi, Nader / Henry, Melissa

    Journal of clinical medicine

    2023  Volume 12, Issue 6

    Abstract Background: …
    Language English
    Publishing date 2023-03-08
    Publishing country Switzerland
    Document type Journal Article ; Review
    ZDB-ID 2662592-1
    ISSN 2077-0383
    DOI 10.3390/jcm12062122
    Database MEDical Literature Analysis and Retrieval System OnLINE

  5. Book ; Online: Fighting Bias with Bias

    Reif, Yuval / Schwartz, Roy

    Promoting Model Robustness by Amplifying Dataset Biases

    2023  

    Abstract NLP models often rely on superficial cues known as dataset biases to achieve impressive performance, and can fail on examples where these biases do not hold. Recent work sought to develop robust, unbiased models by filtering biased examples from training sets. In this work, we argue that such filtering can obscure the true capabilities of models to overcome biases, which might never be removed in full from the dataset. We suggest that in order to drive the development of models robust to subtle biases, dataset biases should be amplified in the training set. We introduce an evaluation framework defined by a bias-amplified training set and an anti-biased test set, both automatically extracted from existing datasets. Experiments across three notions of bias, four datasets and two models show that our framework is substantially more challenging for models than the original data splits, and even more challenging than hand-crafted challenge sets. Our evaluation framework can use any existing dataset, even those considered obsolete, to test model robustness. We hope our work will guide the development of robust models that do not rely on superficial biases and correlations. To this end, we publicly release our code and data.

    Comment: Findings of ACL 2023
    Keywords Computer Science - Computation and Language
    Subject code 006
    Publishing date 2023-05-30
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
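
    As a reading aid for this record's abstract (a minimal sketch of my own, not the authors' released code; `biased_predict` is a hypothetical shallow heuristic), the bias-amplified/anti-biased split can be pictured as routing examples by whether the biased cue agrees with the gold label:

    ```python
    # Sketch of a bias-amplified evaluation split: examples where a shallow
    # biased predictor matches the gold label go to the amplified training
    # set; examples where it is misled form the anti-biased test set.
    def split_by_bias(examples, biased_predict):
        """examples: list of (input, gold_label) pairs."""
        train_amplified, test_anti = [], []
        for x, gold in examples:
            if biased_predict(x) == gold:
                train_amplified.append((x, gold))  # bias holds: amplify it
            else:
                test_anti.append((x, gold))        # bias misleads: test on it
        return train_amplified, test_anti
    ```

    A model trained only on the amplified split and evaluated on the anti-biased split cannot score well by leaning on the shallow cue alone, which is the challenge the abstract describes.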

  6. Book ; Online: Transformers are Multi-State RNNs

    Oren, Matanel / Hassid, Michael / Adi, Yossi / Schwartz, Roy

    2024  

    Abstract Transformers are considered conceptually different compared to the previous generation of state-of-the-art NLP models - recurrent neural networks (RNNs). In this work, we demonstrate that decoder-only transformers can in fact be conceptualized as infinite multi-state RNNs - an RNN variant with unlimited hidden state size. We further show that pretrained transformers can be converted into $\textit{finite}$ multi-state RNNs by fixing the size of their hidden state. We observe that several existing transformers cache compression techniques can be framed as such conversion policies, and introduce a novel policy, TOVA, which is simpler compared to these policies. Our experiments with several long range tasks indicate that TOVA outperforms all other baseline policies, while being nearly on par with the full (infinite) model, and using in some cases only $\frac{1}{8}$ of the original cache size. Our results indicate that transformer decoder LLMs often behave in practice as RNNs. They also lay out the option of mitigating one of their most painful computational bottlenecks - the size of their cache memory. We publicly release our code at https://github.com/schwartz-lab-NLP/TOVA.

    Comment: preprint
    Keywords Computer Science - Computation and Language
    Subject code 006
    Publishing date 2024-01-11
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
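
    The eviction idea in this record's abstract can be sketched roughly as follows (my own toy illustration, not the released `schwartz-lab-NLP/TOVA` implementation; the list-based cache and scalar attention scores are simplifying assumptions):

    ```python
    # Toy sketch of a TOVA-style policy: treat the transformer KV cache as a
    # finite multi-state RNN by capping its size, and on each step evict the
    # cached token with the lowest attention score.
    def tova_evict(keys, values, attn_scores, max_size):
        """Drop the least-attended cached state once the cache exceeds max_size."""
        if len(keys) <= max_size:
            return keys, values
        drop = attn_scores.index(min(attn_scores))  # least-attended token
        keep = [i for i in range(len(keys)) if i != drop]
        return [keys[i] for i in keep], [values[i] for i in keep]
    ```

    Calling this once per generation step keeps the cache at a fixed size, which is how the abstract frames pretrained transformers as finite multi-state RNNs.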

  7. Book ; Online: Data Contamination

    Magar, Inbal / Schwartz, Roy

    From Memorization to Exploitation

    2022  

    Abstract Pretrained language models are typically trained on massive web-based datasets, which are often "contaminated" with downstream test sets. It is not clear to what extent models exploit the contaminated data for downstream tasks. We present a principled method to study this question. We pretrain BERT models on joint corpora of Wikipedia and labeled downstream datasets, and fine-tune them on the relevant task. Comparing performance between samples seen and unseen during pretraining enables us to define and quantify levels of memorization and exploitation. Experiments with two models and three downstream tasks show that exploitation exists in some cases, but in others the models memorize the contaminated data, but do not exploit it. We show that these two measures are affected by different factors such as the number of duplications of the contaminated data and the model size. Our results highlight the importance of analyzing massive web-scale datasets to verify that progress in NLP is obtained by better language understanding and not better data exploitation.

    Comment: Accepted to ACL 2022
    Keywords Computer Science - Computation and Language ; Computer Science - Machine Learning
    Publishing date 2022-03-15
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
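
    The seen-vs-unseen comparison described in this record's abstract reduces to a simple performance gap (a hypothetical sketch of the measurement idea, not the paper's code; the per-example `scores` dict is an assumption):

    ```python
    # Sketch of quantifying contamination effects: compare average task
    # performance on examples seen during pretraining vs. unseen examples.
    # A positive gap suggests the model exploits the contaminated data.
    def contamination_gap(scores, seen_ids):
        """scores: {example_id: score}; seen_ids: ids present in pretraining."""
        seen = [s for i, s in scores.items() if i in seen_ids]
        unseen = [s for i, s in scores.items() if i not in seen_ids]
        return sum(seen) / len(seen) - sum(unseen) / len(unseen)
    ```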

  8. Book ; Online: On the Limitations of Dataset Balancing

    Schwartz, Roy / Stanovsky, Gabriel

    The Lost Battle Against Spurious Correlations

    2022  

    Abstract Recent work has shown that deep learning models in NLP are highly sensitive to low-level correlations between simple features and specific output labels, leading to overfitting and lack of generalization. To mitigate this problem, a common practice is to balance datasets by adding new instances or by filtering out "easy" instances (Sakaguchi et al., 2020), culminating in a recent proposal to eliminate single-word correlations altogether (Gardner et al., 2021). In this opinion paper, we identify that despite these efforts, increasingly-powerful models keep exploiting ever-smaller spurious correlations, and as a result even balancing all single-word features is insufficient for mitigating all of these correlations. In parallel, a truly balanced dataset may be bound to "throw the baby out with the bathwater" and miss important signal encoding common sense and world knowledge. We highlight several alternatives to dataset balancing, focusing on enhancing datasets with richer contexts, allowing models to abstain and interact with users, and turning from large-scale fine-tuning to zero- or few-shot setups.

    Comment: Findings of NAACL 2022
    Keywords Computer Science - Computation and Language
    Subject code 006
    Publishing date 2022-04-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  9. Book ; Online: Almost Logarithmic Approximation for Cutwidth and Pathwidth

    Bansal, Nikhil / Katzelnick, Dor / Schwartz, Roy

    2023  

    Abstract We study several graph layout problems with a min-max objective. Here, given a graph we wish to find a linear ordering of the vertices that minimizes some worst-case objective over the natural cuts in the ordering, which separate an initial segment of the vertices from the rest. A prototypical problem here is cutwidth, where we want to minimize the maximum number of edges crossing a cut. The only known algorithm here is by [Leighton-Rao J.ACM 99] based on recursively partitioning the graph using balanced cuts. This achieves an $O(\log^{3/2}{n})$ approximation using the $ O(\log^{1/2}{n})$ approximation of [Arora-Rao-Vazirani J.ACM 09] for balanced cuts. We depart from the above approach and give an improved $ O(\log^{1+o(1)}{n})$ approximation for cutwidth. Our approach also gives a similarly improved $ O(\log^{1+o(1)}{n})$ approximation for finding the pathwidth of a graph. Previously, the best known approximation for pathwidth was $O(\log^{3/2}{n})$.
    Keywords Computer Science - Data Structures and Algorithms
    Subject code 005 ; 511
    Publishing date 2023-11-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
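
    As a quick reading aid (my own arithmetic summarizing the abstract, not part of the record), the previous $O(\log^{3/2}{n})$ bound composes the two factors the abstract names:

    ```latex
    \underbrace{O(\log n)}_{\text{levels of recursive partitioning}}
    \cdot
    \underbrace{O(\log^{1/2} n)}_{\text{ARV balanced-cut approximation}}
    = O(\log^{3/2} n),
    ```

    which is the product the new $O(\log^{1+o(1)}{n})$ result improves upon.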

  10. Article ; Online: Letters.

    Brady, Jason / Fonner, Andrea / Creech, Joseph / Ganzberg, Steven / Phero, James C / Reed, Kenneth L / Rosenberg, Morton / Schwartz, Paul J / Stevens, Roy L

    Journal of the American Dental Association (1939)

    2023  Volume 154, Issue 8, Page(s) 694–695

    Language English
    Publishing date 2023-06-21
    Publishing country England
    Document type Letter ; Comment
    ZDB-ID 220622-5
    ISSN (online) 1943-4723
    ISSN 0002-8177 ; 1048-6364
    DOI 10.1016/j.adaj.2023.05.012
    Database MEDical Literature Analysis and Retrieval System OnLINE
