LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-10 of 12

  1. Article: Credibility of preprints: an interdisciplinary survey of researchers.

    Soderberg, Courtney K / Errington, Timothy M / Nosek, Brian A

    Royal Society open science

    2020  Volume 7, Issue 10, Page(s) 201520

    Abstract Preprints increase accessibility and can speed scholarly communication if researchers view them as credible enough to read and use. Preprint services do not provide the heuristic cues of a journal's reputation, selection, and peer-review processes that, regardless of their flaws, are often used as a guide for deciding what to read. We conducted a survey of 3759 researchers across a wide range of disciplines to determine the importance of different cues for assessing the credibility of individual preprints and preprint services. We found that cues related to information about open science content and independent verification of author claims were rated as highly important for judging preprint credibility, and peer views and author information were rated as less important. As of early 2020, very few preprint services display any of the most important cues. By adding such cues, services may be able to help researchers better assess the credibility of preprints, enabling scholars to more confidently use preprints, thereby accelerating scientific communication and discovery.
    Keywords covid19
    Language English
    Publishing date 2020-10-28
    Publishing country England
    Document type Journal Article
    ZDB-ID 2787755-3
    ISSN 2054-5703
    DOI 10.1098/rsos.201520
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

  2. Article ; Online: Power to Detect What? Considerations for Planning and Evaluating Sample Size.

    Giner-Sorolla, Roger / Montoya, Amanda K / Reifman, Alan / Carpenter, Tom / Lewis, Neil A / Aberson, Christopher L / Bostyn, Dries H / Conrique, Beverly G / Ng, Brandon W / Schoemann, Alexander M / Soderberg, Courtney

    Personality and social psychology review : an official journal of the Society for Personality and Social Psychology, Inc

    2024, Page(s) 10888683241228328

    Abstract Academic abstract: In the wake of the replication crisis, social and personality psychologists have increased attention to power analysis and the adequacy of sample sizes. In this article, we analyze current controversies in this area, including choosing effect sizes, why and whether power analyses should be conducted on already-collected data, how to mitigate the negative effects of sample size criteria on specific kinds of research, and which power criterion to use. For novel research questions, we advocate that researchers base sample sizes on effects that are likely to be cost-effective for other people to implement (in applied settings) or to study (in basic research settings), given the limitations of interest-based minimums or field-wide effect sizes. We discuss two alternatives to power analysis, precision analysis and sequential analysis, and end with recommendations for improving the practices of researchers, reviewers, and journal editors in social-personality psychology.
    Public abstract: Recently, social-personality psychology has been criticized for basing some of its conclusions on studies with low numbers of participants. As a result, power analysis, a mathematical way to ensure that a study has enough participants to reliably "detect" a given size of psychological effect, has become popular. This article describes power analysis and discusses some controversies about it, including how researchers should derive assumptions about effect size, and how the requirements of power analysis can be applied without harming research on hard-to-reach and marginalized communities. For novel research questions, we advocate that researchers base sample sizes on effects that are likely to be cost-effective for other people to implement (in applied settings) or to study (in basic research settings). We discuss two alternatives to power analysis, precision analysis and sequential analysis, and end with recommendations for improving the practices of researchers, reviewers, and journal editors in social-personality psychology.
    Language English
    Publishing date 2024-02-12
    Publishing country United States
    Document type Journal Article
    ZDB-ID 2022092-3
    ISSN (online) 1532-7957
    ISSN 1088-8683
    DOI 10.1177/10888683241228328
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
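
    Aside: the sample-size planning this abstract discusses can be sketched in a few lines. A minimal illustration (not from the paper; the effect size, alpha, and power values are arbitrary assumptions) uses statsmodels to solve for the per-group n needed to detect a given standardized mean difference:

      # Hypothetical power analysis: n per group needed to detect
      # Cohen's d = 0.3 at alpha = .05 with 80% power in a two-sample
      # t-test (all input values are illustrative assumptions).
      from statsmodels.stats.power import TTestIndPower

      n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
      print(f"n per group: {n_per_group:.0f}")  # ~175 for these inputs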

  3. Article ; Online: Credibility of preprints: an interdisciplinary survey of researchers

    Courtney K. Soderberg / Timothy M. Errington / Brian A. Nosek

    Royal Society Open Science, Vol 7, Iss 10

    2020

    Abstract Preprints increase accessibility and can speed scholarly communication if researchers view them as credible enough to read and use. Preprint services do not provide the heuristic cues of a journal's reputation, selection, and peer-review processes that, regardless of their flaws, are often used as a guide for deciding what to read. We conducted a survey of 3759 researchers across a wide range of disciplines to determine the importance of different cues for assessing the credibility of individual preprints and preprint services. We found that cues related to information about open science content and independent verification of author claims were rated as highly important for judging preprint credibility, and peer views and author information were rated as less important. As of early 2020, very few preprint services display any of the most important cues. By adding such cues, services may be able to help researchers better assess the credibility of preprints, enabling scholars to more confidently use preprints, thereby accelerating scientific communication and discovery.
    Keywords preprints ; credibility ; trust ; Science ; Q
    Subject code 306
    Language English
    Publishing date 2020-10-01T00:00:00Z
    Publisher The Royal Society
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  4. Article ; Online: Investigating the replicability of preclinical cancer biology.

    Errington, Timothy M / Mathur, Maya / Soderberg, Courtney K / Denis, Alexandria / Perfito, Nicole / Iorns, Elizabeth / Nosek, Brian A

    eLife

    2021  Volume 10

    Abstract Replicability is an important feature of scientific research, but aspects of contemporary research culture, such as an emphasis on novelty, can make replicability seem less important than it should be. The Reproducibility Project: Cancer Biology was set up to provide evidence about the replicability of preclinical research in cancer biology by repeating selected experiments from high-impact papers. A total of 50 experiments from 23 papers were repeated, generating data about the replicability of a total of 158 effects. Most of the original effects were positive effects (136), with the rest being null effects (22). A majority of the original effect sizes were reported as numerical values (117), with the rest being reported as representative images (41). We employed seven methods to assess replicability, and some of these methods were not suitable for all the effects in our sample. One method compared effect sizes: for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original. The other methods were binary - the replication was either a success or a failure - and five of these methods could be used to assess both positive and null effects when effect sizes were reported as numerical values. For positive effects, 40% of replications (39/97) succeeded according to three or more of these five methods, and for null effects 80% of replications (12/15) were successful on this basis; combining positive and null effects, the success rate was 46% (51/112). A successful replication does not definitively confirm an original finding or its theoretical interpretation. Equally, a failure to replicate does not disconfirm a finding, but it does suggest that additional investigation is needed to establish its reliability.
    MeSH term(s) Animals ; Biomedical Research/methods ; Humans ; Neoplasms ; Reproducibility of Results ; Research Design/standards
    Language English
    Publishing date 2021-12-07
    Publishing country England
    Document type Journal Article ; Meta-Analysis ; Research Support, Non-U.S. Gov't
    ZDB-ID 2687154-3
    ISSN (online) 2050-084X
    DOI 10.7554/eLife.71601
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
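
    Aside: the combined success rate quoted in this abstract follows directly from the per-category counts. A quick arithmetic check (all counts taken from the abstract itself):

      # 39/97 positive and 12/15 null replications succeeded on three or
      # more of the five binary criteria; pooling the counts gives 46%.
      positive_ok, positive_n = 39, 97
      null_ok, null_n = 12, 15
      print(f"{positive_ok / positive_n:.0%}")                         # 40%
      print(f"{null_ok / null_n:.0%}")                                 # 80%
      print(f"{(positive_ok + null_ok) / (positive_n + null_n):.0%}")  # 46%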

  5. Article ; Online: Initial evidence of research quality of registered reports compared with the standard publishing model.

    Soderberg, Courtney K / Errington, Timothy M / Schiavone, Sarah R / Bottesini, Julia / Thorn, Felix Singleton / Vazire, Simine / Esterling, Kevin M / Nosek, Brian A

    Nature human behaviour

    2021  Volume 5, Issue 8, Page(s) 990–997

    Abstract In registered reports (RRs), initial peer review and in-principle acceptance occur before knowing the research outcomes. This combats publication bias and distinguishes planned from unplanned research. How RRs could improve the credibility of research findings is straightforward, but there is little empirical evidence. Also, there could be unintended costs such as reducing novelty. Here, 353 researchers peer reviewed a pair of papers from 29 published RRs from psychology and neuroscience and 57 non-RR comparison papers. RRs numerically outperformed comparison papers on all 19 criteria (mean difference 0.46, scale range -4 to +4) with effects ranging from RRs being statistically indistinguishable from comparison papers in novelty (0.13, 95% credible interval [-0.24, 0.49]) and creativity (0.22, [-0.14, 0.58]) to sizeable improvements in rigour of methodology (0.99, [0.62, 1.35]) and analysis (0.97, [0.60, 1.34]) and overall paper quality (0.66, [0.30, 1.02]). RRs could improve research quality while reducing publication bias and ultimately improve the credibility of the published literature.
    MeSH term(s) Data Analysis ; Humans ; Neurosciences ; Peer Review, Research ; Psychology ; Registries ; Research/standards ; Research Design/standards ; Research Report/standards
    Language English
    Publishing date 2021-06-24
    Publishing country England
    Document type Journal Article ; Observational Study ; Research Support, Non-U.S. Gov't ; Research Support, U.S. Gov't, Non-P.H.S.
    ISSN (online) 2397-3374
    DOI 10.1038/s41562-021-01142-4
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
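
    Aside: the bracketed ranges in this abstract are 95% credible intervals. Under a normal approximation, such an interval spans the posterior mean plus or minus 1.96 posterior standard deviations, so the implied posterior SD can be recovered from the reported bounds (a sketch using the rigour-of-methodology estimate, 0.99 [0.62, 1.35]):

      # Back out the posterior SD implied by a reported 95% interval,
      # assuming an approximately normal posterior.
      mean, lower, upper = 0.99, 0.62, 1.35
      sd = (upper - lower) / (2 * 1.96)  # ~0.186
      print(f"[{mean - 1.96 * sd:.2f}, {mean + 1.96 * sd:.2f}]")  # ~[0.62, 1.35]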

  6. Article ; Online: Investigating the replicability of preclinical cancer biology

    Timothy M Errington / Maya Mathur / Courtney K Soderberg / Alexandria Denis / Nicole Perfito / Elizabeth Iorns / Brian A Nosek

    eLife, Vol 10

    2021

    Abstract Replicability is an important feature of scientific research, but aspects of contemporary research culture, such as an emphasis on novelty, can make replicability seem less important than it should be. The Reproducibility Project: Cancer Biology was set up to provide evidence about the replicability of preclinical research in cancer biology by repeating selected experiments from high-impact papers. A total of 50 experiments from 23 papers were repeated, generating data about the replicability of a total of 158 effects. Most of the original effects were positive effects (136), with the rest being null effects (22). A majority of the original effect sizes were reported as numerical values (117), with the rest being reported as representative images (41). We employed seven methods to assess replicability, and some of these methods were not suitable for all the effects in our sample. One method compared effect sizes: for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original. The other methods were binary – the replication was either a success or a failure – and five of these methods could be used to assess both positive and null effects when effect sizes were reported as numerical values. For positive effects, 40% of replications (39/97) succeeded according to three or more of these five methods, and for null effects 80% of replications (12/15) were successful on this basis; combining positive and null effects, the success rate was 46% (51/112). A successful replication does not definitively confirm an original finding or its theoretical interpretation. Equally, a failure to replicate does not disconfirm a finding, but it does suggest that additional investigation is needed to establish its reliability.
    Keywords Reproducibility Project: Cancer Biology ; replication ; reproducibility ; meta-analysis ; transparency ; reproducibility in cancer biology ; Medicine ; R ; Science ; Q ; Biology (General) ; QH301-705.5
    Subject code 150
    Language English
    Publishing date 2021-12-01T00:00:00Z
    Publisher eLife Sciences Publications Ltd
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  7. Article ; Online: A randomized trial of a lab-embedded discourse intervention to improve research ethics.

    Plemmons, Dena K / Baranski, Erica N / Harp, Kyle / Lo, David D / Soderberg, Courtney K / Errington, Timothy M / Nosek, Brian A / Esterling, Kevin M

    Proceedings of the National Academy of Sciences of the United States of America

    2020  Volume 117, Issue 3, Page(s) 1389–1394

    Abstract We report a randomized trial of a research ethics training intervention designed to enhance ethics communication in university science and engineering laboratories, focusing specifically on authorship and data management. The intervention is a project-based research ethics curriculum that was designed to enhance the ability of science and engineering research laboratory members to engage in reason giving and interpersonal communication necessary for ethical practice. The randomized trial was fielded in active faculty-led laboratories at two US research-intensive institutions. Here, we show that laboratory members perceived improvements in the quality of discourse on research ethics within their laboratories and enhanced awareness of the relevance and reasons for that discourse for their work as measured by a survey administered over 4 mo after the intervention. This training represents a paradigm shift compared with more typical module-based or classroom ethics instruction that is divorced from the everyday workflow and practices within laboratories and is designed to cultivate a campus culture of ethical science and engineering research in the very work settings where laboratory members interact.
    Language English
    Publishing date 2020-01-09
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't ; Research Support, U.S. Gov't, Non-P.H.S.
    ZDB-ID 209104-5
    ISSN (online) 1091-6490
    ISSN 0027-8424
    DOI 10.1073/pnas.1917848117
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

  8. Article ; Online: Ensuring the quality and specificity of preregistrations.

    Bakker, Marjan / Veldkamp, Coosje L S / van Assen, Marcel A L M / Crompvoets, Elise A V / Ong, How Hwee / Nosek, Brian A / Soderberg, Courtney K / Mellor, David / Wicherts, Jelte M

    PLoS biology

    2020  Volume 18, Issue 12, Page(s) e3000937

    Abstract Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of "researcher degrees of freedom" aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called "OSF Preregistration," http://osf.io/prereg/). The Prereg Challenge format was a "structured" workflow with detailed instructions and an independent review to confirm completeness; the "Standard" format was "unstructured" with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the "structured" format restricted the opportunistic use of researcher degrees of freedom better (Cliff's Delta = 0.49) than the "unstructured" format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
    MeSH term(s) Data Collection/methods ; Data Collection/standards ; Data Collection/trends ; Humans ; Quality Control ; Registries/statistics & numerical data ; Research Design/statistics & numerical data ; Research Design/trends
    Language English
    Publishing date 2020-12-09
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 2126776-5
    ISSN (online) 1545-7885
    ISSN 1544-9173
    DOI 10.1371/journal.pbio.3000937
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
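
    Aside: Cliff's Delta, the effect size reported in this abstract, is an ordinal statistic: the probability that a value drawn from one group exceeds one drawn from the other, minus the reverse probability. A minimal self-contained sketch (the sample scores are invented for illustration, not data from the study):

      # Cliff's delta over all cross-group pairs: P(x > y) - P(x < y).
      # Ranges from -1 to +1; 0 means no ordinal difference between groups.
      def cliffs_delta(xs, ys):
          greater = sum(x > y for x in xs for y in ys)
          lesser = sum(x < y for x in xs for y in ys)
          return (greater - lesser) / (len(xs) * len(ys))

      structured = [4, 5, 5, 3, 4]    # hypothetical scores, structured format
      unstructured = [2, 3, 4, 2, 3]  # hypothetical scores, unstructured format
      print(cliffs_delta(structured, unstructured))  # 0.76 for this toy data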

  9. Article ; Online: Ensuring the quality and specificity of preregistrations.

    Marjan Bakker / Coosje L S Veldkamp / Marcel A L M van Assen / Elise A V Crompvoets / How Hwee Ong / Brian A Nosek / Courtney K Soderberg / David Mellor / Jelte M Wicherts

    PLoS Biology, Vol 18, Iss 12, p e3000937

    2020

    Abstract Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of "researcher degrees of freedom" aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called "OSF Preregistration," http://osf.io/prereg/). The Prereg Challenge format was a "structured" workflow with detailed instructions and an independent review to confirm completeness; the "Standard" format was "unstructured" with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the "structured" format restricted the opportunistic use of researcher degrees of freedom better (Cliff's Delta = 0.49) than the "unstructured" format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
    Keywords Biology (General) ; QH301-705.5
    Subject code 306
    Language English
    Publishing date 2020-12-01T00:00:00Z
    Publisher Public Library of Science (PLoS)
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  10. Article ; Online: The effects of psychological distance on abstraction: Two meta-analyses.

    Soderberg, Courtney K / Callahan, Shannon P / Kochersberger, Annie O / Amit, Elinor / Ledgerwood, Alison

    Psychological bulletin

    2015  Volume 141, Issue 3, Page(s) 525–548

    Abstract Psychological distance and abstraction both represent key variables of considerable interest to researchers across cognitive, social, and developmental psychology. Moreover, largely inspired by construal level theory, numerous experiments across multiple fields have now connected these 2 constructs, examining how psychological distance affects the level of abstraction at which people mentally represent the world around them. The time is clearly ripe for a quantitative synthesis to shed light on the relation between these constructs and investigate potential moderators. To this end, we conducted 2 meta-analyses of research examining the effects of psychological distance on abstraction and its downstream consequences. Across 106 papers containing a total of 267 experiments, our results showed a reliable and medium-sized effect of psychological distance on both level of abstraction in mental representation and the downstream consequences of abstraction. Importantly, these effects replicate across time, researchers, and settings. Our analyses also identified several key moderators, including the size of the difference in distance between 2 levels of a temporal distance manipulation and the dependent variable's capacity to tap processing of both abstract and concrete features (rather than only one or the other). We discuss theoretical and methodological implications, and highlight promising avenues for future research.
    MeSH term(s) Concept Formation ; Distance Perception ; Humans ; Psychological Theory ; Social Distance ; Time Perception
    Language English
    Publishing date 2015-05
    Publishing country United States
    Document type Journal Article ; Meta-Analysis ; Review
    ZDB-ID 1321-3
    ISSN (online) 1939-1455
    ISSN 0033-2909
    DOI 10.1037/bul0000005
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
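
    Aside: the pooling step at the heart of a meta-analysis like this one can be illustrated with fixed-effect inverse-variance weighting (a generic sketch, not the authors' analysis; all numbers are invented):

      # Fixed-effect inverse-variance pooling: weight each study's effect
      # size by the reciprocal of its sampling variance.
      effects = [0.45, 0.30, 0.60, 0.38]    # hypothetical per-study effects
      variances = [0.02, 0.05, 0.04, 0.01]  # hypothetical sampling variances
      weights = [1 / v for v in variances]
      pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
      se = (1 / sum(weights)) ** 0.5
      print(f"pooled effect: {pooled:.2f} (SE {se:.3f})")  # 0.42 (SE 0.072)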
