LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 245

  1. Article ; Online: Why Stating Hypotheses in Grant Applications Is Unnecessary.

    Hernán, Miguel A / Greenland, Sander

    JAMA

    2024  Volume 331, Issue 4, Page(s) 285–286

    MeSH term(s) Writing ; Financing, Organized/methods ; Financing, Organized/standards ; Research Support as Topic/methods ; Research Support as Topic/standards
    Language English
    Publishing date 2024-01-30
    Publishing country United States
    Document type Journal Article ; Research Support, N.I.H., Extramural
    ZDB-ID 2958-0
    ISSN 1538-3598 ; 0254-9077 ; 0002-9955 ; 0098-7484
    ISSN (online) 1538-3598
    ISSN 0254-9077 ; 0002-9955 ; 0098-7484
    DOI 10.1001/jama.2023.27163
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Book ; Online: Connecting Simple and Precise P-values to Complex and Ambiguous Realities

    Greenland, Sander

    2023  

    Abstract Mathematics is a limited component of solutions to real-world problems, as it expresses only what is expected to be true if all our assumptions are correct, including implicit assumptions that are omnipresent and often incorrect. Statistical methods are rife with implicit assumptions whose violation can be life-threatening when results from them are used to set policy. Among them are that there is human equipoise or unbiasedness in data generation, management, analysis, and reporting. These assumptions correspond to levels of cooperation, competence, neutrality, and integrity that are absent more often than we would like to believe. Given this harsh reality, we should ask what meaning, if any, we can assign to the P-values, 'statistical significance' declarations, 'confidence' intervals, and posterior probabilities that are used to decide what and how to present (or spin) discussions of analyzed data. By themselves, P-values and CI do not test any hypothesis, nor do they measure the significance of results or the confidence we should have in them. The sense otherwise is an ongoing cultural error perpetuated by large segments of the statistical and research community via misleading terminology. So-called 'inferential' statistics can only become contextually interpretable when derived explicitly from causal stories about the real data generator (such as randomization), and can only become reliable when those stories are based on valid and public documentation of the physical mechanisms that generated the data. Absent these assurances, traditional interpretations of statistical results become pernicious fictions that need to be replaced by far more circumspect descriptions of data and model relations.

    Comment: 26 pages. Appears with comments in Scandinavian Journal of Statistics 2023, issue 3. Main article: Greenland, S. (2023). Divergence vs. decision P-values: A distinction worth making in theory and keeping in practice. Scandinavian Journal of Statistics, 50, 1-35, corrected version at arXiv:2301.02478
    Keywords Statistics - Methodology ; Quantitative Biology - Quantitative Methods ; 62A01 ; 62B05 ; 62P10 ; 62R30 ; 92B15 ; 97K40 ; 97K70 ; 97K80 ; E.4 ; F.4 ; G.3
    Subject code 310
    Publishing date 2023-04-03
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  3. Book ; Online: Divergence vs. Decision P-values

    Greenland, Sander

    A Distinction Worth Making in Theory and Keeping in Practice

    2023  

    Abstract There are two distinct definitions of 'P-value' for evaluating a proposed hypothesis or model for the process generating an observed dataset. The original definition starts with a measure of the divergence of the dataset from what was expected under the model, such as a sum of squares or a deviance statistic. A P-value is then the ordinal location of the measure in a reference distribution computed from the model and the data, and is treated as a unit-scaled index of compatibility between the data and the model. In the other definition, a P-value is a random variable on the unit interval whose realizations can be compared to a cutoff alpha to generate a decision rule with known error rates under the model and specific alternatives. It is commonly assumed that realizations of such decision P-values always correspond to divergence P-values. But this need not be so: Decision P-values can violate intuitive single-sample coherence criteria where divergence P-values do not. It is thus argued that divergence and decision P-values should be carefully distinguished in teaching, and that divergence P-values are the relevant choice when the analysis goal is to summarize evidence rather than implement a decision rule.

    Comment: 49 pages. Scandinavian Journal of Statistics 2023, issue 1, with discussion and rejoinder in issue 3
    Keywords Statistics - Other Statistics ; Mathematics - Statistics Theory ; Quantitative Biology - Quantitative Methods ; 62A01 ; 62B05 ; 62P10 ; 62R30 ; 92B15 ; 97K40 ; 97K70 ; 97K80 ; E.4 ; F.4 ; G.3
    Subject code 310
    Publishing date 2023-01-06
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
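
    The abstract above defines a divergence P-value as the ordinal location of a divergence statistic within a reference distribution computed from the model and the data. A minimal Python sketch of that reading, using a hypothetical binomial example and a simulated reference distribution (all numbers are illustrative assumptions, not taken from the article):

    ```python
    # Hypothetical example: how far do 61 successes in 100 trials diverge from
    # a model that says each trial succeeds with probability 0.5?
    import random

    def divergence(k, n, p0):
        """Squared standardized divergence of k successes in n trials from p0."""
        expected = n * p0
        return (k - expected) ** 2 / (n * p0 * (1 - p0))

    n, p0 = 100, 0.5        # assumed model
    k_observed = 61         # assumed observation
    d_observed = divergence(k_observed, n, p0)

    # Reference distribution of the divergence measure, computed from the model.
    random.seed(1)
    reps = 10_000
    d_reference = [
        divergence(sum(random.random() < p0 for _ in range(n)), n, p0)
        for _ in range(reps)
    ]

    # The divergence P-value is the ordinal location of the observed divergence
    # in the reference distribution: the fraction of model-generated datasets at
    # least as divergent as the one observed, a compatibility index on [0, 1].
    p_value = sum(d >= d_observed for d in d_reference) / reps
    print(f"divergence P-value ~ {p_value:.3f}")
    ```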

  4. Article ; Online: Noncollapsibility, confounding, and sparse-data bias. Part 2: What should researchers make of persistent controversies about the odds ratio?

    Greenland, Sander

    Journal of clinical epidemiology

    2021  Volume 139, Page(s) 264–268

    Abstract A previous note illustrated how the odds of an outcome have an undesirable property for risk summarization and communication: Noncollapsibility, defined as a failure of a group measure to represent a simple average of the measure over individuals or subgroups. The present sequel discusses how odds ratios amplify odds noncollapsibility and provides a basic numeric illustration of how noncollapsibility differs from confounding of effects (with which it is often confused). It also draws a connection of noncollapsibility to sparse-data bias in logistic, log-linear, and proportional-hazards regression.
    MeSH term(s) Biomedical Research/standards ; Biomedical Research/statistics & numerical data ; Confounding Factors, Epidemiologic ; Data Accuracy ; Humans ; Logistic Models ; Odds Ratio ; Publication Bias/statistics & numerical data ; Research Design/standards ; Research Design/statistics & numerical data ; Research Personnel/psychology
    Language English
    Publishing date 2021-06-11
    Publishing country United States
    Document type Journal Article
    ZDB-ID 639306-8
    ISSN 1878-5921 ; 0895-4356
    ISSN (online) 1878-5921
    ISSN 0895-4356
    DOI 10.1016/j.jclinepi.2021.06.004
    Database MEDical Literature Analysis and Retrieval System OnLINE
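
    The abstract above distinguishes noncollapsibility of the odds ratio from confounding. A minimal Python sketch (hypothetical risks, not the article's illustration) in which a covariate Z is balanced across treatment arms, so there is no confounding, yet the marginal odds ratio falls below the common stratum-specific odds ratio:

    ```python
    # Hypothetical risks of the outcome by treatment arm within two equal-sized
    # strata of a prognostic covariate Z that is balanced across arms.
    def odds(risk):
        return risk / (1.0 - risk)

    risk_treated = {"Z=1": 0.8, "Z=0": 0.5}
    risk_control = {"Z=1": 0.5, "Z=0": 0.2}

    # Stratum-specific odds ratios: the same value (4.0) in both strata.
    for z in ("Z=1", "Z=0"):
        stratum_or = odds(risk_treated[z]) / odds(risk_control[z])
        print(f"odds ratio within stratum {z}: {stratum_or:.2f}")

    # Marginal risks, averaging over the equally sized strata, and the
    # marginal (collapsed) odds ratio.
    marginal_treated = sum(risk_treated.values()) / 2    # 0.65
    marginal_control = sum(risk_control.values()) / 2    # 0.35
    marginal_or = odds(marginal_treated) / odds(marginal_control)
    print(f"marginal odds ratio: {marginal_or:.2f}")     # about 3.45, not 4.0

    # The gap appears even though Z is not a confounder (it is balanced across
    # arms), which is the noncollapsibility-versus-confounding point of Part 2.
    ```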

  5. Article ; Online: Noncollapsibility, confounding, and sparse-data bias. Part 1: The oddities of odds.

    Greenland, Sander

    Journal of clinical epidemiology

    2021  Volume 138, Page(s) 178–181

    Abstract To prevent statistical misinterpretations, it has long been advised to focus on estimation instead of statistical testing. This sound advice brings with it the need to choose the outcome and effect measures on which to focus. Measures based on odds or their logarithms have often been promoted due to their pleasing statistical properties, but have an undesirable property for risk summarization and communication: Noncollapsibility, defined as a failure of the measure when taken on a group to equal a simple average of the measure when taken on the group's members or subgroups. The present note illustrates this problem with a basic numeric example involving the odds, which is not collapsible when the odds vary across individuals and are not low in all subgroups. Its sequel will illustrate how this problem is amplified in odds ratios and logistic regression.
    MeSH term(s) Biomedical Research/statistics & numerical data ; Confounding Factors, Epidemiologic ; Data Accuracy ; Humans ; Logistic Models ; Odds Ratio ; Publication Bias/statistics & numerical data ; Research Design/statistics & numerical data
    Language English
    Publishing date 2021-06-11
    Publishing country United States
    Document type Journal Article
    ZDB-ID 639306-8
    ISSN 1878-5921 ; 0895-4356
    ISSN (online) 1878-5921
    ISSN 0895-4356
    DOI 10.1016/j.jclinepi.2021.06.007
    Database MEDical Literature Analysis and Retrieval System OnLINE
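
    The abstract above describes noncollapsibility of the odds: the odds taken on a group need not equal the simple average of the odds taken on its subgroups. A minimal numeric sketch in Python (the risks are illustrative assumptions, not the article's example):

    ```python
    # Two equal-sized subgroups with different outcome risks (hypothetical values).
    def odds(risk):
        """Convert a risk (probability) into odds."""
        return risk / (1.0 - risk)

    risk_a, risk_b = 0.5, 0.8

    # Risk is collapsible: the group risk equals the average of the subgroup risks.
    group_risk = (risk_a + risk_b) / 2                      # 0.65

    # Odds are not: the group odds differ from the average of the subgroup odds.
    avg_subgroup_odds = (odds(risk_a) + odds(risk_b)) / 2   # (1.0 + 4.0) / 2 = 2.5
    group_odds = odds(group_risk)                           # 0.65 / 0.35 = 1.857...

    print(f"average of subgroup odds: {avg_subgroup_odds:.3f}")
    print(f"odds of the combined group: {group_odds:.3f}")
    # The two disagree even with equal-sized subgroups, because the odds vary
    # across subgroups and are not low in both, as the abstract notes.
    ```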

  6. Article ; Online: Invited Commentary: Dealing With the Inevitable Deficiencies of Bias Analysis-and All Analyses.

    Greenland, Sander

    American journal of epidemiology

    2021  Volume 190, Issue 8, Page(s) 1617–1621

    Abstract Lash et al. (Am J Epidemiol. 2021;190(8):1604-1612) have presented detailed critiques of 3 bias analyses that they identify as "suboptimal." This identification raises the question of what "optimal" means for bias analysis, because it is practically impossible to do statistically optimal analyses of typical population studies-with or without bias analysis. At best the analysis can only attempt to satisfy practice guidelines and account for available information both within and outside the study. One should not expect a full accounting for all sources of uncertainty; hence, interval estimates and distributions for causal effects should never be treated as valid uncertainty assessments-they are instead only example analyses that follow from collections of often questionable assumptions. These observations reinforce those of Lash et al. and point to the need for more development of methods for judging bias-parameter distributions and utilization of available information.
    MeSH term(s) Bias ; Causality ; Humans ; Research Design
    Language English
    Publishing date 2021-03-20
    Publishing country United States
    Document type Journal Article ; Comment
    ZDB-ID 2937-3
    ISSN 1476-6256 ; 0002-9262
    ISSN (online) 1476-6256
    ISSN 0002-9262
    DOI 10.1093/aje/kwab069
    Database MEDical Literature Analysis and Retrieval System OnLINE

  7. Article ; Online: Commentary: An argument against E-values for assessing the plausibility that an association could be explained away by residual confounding.

    Greenland, Sander

    International journal of epidemiology

    2020  Volume 49, Issue 5, Page(s) 1501–1503

    MeSH term(s) Dissent and Disputes ; Humans
    Language English
    Publishing date 2020-08-17
    Publishing country England
    Document type Journal Article ; Comment
    ZDB-ID 187909-1
    ISSN 1464-3685 ; 0300-5771
    ISSN (online) 1464-3685
    ISSN 0300-5771
    DOI 10.1093/ije/dyaa095
    Database MEDical Literature Analysis and Retrieval System OnLINE

  8. Article ; Online: Analysis goals, error-cost sensitivity, and analysis hacking: Essential considerations in hypothesis testing and multiple comparisons.

    Greenland, Sander

    Paediatric and perinatal epidemiology

    2020  Volume 35, Issue 1, Page(s) 8–23

    Abstract: The "replication crisis" has been attributed to perverse incentives that lead to selective reporting and misinterpretations of P-values and confidence intervals. A crude fix offered for this problem is to lower testing cut-offs (α levels), either ... ...

    Abstract The "replication crisis" has been attributed to perverse incentives that lead to selective reporting and misinterpretations of P-values and confidence intervals. A crude fix offered for this problem is to lower testing cut-offs (α levels), either directly or in the form of null-biased multiple comparisons procedures such as naïve Bonferroni adjustments. Methodologists and statisticians have expressed positions that range from condemning all such procedures to demanding their application in almost all analyses. Navigating between these unjustifiable extremes requires defining analysis goals precisely enough to separate inappropriate from appropriate adjustments. To meet this need, I here review issues arising in single-parameter inference (such as error costs and loss functions) that are often skipped in basic statistics, yet are crucial to understanding controversies in testing and multiple comparisons. I also review considerations that should be made when examining arguments for and against modifications of decision cut-offs and adjustments for multiple comparisons. The goal is to provide researchers a better understanding of what is assumed by each side and to enable recognition of hidden assumptions. Basic issues of goal specification and error costs are illustrated with simple fixed cut-off hypothesis testing scenarios. These illustrations show how adjustment choices are extremely sensitive to implicit decision costs, making it inevitable that different stakeholders will vehemently disagree about what is necessary or appropriate. Because decisions cannot be justified without explicit costs, resolution of inference controversies is impossible without recognising this sensitivity. Pre-analysis statements of funding, scientific goals, and analysis plans can help counter demands for inappropriate adjustments, and can provide guidance as to what adjustments are advisable. Hierarchical (multilevel) regression methods (including Bayesian, semi-Bayes, and empirical-Bayes methods) provide preferable alternatives to conventional adjustments, insofar as they facilitate use of background information in the analysis model, and thus can provide better-informed estimates on which to base inferences and decisions.
    MeSH term(s) Bayes Theorem ; Goals ; Humans ; Research Design
    Language English
    Publishing date 2020-12-02
    Publishing country England
    Document type Journal Article
    ZDB-ID 639089-4
    ISSN 1365-3016 ; 0269-5022 ; 1353-663X
    ISSN (online) 1365-3016
    ISSN 0269-5022 ; 1353-663X
    DOI 10.1111/ppe.12711
    Database MEDical Literature Analysis and Retrieval System OnLINE
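
    The abstract above argues that whether to lower testing cut-offs (for example with a naive Bonferroni adjustment) cannot be settled without explicit error costs. A minimal Python sketch of that sensitivity, in which the costs, prior probability, and power values are hypothetical assumptions rather than quantities from the article:

    ```python
    # A single comparison from a family of m = 20 tests (hypothetical numbers).
    alpha = 0.05
    m = 20
    p_observed = 0.01

    bonferroni_alpha = alpha / m   # naive Bonferroni cutoff = 0.0025
    print("reject at unadjusted cutoff:", p_observed <= alpha)             # True
    print("reject at Bonferroni cutoff:", p_observed <= bonferroni_alpha)  # False

    # A crude expected-loss comparison per comparison, under assumed error costs,
    # an assumed prior probability that the effect is real, and assumed power of
    # each rule against that effect.
    cost_false_positive = 1.0
    cost_false_negative = 10.0
    prior_real = 0.10
    power = {"unadjusted": 0.80, "bonferroni": 0.50}
    cutoff = {"unadjusted": alpha, "bonferroni": bonferroni_alpha}

    for rule in ("unadjusted", "bonferroni"):
        expected_loss = (
            (1 - prior_real) * cutoff[rule] * cost_false_positive
            + prior_real * (1 - power[rule]) * cost_false_negative
        )
        print(f"{rule:11s} expected loss per comparison: {expected_loss:.3f}")

    # With these costs the unadjusted rule has the lower expected loss; swap the
    # two cost values and the ranking reverses. The cutoff choice is driven by
    # the (usually implicit) costs, which is the sensitivity the abstract
    # highlights.
    ```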

  9. Article ; Online: Causal Directed Acyclic Graphs.

    Lipsky, Ari M / Greenland, Sander

    JAMA

    2022  Volume 327, Issue 11, Page(s) 1083–1084

    MeSH term(s) Causality ; Confounding Factors, Epidemiologic
    Language English
    Publishing date 2022-03-16
    Publishing country United States
    Document type Journal Article ; Comment
    ZDB-ID 2958-0
    ISSN 1538-3598 ; 0254-9077 ; 0002-9955 ; 0098-7484
    ISSN (online) 1538-3598
    ISSN 0254-9077 ; 0002-9955 ; 0098-7484
    DOI 10.1001/jama.2022.1816
    Database MEDical Literature Analysis and Retrieval System OnLINE

  10. Article ; Online: Rewriting results in the language of compatibility.

    Amrhein, Valentin / Greenland, Sander

    Trends in ecology & evolution

    2022  Volume 37, Issue 7, Page(s) 567–568

    Language English
    Publishing date 2022-02-25
    Publishing country England
    Document type Letter
    ZDB-ID 284965-3
    ISSN 1872-8383 ; 0169-5347
    ISSN (online) 1872-8383
    ISSN 0169-5347
    DOI 10.1016/j.tree.2022.02.001
    Database MEDical Literature Analysis and Retrieval System OnLINE
