LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 138

  1. Article ; Online: Norman Henry Anderson (1925-2022).

    Wixted, John T

    The American psychologist

    2023  Volume 79, Issue 1, Page(s) 153

    Abstract This article memorializes Norman Henry Anderson (1925-2022), best known for his information integration theory (IIT). Norman Anderson's work was influential in its time, and his legacy endures. He was the recipient of the 1972 American Association for the Advancement of Science Prize for Behavioral Science Research, and, as a tribute to his work, scholars in the field established a conference that continues to this day: the International Information Integration Theory/Functional Measurement Conference. Highlights of Anderson's career and professional contributions are noted. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
    MeSH term(s) Awards and Prizes ; Behavioral Research
    Language English
    Publishing date 2023-11-06
    Publishing country United States
    Document type Journal Article
    ZDB-ID 209464-2
    ISSN (online) 1935-990X
    ISSN 0003-066X
    DOI 10.1037/amp0001227
    Database MEDLINE (MEDical Literature Analysis and Retrieval System OnLINE)

  2. Article ; Online: The enigma of forgetting.

    Wixted, John T

    Proceedings of the National Academy of Sciences of the United States of America

    2022  Volume 119, Issue 12, Page(s) e2201332119

    MeSH term(s) Cues ; Mental Recall
    Language English
    Publishing date 2022-03-15
    Publishing country United States
    Document type Journal Article ; Comment
    ZDB-ID 209104-5
    ISSN (online) 1091-6490
    ISSN 0027-8424
    DOI 10.1073/pnas.2201332119
    Database MEDLINE

  3. Article ; Online: Absolute versus relative forgetting.

    Wixted, John T

    Journal of experimental psychology. Learning, memory, and cognition

    2022  

    Abstract Slamecka and McElree (1983) and Rivera-Lares et al. (2022), like others before them, factorially manipulated the number of learning trials and the retention interval. The results revealed two unsurprising main effects: (a) the more study trials, the higher the initial degree of learning, and (b) the longer the retention interval, the more items were forgotten. However, across many experiments, the interaction was not significant, a finding that is often interpreted to mean that the degree of learning is independent of the absolute rate of forgetting (i.e., the absolute number of items forgotten per unit time). Yet there is considerable tension between that interpretation and the fact that forgetting has long been characterized by a power law, according to which the absolute rate of forgetting is not a particularly meaningful measure. When the power function is fit to the same data, the results show that a higher degree of learning results in a lower relative (i.e., proportional) rate of forgetting. This raises an interesting question: which of the two definitions of "forgetting rate" (absolute vs. relative) is theoretically relevant? Here, I make the case that it is the relative rate of forgetting. Theoretically, the explanation of why a higher degree of learning is associated with a lower relative rate of forgetting may be related to why, as observed by Jost (1897) long ago, the passage of time itself is associated with a lower relative rate of forgetting. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
    Language English
    Publishing date 2022-11-10
    Publishing country United States
    Document type Journal Article
    ZDB-ID 627313-0
    ISSN (online) 1939-1285
    ISSN 0278-7393
    DOI 10.1037/xlm0001196
    Database MEDLINE
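The power-law argument in this abstract can be sketched numerically. The retention data below are invented for illustration (they are not from Slamecka and McElree, 1983, or Rivera-Lares et al., 2022); the fit shows how a higher degree of learning can produce *more* absolute forgetting yet a *lower* relative forgetting rate, i.e., a smaller exponent b in R(t) = a·t^(−b).

```python
import math

# Hypothetical retention data (proportion recalled) at retention intervals t.
# All numbers are invented for illustration.
t = [1, 2, 4, 8, 16]
low_learning = [0.50, 0.40, 0.32, 0.26, 0.21]    # few study trials
high_learning = [0.90, 0.76, 0.64, 0.54, 0.46]   # many study trials

def fit_power(ts, rs):
    """Fit R(t) = a * t**(-b) by least squares in log-log space."""
    xs = [math.log(v) for v in ts]
    ys = [math.log(v) for v in rs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope  # a, b

for label, rs in (("low", low_learning), ("high", high_learning)):
    a, b = fit_power(t, rs)
    print(f"{label:>4} learning: absolute loss = {rs[0] - rs[-1]:.2f}, "
          f"relative rate b = {b:.2f}")
```

With these numbers the high-learning condition loses more items in absolute terms (0.44 vs. 0.29 over 16 time units), yet its fitted exponent b is smaller, which is the relative-rate pattern the abstract describes.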

  4. Article ; Online: Eyewitness memory is reliable, but the criminal justice system is not.

    Wixted, John T / Mickes, Laura

    Memory (Hove, England)

    2022  Volume 30, Issue 1, Page(s) 67–72

    Abstract The reliability of any type of forensic evidence (e.g., forensic DNA) is assessed by testing its information value when it is not contaminated and is properly tested. Assessing the reliability of forensic ...
    MeSH term(s) Criminal Law/methods ; Humans ; Memory ; Mental Recall ; Reproducibility of Results
    Language English
    Publishing date 2022-03-24
    Publishing country England
    Document type Journal Article
    ZDB-ID 1147478-6
    ISSN (online) 1464-0686
    ISSN 0965-8211
    DOI 10.1080/09658211.2021.1974485
    Database MEDLINE

  5. Article ; Online: The effects of filler similarity and lineup size on eyewitness identification.

    Shen, Kyros J / Huang, Jiaqi / Lam, Allan L / Wixted, John T

    Journal of experimental psychology. Learning, memory, and cognition

    2024  

    Abstract A photo lineup, which is a cross between an old/new and a forced-choice recognition memory test, consists of one suspect, whose face was either seen before or not, and several physically similar fillers. First, the participant/witness must decide whether the person who was previously seen is present (old/new) and then, if present, choose the previously seen target (forced choice). Competing signal-detection models of eyewitness identification performance make different predictions about how certain variables will affect a witness's ability to discriminate previously seen (guilty) suspects from new (innocent) suspects. One key variable is the similarity of the fillers to the suspect in the lineup, and another key variable is the size of the lineup (i.e., the number of fillers). Previous research investigating the role of filler similarity has supported one model, known as the Ensemble model, whereas previous research investigating the role of lineup size has supported a competing model, known as the Independent Observations model. We simultaneously manipulated these two variables (filler similarity and lineup size) and found a pattern that is not predicted by either model. When the fillers were highly similar to the suspect, increasing lineup size reduced discriminability, but when the fillers were dissimilar to the suspect, increasing lineup size enhanced discriminability. The results suggest that each additional filler adds noise to the decision-making process and that this noise factor is minimized by maximizing filler dissimilarity. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
    Language English
    Publishing date 2024-04-04
    Publishing country United States
    Document type Journal Article
    ZDB-ID 627313-0
    ISSN (online) 1939-1285
    ISSN 0278-7393
    DOI 10.1037/xlm0001342
    Database MEDLINE

  6. Article ; Online: The forgotten history of signal detection theory.

    Wixted, John T

    Journal of experimental psychology. Learning, memory, and cognition

    2019  Volume 46, Issue 2, Page(s) 201–233

    Abstract Signal detection theory is one of psychology's most well-known and influential theoretical frameworks. However, the conceptual hurdles that had to be overcome before the theory could finally emerge in its modern form in the early 1950s seem to have been largely forgotten. Here, I trace the origins of signal detection theory, beginning with Fechner's (1860/1966) ...
    MeSH term(s) Consciousness ; History, 19th Century ; History, 20th Century ; Humans ; Psychological Theory ; Psychophysics/history ; ROC Curve ; Sensation ; Signal Detection, Psychological
    Language English
    Publishing date 2019-06-27
    Publishing country United States
    Document type Historical Article ; Journal Article
    ZDB-ID 627313-0
    ISSN (online) 1939-1285
    ISSN 0278-7393
    DOI 10.1037/xlm0000732
    Database MEDLINE

  7. Article ; Online: Time to exonerate eyewitness memory.

    Wixted, John T

    Forensic science international

    2018  Volume 292, Page(s) e13–e15

    Abstract Understandably enough, most people are under the impression that eyewitness memory is unreliable. For example, research shows that memory is malleable, so much so that people can come to confidently remember traumatic events that never actually happened. In addition, eyewitness misidentifications made with high confidence in a court of law are known to have played a role in more than 70% of the 358 wrongful convictions that have been overturned based on DNA evidence since 1989. However, recent research demonstrates that eyewitness confidence is highly indicative of accuracy on an initial, uncontaminated, properly administered photo lineup. In other words, low confidence indicates that the test result (i.e., the ID) is inconclusive, whereas high confidence indicates that the test result is far more conclusive. Critically, for the DNA exonerees who were misidentified by an eyewitness in a court of law, in every case where their initial confidence can be determined, the eyewitness appropriately expressed low confidence. For any other kind of evidence (e.g., DNA, fingerprints), an inconclusive test result like that would have been the end of it. By contrast, in the case of eyewitness evidence, investigators repeatedly tested (and therefore unwittingly contaminated) memory until a seemingly conclusive high-confidence ID could be presented to the jury. Blaming eyewitness memory for the failure of the criminal justice system to accept the inconclusive nature of the initial (uncontaminated) eyewitness evidence seems misguided. In addition to exonerating the innocent defendants who were wrongfully convicted, the time has come to exonerate eyewitness memory too.
    MeSH term(s) Criminal Law ; DNA Fingerprinting/legislation & jurisprudence ; Humans ; Mental Recall
    Language English
    Publishing date 2018-08-29
    Publishing country Ireland
    Document type Journal Article
    ZDB-ID 424042-x
    ISSN (online) 1872-6283
    ISSN 0379-0738
    DOI 10.1016/j.forsciint.2018.08.018
    Database MEDLINE

  8. Article ; Online: Theoretical false positive psychology.

    Wilson, Brent M / Harris, Christine R / Wixted, John T

    Psychonomic bulletin & review

    2022  Volume 29, Issue 5, Page(s) 1751–1775

    Abstract A fundamental goal of scientific research is to generate true positives (i.e., authentic discoveries). Statistically, a true positive is a significant finding for which the underlying effect size (δ) is greater than 0, whereas a false positive is a significant finding for which δ equals 0. However, the null hypothesis of no difference (δ = 0) may never be strictly true because innumerable nuisance factors can introduce small effects for theoretically uninteresting reasons. If δ never equals zero, then with sufficient power, every experiment would yield a significant result. Yet running studies with higher power by increasing sample size (N) is one of the most widely agreed upon reforms to increase replicability. Moreover, and perhaps not surprisingly, the idea that psychology should attach greater value to small effect sizes is gaining currency. Increasing N without limit makes sense for purely measurement-focused research, where the magnitude of δ itself is of interest, but it makes less sense for theory-focused research, where the truth status of the theory under investigation is of interest. Increasing power to enhance replicability will increase true positives at the level of the effect size (statistical true positives) while increasing false positives at the level of theory (theoretical false positives). With too much power, the cumulative foundation of psychological science would consist largely of nuisance effects masquerading as theoretically important discoveries. Positive predictive value at the level of theory is maximized by using an optimal N, one that is neither too small nor too large.
    MeSH term(s) Humans ; Sample Size
    Language English
    Publishing date 2022-05-02
    Publishing country United States
    Document type Journal Article ; Review
    ZDB-ID 2031311-1
    ISSN (online) 1531-5320
    ISSN 1069-9384
    DOI 10.3758/s13423-022-02098-w
    Database MEDLINE
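The power argument in this abstract is easy to reproduce. The sketch below uses the standard normal approximation to the power of a two-sided one-sample z-test; the effect size δ = 0.02 is an arbitrary stand-in for a tiny "nuisance" effect. With N large enough, even a theoretically meaningless δ is detected almost every time.

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sided(delta, n, z_crit=1.959964):
    """Approximate power of a two-sided one-sample z-test at alpha = .05
    for standardized effect size delta and sample size n."""
    shift = delta * math.sqrt(n)
    return norm_cdf(-z_crit + shift) + norm_cdf(-z_crit - shift)

# delta = 0.02 stands in for a theoretically uninteresting nuisance effect:
# the null is false, but only trivially so. Power still approaches 1.
for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9,}: power = {power_two_sided(0.02, n):.3f}")
```

At N = 100 the significance rate is barely above the nominal 5%, but by N = 1,000,000 essentially every experiment on this nuisance effect would be "significant", illustrating the abstract's point about theoretical false positives.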

  9. Article ; Online: Discrete-state versus continuous models of the confidence-accuracy relationship in recognition memory.

    Delay, Christophe G / Wixted, John T

    Psychonomic bulletin & review

    2020  Volume 28, Issue 2, Page(s) 556–564

    Abstract The relationship between confidence and accuracy in recognition memory is important in real-world settings (e.g., eyewitness identification) and is also important to understand at a theoretical level. Signal detection theory assumes that recognition decisions are based on continuous underlying memory signals and therefore inherently predicts that the relationship between confidence and accuracy will be continuous. Almost invariably, the empirical data accord with this prediction. Threshold models instead assume that recognition decisions are based on discrete-state memory signals. As a result, these models do not inherently predict a continuous confidence-accuracy relationship. However, they can accommodate that result by adding hypothetical mapping relationships between discrete states and the confidence rating scale. These mapping relationships are thought to arise from a variety of factors, including demand characteristics (e.g., instructing participants to distribute their responses across the confidence scale). However, until such possibilities are experimentally investigated in the context of a recognition memory experiment, there is no sense in which threshold models adequately explain confidence ratings at a theoretical level. Here, we tested whether demand characteristics might account for the mapping relationships required by threshold models and found that confidence was continuously related to accuracy (almost identically so) both in the presence of strong experimenter demands and in their absence. We conclude that confidence ratings likely reflect the strength of a continuous underlying memory signal, not an attempt to use the confidence scale in a manner that accords with the perceived expectations of the experimenter.
    MeSH term(s) Adult ; Female ; Humans ; Male ; Metacognition/physiology ; Models, Psychological ; Recognition, Psychology/physiology ; Task Performance and Analysis ; Young Adult
    Language English
    Publishing date 2020-10-27
    Publishing country United States
    Document type Journal Article
    ZDB-ID 2031311-1
    ISSN (online) 1531-5320
    ISSN 1069-9384
    DOI 10.3758/s13423-020-01831-7
    Database MEDLINE
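The continuous confidence-accuracy relationship predicted by signal detection theory can be illustrated with a small simulation. This is a generic equal-variance Gaussian model with invented parameters (d' = 1.5 and arbitrary confidence criteria), not the specific model fit in the article:

```python
import random

random.seed(1)

# Equal-variance Gaussian signal detection model of old/new recognition:
# new-item strength ~ N(0, 1), old-item strength ~ N(d, 1). Confidence is
# the strength binned by fixed criteria (6 levels: 1 = sure new, 6 = sure old).
d = 1.5
criteria = [-0.5, 0.25, 1.0, 1.75, 2.5]

def confidence(strength):
    return 1 + sum(strength > c for c in criteria)

trials = []
for _ in range(20_000):
    old = random.random() < 0.5
    strength = random.gauss(d if old else 0.0, 1.0)
    trials.append((confidence(strength), old))

# Accuracy of "old" responses (levels 4-6) rises continuously with confidence.
for level in (4, 5, 6):
    responses = [old for conf, old in trials if conf == level]
    print(level, round(sum(responses) / len(responses), 2))
```

Because decisions are based on a continuous strength signal, accuracy climbs smoothly across confidence levels with no discrete jump, which is the continuous pattern the abstract reports in the empirical data.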

  10. Article ; Online: Measuring memory is harder than you think: How to avoid problematic measurement practices in memory research.

    Brady, Timothy F / Robinson, Maria M / Williams, Jamal R / Wixted, John T

    Psychonomic bulletin & review

    2022  Volume 30, Issue 2, Page(s) 421–449

    Abstract We argue that critical areas of memory research rely on problematic measurement practices and provide concrete suggestions to improve the situation. In particular, we highlight the prevalence of memory studies that use tasks (like the "old/new" task: "have you seen this item before? yes/no") where quantifying performance is deeply dependent on counterfactual reasoning that depends on the (unknowable) distribution of underlying memory signals. As a result of this difficulty, different literatures in memory research (e.g., visual working memory, eyewitness identification, picture memory, etc.) have settled on a variety of fundamentally different metrics to get performance measures from such tasks (e.g., A', corrected hit rate, percent correct, d', diagnosticity ratios, K values, etc.), even though these metrics make different, contradictory assumptions about the distribution of latent memory signals, and even though all of their assumptions are frequently incorrect. We suggest that in order for the psychology and neuroscience of memory to become a more cumulative, theory-driven science, more attention must be given to measurement issues. We make a concrete suggestion: The default memory task for those simply interested in performance should change from old/new ("did you see this item?") to two-alternative forced-choice ("which of these two items did you see?"). In situations where old/new variants are preferred (e.g., eyewitness identification; theoretical investigations of the nature of memory signals), receiver operating characteristic (ROC) analysis should be performed rather than a binary old/new task.
    MeSH term(s) Humans ; Memory, Short-Term ; ROC Curve
    Language English
    Publishing date 2022-10-19
    Publishing country United States
    Document type Journal Article ; Review
    ZDB-ID 2031311-1
    ISSN (online) 1531-5320
    ISSN 1069-9384
    DOI 10.3758/s13423-022-02179-w
    Database MEDLINE
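The measurement problem described in this abstract can be made concrete. Under the equal-variance Gaussian signal detection model, d' = z(H) − z(F). The hypothetical hit/false-alarm pairs below are chosen so that two observers have an identical corrected hit rate (H − F) but very different d', showing that the two metrics can disagree about whether memory performance differs at all.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Equal-variance Gaussian sensitivity: d' = z(H) - z(F)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical observers chosen so the corrected hit rate (H - F) is
# identical while d' is not: the two metrics tell different stories.
observers = {"A": (0.99, 0.59), "B": (0.70, 0.30)}
for name, (h, f) in observers.items():
    print(f"{name}: H - F = {h - f:.2f}, d' = {d_prime(h, f):.2f}")
```

Corrected hit rate calls the two observers equivalent (both 0.40), while d' says observer A is roughly twice as sensitive, precisely the kind of metric-dependent conclusion the abstract warns about.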
