LIVIVO - The Search Portal for Life Sciences


Search results

Results 1–10 of 36


  1. Article ; Online: Emotion is perceived accurately from isolated body parts, especially hands.

    Blythe, Ellen / Garrido, Lúcia / Longo, Matthew R

    Cognition

    2022  Volume 230, Page(s) 105260

    Abstract Body posture and configuration provide important visual cues about the emotion states of other people. We know that bodily form is processed holistically; however, emotion recognition may depend on different mechanisms: certain body parts, such as the hands, may be especially important for perceiving emotion. This study therefore compared participants' emotion recognition performance when shown images of full bodies, or of isolated hands, arms, heads and torsos. Across three experiments, emotion recognition accuracy was above chance for all body parts. While emotions were recognized most accurately from full bodies, recognition was more accurate from the hands than from other body parts. Representational similarity analysis further showed that the pattern of errors for the hands was related to that for full bodies. Performance was reduced when stimuli were inverted, showing a clear body inversion effect. The high performance for hands was not due only to the fact that there are two hands, as performance remained well above chance even when just one hand was shown. These results demonstrate that emotions can be decoded from body parts. Furthermore, certain features, such as the hands, are more important to emotion perception than others. STATEMENT OF RELEVANCE: Successful social interaction relies on accurately perceiving emotional information from others. Bodies provide an abundance of emotion cues; however, the way in which emotional bodies and body parts are perceived is unclear. We investigated this perceptual process by comparing emotion recognition for body parts with that for full bodies. Crucially, we found that while emotions were most accurately recognized from full bodies, emotions were also classified accurately when images of isolated hands, arms, heads and torsos were seen. Of the body parts shown, emotion recognition from the hands was most accurate. Furthermore, shared patterns of emotion classification for hands and full bodies suggested that emotion recognition mechanisms are shared for full bodies and body parts. That the hands are key to emotion perception is important evidence in its own right. It could also be applied to interventions for individuals who find it difficult to read emotions from faces and bodies.
    MeSH term(s) Humans ; Human Body ; Emotions ; Recognition, Psychology ; Cues ; Hand ; Facial Expression
    Language English
    Publishing date 2022-09-01
    Publishing country Netherlands
    Document type Journal Article
    ZDB-ID 1499940-7
    ISSN 1873-7838 ; 0010-0277
    ISSN (online) 1873-7838
    ISSN 0010-0277
    DOI 10.1016/j.cognition.2022.105260
    Database MEDical Literature Analysis and Retrieval System OnLINE
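
    The representational similarity result reported above (the pattern of errors for hands relating to that for full bodies) can be illustrated with a small sketch: build a confusion matrix per condition from trial-level responses and correlate the off-diagonal error patterns across conditions. The emotion set, accuracy levels, and toy data below are assumptions for illustration, not the authors' materials or code.

```python
import numpy as np
from scipy.stats import spearmanr

EMOTIONS = ["anger", "fear", "happiness", "sadness"]  # assumed label set, not the study's

def confusion_matrix(presented, responses, labels=EMOTIONS):
    """Rows = presented emotion, columns = chosen emotion; entries are proportions."""
    m = np.zeros((len(labels), len(labels)))
    for t, r in zip(presented, responses):
        m[labels.index(t), labels.index(r)] += 1
    return m / m.sum(axis=1, keepdims=True)

def error_pattern(conf):
    """Vectorise the off-diagonal confusions for comparison across conditions."""
    mask = ~np.eye(conf.shape[0], dtype=bool)
    return conf[mask]

# Toy trial data for two conditions (isolated hands vs. full bodies).
rng = np.random.default_rng(0)
presented = rng.choice(EMOTIONS, size=200)
resp_hands = np.where(rng.random(200) < 0.6, presented, rng.choice(EMOTIONS, size=200))
resp_bodies = np.where(rng.random(200) < 0.8, presented, rng.choice(EMOTIONS, size=200))

# Correlate the error patterns of the two conditions.
rho, p = spearmanr(error_pattern(confusion_matrix(presented, resp_hands)),
                   error_pattern(confusion_matrix(presented, resp_bodies)))
print(f"Spearman correlation of error patterns: rho = {rho:.2f}, p = {p:.3f}")
```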


  2. Article ; Online: The development of upright face perception depends on evolved orientation-specific mechanisms and experience.

    Duchaine, Brad / Rezlescu, Constantin / Garrido, Lúcia / Zhang, Yiyuan / Braga, Maira V / Susilo, Tirta

    iScience

    2023  Volume 26, Issue 10, Page(s) 107763

    Abstract Here we examine whether our impressive ability to perceive upright faces arises from evolved orientation-specific mechanisms, our extensive experience with upright faces, or both factors. To do so, we tested Claudio, a man with a congenital joint disorder causing his head to be rotated back so that it is positioned between his shoulder blades. As a result, Claudio has seen more faces reversed in orientation to his own face than matched to it. Controls exhibited large inversion effects on all tasks, but Claudio performed similarly with upright and inverted faces in both detection and identity-matching tasks, indicating these abilities are the product of evolved mechanisms and experience. In contrast, he showed clear upright superiority when detecting "Thatcherized" faces (faces with vertically flipped features), suggesting experience plays a greater role in this judgment. Together, these findings indicate that both evolved orientation-specific mechanisms and experience contribute to our proficiency with upright faces.
    Language English
    Publishing date 2023-09-22
    Publishing country United States
    Document type Journal Article
    ISSN 2589-0042
    ISSN (online) 2589-0042
    DOI 10.1016/j.isci.2023.107763
    Database MEDical Literature Analysis and Retrieval System OnLINE


  3. Article ; Online: The role of stimulus-based cues and conceptual information in processing facial expressions of emotion.

    Murray, Thomas / O'Brien, Justin / Sagiv, Noam / Garrido, Lúcia

    Cortex; a journal devoted to the study of the nervous system and behavior

    2021  Volume 144, Page(s) 109–132

    Abstract Face shape and surface textures are two important cues that aid in the perception of facial expressions of emotion. This perception is also influenced by high-level emotion concepts. Across two studies, we use representational similarity analysis to investigate the relative roles of shape, surface, and conceptual information in the perception, categorisation, and neural representation of facial expressions. In Study 1, 50 participants completed a perceptual task designed to measure the perceptual similarity of expression pairs, and a categorical task designed to measure the confusability between expression pairs when assigning emotion labels to a face. We used representational similarity analysis and constructed three models of the similarities between emotions, each based on distinct information. Two models were based on stimulus-based cues (face shapes and surface textures) and one model was based on emotion concepts. Using multiple linear regression, we found that behaviour during both tasks was related to the similarity of emotion concepts. The model based on face shapes was more strongly related to behaviour in the perceptual task than in the categorical task, and the model based on surface textures was more strongly related to behaviour in the categorical task than in the perceptual task. In Study 2, 30 participants viewed facial expressions while undergoing fMRI, allowing the measurement of brain representational geometries of facial expressions of emotion in three core face-responsive regions (the Fusiform Face Area, Occipital Face Area, and Superior Temporal Sulcus) and a region involved in theory of mind (Medial Prefrontal Cortex). Across all four regions, the representational distances between facial expression pairs were related to the similarities of emotion concepts, but not to either of the stimulus-based cues. Together, these results highlight the important top-down influence of high-level emotion concepts both in behavioural tasks and in the neural representation of facial expressions.
    MeSH term(s) Brain ; Brain Mapping ; Cues ; Emotions ; Facial Expression ; Humans ; Magnetic Resonance Imaging
    Language English
    Publishing date 2021-09-24
    Publishing country Italy
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 280622-8
    ISSN 1973-8102 ; 0010-9452
    ISSN (online) 1973-8102
    ISSN 0010-9452
    DOI 10.1016/j.cortex.2021.08.007
    Database MEDical Literature Analysis and Retrieval System OnLINE
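
    The multiple-regression step described in Study 1 above can be sketched as follows: vectorise the behavioural dissimilarity matrix and the candidate model RDMs, then regress behaviour on the models. The RDMs below are random placeholders standing in for the shape, surface-texture, and emotion-concept models; this is an illustrative sketch, not the authors' analysis code.

```python
import numpy as np

def upper_tri(rdm):
    """Vectorise the upper triangle of a representational dissimilarity matrix (RDM)."""
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

def zscore(x):
    return (x - x.mean()) / x.std()

def random_rdm(n, rng):
    """Placeholder symmetric dissimilarity matrix with a zero diagonal."""
    m = rng.random((n, n))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0)
    return m

rng = np.random.default_rng(1)
n_expressions = 6

# Placeholder behavioural RDM (e.g. confusability between expression pairs)
# and three candidate model RDMs.
behaviour = upper_tri(random_rdm(n_expressions, rng))
models = {name: upper_tri(random_rdm(n_expressions, rng))
          for name in ("shape", "surface", "concepts")}

# Multiple linear regression: behavioural dissimilarities ~ intercept + model predictors.
X = np.column_stack([np.ones_like(behaviour)] + [zscore(m) for m in models.values()])
betas, *_ = np.linalg.lstsq(X, zscore(behaviour), rcond=None)
for name, beta in zip(models, betas[1:]):
    print(f"beta[{name}] = {beta:+.2f}")
```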


  4. Article ; Online: FFA and OFA Encode Distinct Types of Face Identity Information.

    Tsantani, Maria / Kriegeskorte, Nikolaus / Storrs, Katherine / Williams, Adrian Lloyd / McGettigan, Carolyn / Garrido, Lúcia

    The Journal of neuroscience : the official journal of the Society for Neuroscience

    2021  Volume 41, Issue 9, Page(s) 1952–1969

    Abstract Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in FFA were accounted for by differences in perceived similarity, Social Traits, Gender, and by the OpenFace network. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
    MeSH term(s) Brain/physiology ; Brain Mapping/methods ; Facial Recognition/physiology ; Female ; Humans ; Image Processing, Computer-Assisted ; Magnetic Resonance Imaging ; Male ; Models, Neurological
    Language English
    Publishing date 2021-01-15
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 604637-x
    ISSN 1529-2401 ; 0270-6474
    ISSN (online) 1529-2401
    ISSN 0270-6474
    DOI 10.1523/JNEUROSCI.1449-20.2020
    Database MEDical Literature Analysis and Retrieval System OnLINE
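
    As a hedged illustration of the model-comparison logic above, the sketch below builds one low-level image-computable candidate model (a pixel-wise dissimilarity RDM) and compares it with a placeholder neural RDM using a Spearman correlation over identity pairs. The images, the "neural" patterns, and the array sizes are assumptions, not the study's data or pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_identities, height, width = 12, 64, 64

# Placeholder stimuli: one grayscale frame per face identity.
images = rng.random((n_identities, height, width))

# Pixel-wise candidate RDM: Euclidean distance between flattened images.
pixel_rdm = squareform(pdist(images.reshape(n_identities, -1), metric="euclidean"))

# Placeholder "neural" RDM for a region of interest (in the study this would be
# built from fMRI response patterns in FFA, OFA, or pSTS).
neural_rdm = squareform(pdist(rng.random((n_identities, 100)), metric="correlation"))

# Compare the model and neural geometries over identity pairs.
pairs = np.triu_indices(n_identities, k=1)
rho, p = spearmanr(pixel_rdm[pairs], neural_rdm[pairs])
print(f"model-to-brain Spearman rho = {rho:.2f} (p = {p:.3f})")
```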


  5. Article ; Online: How many voices did you hear? Natural variability disrupts identity perception from unfamiliar voices.

    Lavan, Nadine / Burston, Luke F K / Garrido, Lúcia

    British journal of psychology (London, England : 1953)

    2018  Volume 110, Issue 3, Page(s) 576–593

    Abstract Our voices sound different depending on the context (laughing vs. talking to a child vs. giving a speech), making within-person variability an inherent feature of human voices. When perceiving speaker identities, listeners therefore need to not only 'tell people apart' (perceiving exemplars from two different speakers as separate identities) but also 'tell people together' (perceiving different exemplars from the same speaker as a single identity). In the current study, we investigated how such natural within-person variability affects voice identity perception. Using voices from a popular TV show, listeners, who were either familiar or unfamiliar with this show, sorted naturally varying voice clips from two speakers into clusters to represent perceived identities. Across three independent participant samples, unfamiliar listeners perceived more identities than familiar listeners and frequently mistook exemplars from the same speaker to be different identities. These findings point towards a selective failure in 'telling people together'. Our study highlights within-person variability as a key feature of voices that has striking effects on (unfamiliar) voice identity perception. Our findings not only open up a new line of enquiry in the field of voice perception but also call for a re-evaluation of theoretical models to account for natural variability during identity perception.
    MeSH term(s) Adolescent ; Auditory Perception/physiology ; Female ; Humans ; Individuality ; Male ; Speech Perception ; Voice
    Language English
    Publishing date 2018-09-16
    Publishing country England
    Document type Journal Article ; Randomized Controlled Trial
    ZDB-ID 220659-6
    ISSN 2044-8295
    ISSN (online) 2044-8295
    DOI 10.1111/bjop.12348
    Database MEDical Literature Analysis and Retrieval System OnLINE


  6. Article ; Online: Association vs dissociation and setting appropriate criteria for object agnosia.

    Garrido, Lúcia / Duchaine, Bradley / DeGutis, Joseph

    Cognitive neuropsychology

    2018  Volume 35, Issue 1-2, Page(s) 55–58

    MeSH term(s) Agnosia ; Humans ; Prosopagnosia ; Visual Perception
    Language English
    Publishing date 2018-05-04
    Publishing country England
    Document type Journal Article ; Comment
    ZDB-ID 226406-7
    ISSN 1464-0627 ; 0264-3294
    ISSN (online) 1464-0627
    ISSN 0264-3294
    DOI 10.1080/02643294.2018.1431875
    Database MEDical Literature Analysis and Retrieval System OnLINE


  7. Article ; Online: Transcriptome Analysis and Intraspecific Variation in Spanish Fir (Abies pinsapo).

    Ortigosa, Francisco / Ávila, Concepción / Rubio, Lourdes / Álvarez-Garrido, Lucía / Carreira, José A / Cañas, Rafael A / Cánovas, Francisco M

    International journal of molecular sciences

    2022  Volume 23, Issue 16

    Abstract Spanish fir (Abies pinsapo) …
    MeSH term(s) Abies/genetics ; Climate Change ; Gene Expression Profiling ; Nitrogen/metabolism ; Transcriptome/genetics ; Trees/genetics
    Chemical Substances Nitrogen (N762921K75)
    Language English
    Publishing date 2022-08-19
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2019364-6
    ISSN 1422-0067 ; 1661-6596
    ISSN (online) 1422-0067
    ISSN 1422-0067 ; 1661-6596
    DOI 10.3390/ijms23169351
    Database MEDical Literature Analysis and Retrieval System OnLINE


  8. Article ; Online: Right laparoscopic pudendal release + neurostimulator prosthesis (LION procedure) in pudendal neuralgia.

    Moncada, Enrique / de San Ildefonso, Alberto / Flores, Erene / Garrido, Lucia / Cano-Valderrama, Oscar / Vigorita, Vincenzo / Sánchez-Santos, Raquel

    Colorectal disease : the official journal of the Association of Coloproctology of Great Britain and Ireland

    2022  Volume 24, Issue 10, Page(s) 1243–1244

    Abstract Aim: Pudendal neuralgia is a highly disabling condition with a complex diagnosis and controversial treatment results. Surgical neurolysis has been shown to be the most effective treatment. Sacral root neurostimulation or posterior tibial nerve stimulation is used to rescue patients who either have not responded to surgery or have worsened after an initial improvement.
    Methods: Given the excellent visualization of the pudendal nerve during laparoscopic pudendal release, we propose combining this procedure with neurostimulation, taking advantage of the possibility of in situ placement of the electrode. The abdominal cavity is accessed laparoscopically through four ports and, after the pudendal nerve has been identified and released, a neurostimulation electrode is placed next to the nerve and connected to a generator located in a subcutaneous pocket.
    Results: This procedure has been performed in one patient with a satisfactory result.
    Conclusions: Laparoscopic pudendal release with a neurostimulator prosthesis is an experimental technique that shows promise for the treatment of pudendal neuralgia.
    MeSH term(s) Humans ; Pudendal Neuralgia/etiology ; Pudendal Neuralgia/surgery ; Pudendal Nerve/surgery ; Laparoscopy ; Treatment Outcome ; Electrodes, Implanted
    Language English
    Publishing date 2022-06-27
    Publishing country England
    Document type Journal Article
    ZDB-ID 1440017-0
    ISSN 1463-1318 ; 1462-8910
    ISSN (online) 1463-1318
    ISSN 1462-8910
    DOI 10.1111/codi.16190
    Database MEDical Literature Analysis and Retrieval System OnLINE


  9. Article ; Online: Diffuse calcified infiltrations of the intrahepatic bile duct: Metastasis of mucinous adenocarcinoma.

    Guarner, Pol / Garrido, Lucía / Momblán, Dulce / M Lacy, Antonio

    Cirugia espanola

    2019  Volume 98, Issue 2, Page(s) 98

    Title translation Infiltraciones calcificadas difusas de la vía biliar intrahepática: metástasis de adenocarcinoma mucinoso.
    MeSH term(s) Adenocarcinoma, Mucinous ; Bile Duct Neoplasms ; Bile Ducts, Intrahepatic/diagnostic imaging ; Bile Ducts, Intrahepatic/pathology ; Calcinosis ; Colonic Neoplasms/pathology ; Disease Progression ; Female ; Humans ; Middle Aged
    Language Spanish
    Publishing date 2019-05-09
    Publishing country Spain
    Document type Case Reports ; Journal Article
    ISSN 2173-5077
    ISSN (online) 2173-5077
    DOI 10.1016/j.ciresp.2019.03.019
    Database MEDical Literature Analysis and Retrieval System OnLINE


  10. Article ; Online: Faces and voices in the brain: A modality-general person-identity representation in superior temporal sulcus.

    Tsantani, Maria / Kriegeskorte, Nikolaus / McGettigan, Carolyn / Garrido, Lúcia

    NeuroImage

    2019  Volume 201, Page(s) 116004

    Abstract Face-selective and voice-selective brain regions have been shown to represent face-identity and voice-identity, respectively. Here we investigated whether there are modality-general person-identity representations in the brain that can be driven by either a face or a voice, and that invariantly represent naturalistically varying face videos and voice recordings of the same identity. Models of face and voice integration suggest that such representations could exist in multimodal brain regions, and in unimodal regions via direct coupling between face- and voice-selective regions. Therefore, in this study we used fMRI to measure brain activity patterns elicited by the faces and voices of familiar people in face-selective, voice-selective, and person-selective multimodal brain regions. We used representational similarity analysis to (1) compare representational geometries (i.e. representational dissimilarity matrices) of face- and voice-elicited identities, and to (2) investigate the degree to which pattern discriminants for pairs of identities generalise from one modality to the other. We did not find any evidence of similar representational geometries across modalities in any of our regions of interest. However, our results showed that pattern discriminants that were trained to discriminate pairs of identities from their faces could also discriminate the respective voices (and vice-versa) in the right posterior superior temporal sulcus (rpSTS). Our findings suggest that the rpSTS is a person-selective multimodal region that shows a modality-general person-identity representation and integrates face and voice identity information.
    MeSH term(s) Adult ; Auditory Perception/physiology ; Facial Recognition/physiology ; Female ; Humans ; Magnetic Resonance Imaging ; Male ; Recognition, Psychology/physiology ; Temporal Lobe/physiology ; Voice ; Young Adult
    Language English
    Publishing date 2019-07-09
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 1147767-2
    ISSN 1095-9572 ; 1053-8119
    ISSN (online) 1095-9572
    ISSN 1053-8119
    DOI 10.1016/j.neuroimage.2019.07.017
    Database MEDical Literature Analysis and Retrieval System OnLINE
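
    The cross-modal generalisation test described above can be sketched as follows: train a linear discriminant to separate two identities from face-elicited response patterns and test it on voice-elicited patterns of the same identities, and vice versa. The simulated patterns below, and the use of a logistic-regression classifier as the discriminant, are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_voxels, n_trials = 200, 40

# Simulated response patterns for two identities, with an identity signal
# shared across modalities (this sharedness is exactly what the test probes).
identity_axis = rng.standard_normal(n_voxels)
labels = np.repeat([0, 1], n_trials // 2)          # two identities
signal = np.outer(2 * labels - 1, identity_axis)
face_patterns = signal + 3 * rng.standard_normal((n_trials, n_voxels))
voice_patterns = signal + 3 * rng.standard_normal((n_trials, n_voxels))

# Train a linear discriminant on one modality, test it on the other.
face_to_voice = LogisticRegression(max_iter=1000).fit(face_patterns, labels)
voice_to_face = LogisticRegression(max_iter=1000).fit(voice_patterns, labels)
print("face -> voice accuracy:", face_to_voice.score(voice_patterns, labels))
print("voice -> face accuracy:", voice_to_face.score(face_patterns, labels))
```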

