LIVIVO - The Search Portal for Life Sciences


Search results

Results 1 - 8 of 8 total

  1. Article ; Online: Latent neural dynamics encode temporal context in speech.

    Stephen, Emily P / Li, Yuanning / Metzger, Sean / Oganian, Yulia / Chang, Edward F

    Hearing research

    2023  Volume 437, Page(s) 108838

    Abstract Direct neural recordings from human auditory cortex have demonstrated encoding for acoustic-phonetic features of consonants and vowels. Neural responses also encode distinct acoustic amplitude cues related to timing, such as those that occur at the onset of a sentence after a silent period or the onset of the vowel in each syllable. Here, we used a group reduced rank regression model to show that distributed cortical responses support a low-dimensional latent state representation of temporal context in speech. The timing cues each capture more unique variance than all other phonetic features and exhibit rotational or cyclical dynamics in latent space from activity that is widespread over the superior temporal gyrus. We propose that these spatially distributed timing signals could serve to provide temporal context for, and possibly bind across time, the concurrent processing of individual phonetic features, to compose higher-order phonological (e.g. word-level) representations.
    MeSH term(s) Humans ; Speech/physiology ; Speech Perception/physiology ; Temporal Lobe/physiology ; Auditory Cortex/physiology ; Phonetics ; Acoustic Stimulation
    Language English
    Publication date 2023-07-04
    Country of publication Netherlands
    Document type Journal Article ; Review ; Research Support, N.I.H., Extramural ; Research Support, Non-U.S. Gov't
    ZDB-ID 282629-x
    ISSN (online) 1878-5891
    ISSN (print) 0378-5955
    DOI 10.1016/j.heares.2023.108838
    Data source MEDLINE (Medical Literature Analysis and Retrieval System Online)


  2. Article ; Online: A bilingual speech neuroprosthesis driven by cortical articulatory representations shared between languages.

    Silva, Alexander B / Liu, Jessie R / Metzger, Sean L / Bhaya-Grossman, Ilina / Dougherty, Maximilian E / Seaton, Margaret P / Littlejohn, Kaylo T / Tu-Chan, Adelyn / Ganguly, Karunesh / Moses, David A / Chang, Edward F

    Nature biomedical engineering

    2024  

    Abstract Advancements in decoding speech from brain activity have focused on decoding a single language. Hence, the extent to which bilingual speech production relies on unique or shared cortical activity across languages has remained unclear. Here, we leveraged electrocorticography, along with deep-learning and statistical natural-language models of English and Spanish, to record and decode activity from speech-motor cortex of a Spanish-English bilingual with vocal-tract and limb paralysis into sentences in either language. This was achieved without requiring the participant to manually specify the target language. Decoding models relied on shared vocal-tract articulatory representations across languages, which allowed us to build a syllable classifier that generalized across a shared set of English and Spanish syllables. Transfer learning expedited training of the bilingual decoder by enabling neural data recorded in one language to improve decoding in the other language. Overall, our findings suggest shared cortical articulatory representations that persist after paralysis and enable the decoding of multiple languages without the need to train separate language-specific decoders.
    Language English
    Publication date 2024-05-20
    Country of publication England
    Document type Journal Article
    ISSN (online) 2157-846X
    DOI 10.1038/s41551-024-01207-5
    Data source MEDLINE


  3. Book ; Online: Embodying Asian/American sexualities

    Maséquesmay, Gina / Metzger, Sean

    2009  

    Statement of responsibility edited by Gina Masequesmay and Sean Metzger
    Abstract Embodying Asian/American Sexualities is an accessible reader designed for use in undergraduate and graduate American studies, ethnic studies, gender and sexuality studies, and performance studies classes as well as for a general public interested in related issues. It contains both overviews of the field and scholarly interventions into a range of topics, including history, literature, performance, and sociology.
    Subject headings Asian Americans/Attitudes ; Asian Americans/Race identity ; Asian Americans/Sexual behavior
    Language English
    Extent Online resource (viii, 188 p.)
    Publisher Lexington Books
    Place of publication Lanham, MD
    Document type Book ; Online
    Note Includes bibliographical references and index
    ISBN 0739129031 ; 0739133519 ; 1282493418 ; 1282494171 ; 9780739129036 ; 9780739129043 ; 9780739133514 ; 9781282494176 ; 9781282493414 ; 073912904X
    Data source Catalogue of the Technische Informationsbibliothek Hannover


  4. Book ; Online: Evaluating Self-Supervised Pretraining Without Using Labels

    Reed, Colorado / Metzger, Sean / Srinivas, Aravind / Darrell, Trevor / Keutzer, Kurt

    2020  

    Abstract A common practice in unsupervised representation learning is to use labeled data to evaluate the learned representations - oftentimes using the labels from the "unlabeled" training dataset. This supervised evaluation is then used to guide the training process, e.g. to select augmentation policies. However, supervised evaluations may not be possible when labeled data is difficult to obtain (such as medical imaging) or ambiguous to label (such as fashion categorization). This raises the question: is it possible to evaluate unsupervised models without using labeled data? Furthermore, is it possible to use this evaluation to make decisions about the training process, such as which augmentation policies to use? In this work, we show that the simple self-supervised evaluation task of image rotation prediction is highly correlated with the supervised performance of standard visual recognition tasks and datasets (rank correlation > 0.94). We establish this correlation across hundreds of augmentation policies and training schedules and show how this evaluation criteria can be used to automatically select augmentation policies without using labels. Despite not using any labeled data, these policies perform comparably with policies that were determined using supervised downstream tasks. Importantly, this work explores the idea of using unsupervised evaluation criteria to help both researchers and practitioners make decisions when training without labeled data.
    Subject headings Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning
    Subject code 006
    Publication date 2020-09-16
    Country of publication US
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)

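The rank correlation (> 0.94) that the abstract above reports between rotation-prediction accuracy and supervised performance is a Spearman correlation across augmentation policies. As an illustration of that statistic (a stdlib-only sketch; the function names and the toy scores below are hypothetical, not taken from the paper's code), one can compute it as the Pearson correlation of the two score rankings:

```python
def rank(values):
    # Assign 1-based ranks; tied values share the average of their positions.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average position of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical scores for four augmentation policies: rotation-prediction
# accuracy (label-free) vs. supervised downstream accuracy.
rotation_acc = [0.61, 0.74, 0.58, 0.80]
supervised_acc = [0.70, 0.82, 0.66, 0.88]
rho = spearman(rotation_acc, supervised_acc)  # 1.0 here: identical rankings
```

Because Spearman's rho only compares rankings, it captures exactly the use case in the abstract: picking the best policy without needing the label-based score's absolute scale.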

  5. Article ; Online: A high-performance neuroprosthesis for speech decoding and avatar control.

    Metzger, Sean L / Littlejohn, Kaylo T / Silva, Alexander B / Moses, David A / Seaton, Margaret P / Wang, Ran / Dougherty, Maximilian E / Liu, Jessie R / Wu, Peter / Berger, Michael A / Zhuravleva, Inga / Tu-Chan, Adelyn / Ganguly, Karunesh / Anumanchipalli, Gopala K / Chang, Edward F

    Nature

    2023  Volume 620, Issue 7976, Page(s) 1037–1046

    Abstract Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive.
    MeSH term(s) Humans ; Cerebral Cortex/physiology ; Cerebral Cortex/physiopathology ; Clinical Trials as Topic ; Communication ; Deep Learning ; Face ; Gestures ; Movement ; Neural Prostheses/standards ; Paralysis/physiopathology ; Paralysis/rehabilitation ; Speech ; Vocabulary ; Voice
    Language English
    Publication date 2023-08-23
    Country of publication England
    Document type Journal Article
    ZDB-ID 120714-3
    ISSN (online) 1476-4687
    ISSN (print) 0028-0836
    DOI 10.1038/s41586-023-06443-4
    Data source MEDLINE


  6. Article ; Online: Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis.

    Metzger, Sean L / Liu, Jessie R / Moses, David A / Dougherty, Maximilian E / Seaton, Margaret P / Littlejohn, Kaylo T / Chartier, Josh / Anumanchipalli, Gopala K / Tu-Chan, Adelyn / Ganguly, Karunesh / Chang, Edward F

    Nature communications

    2022  Volume 13, Issue 1, Page(s) 6510

    Abstract Neuroprostheses have the potential to restore communication to people who cannot speak or type due to paralysis. However, it is unclear if silent attempts to speak can be used to control a communication neuroprosthesis. Here, we translated direct cortical signals in a clinical-trial participant (ClinicalTrials.gov; NCT03698149) with severe limb and vocal-tract paralysis into single letters to spell out full sentences in real time. We used deep-learning and language-modeling techniques to decode letter sequences as the participant attempted to silently spell using code words that represented the 26 English letters (e.g. "alpha" for "a"). We leveraged broad electrode coverage beyond speech-motor cortex to include supplemental control signals from hand cortex and complementary information from low- and high-frequency signal components to improve decoding accuracy. We decoded sentences using words from a 1,152-word vocabulary at a median character error rate of 6.13% and speed of 29.4 characters per minute. In offline simulations, we showed that our approach generalized to large vocabularies containing over 9,000 words (median character error rate of 8.23%). These results illustrate the clinical viability of a silently controlled speech neuroprosthesis to generate sentences from a large vocabulary through a spelling-based approach, complementing previous demonstrations of direct full-word decoding.
    MeSH term(s) Humans ; Speech ; Language ; Vocabulary ; Speech Perception ; Paralysis
    Language English
    Publication date 2022-11-08
    Country of publication England
    Document type Journal Article
    ZDB-ID 2553671-0
    ISSN (online) 2041-1723
    DOI 10.1038/s41467-022-33611-3
    Data source MEDLINE

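The median character error rates quoted in the abstract above (6.13% online, 8.23% in offline simulation) follow the standard definition: Levenshtein edit distance between the decoded and reference sentences, normalized by reference length. A minimal sketch of that metric (function names are my own, not from the study's code):

```python
def levenshtein(ref: str, hyp: str) -> int:
    # Classic dynamic-programming edit distance over characters
    # (minimum number of insertions, deletions, and substitutions).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution (0 if match)
        prev = curr
    return prev[-1]

def char_error_rate(ref: str, hyp: str) -> float:
    # CER = edit distance normalized by the reference length,
    # so it can exceed 1.0 for very poor hypotheses.
    return levenshtein(ref, hyp) / len(ref)
```

A CER of 6.13% thus means roughly one character-level edit per sixteen reference characters, which language-model rescoring over a fixed vocabulary can often repair at the word level.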

  7. Article ; Online: High-density single-unit human cortical recordings using the Neuropixels probe.

    Chung, Jason E / Sellers, Kristin K / Leonard, Matthew K / Gwilliams, Laura / Xu, Duo / Dougherty, Maximilian E / Kharazia, Viktor / Metzger, Sean L / Welkenhuysen, Marleen / Dutta, Barundeb / Chang, Edward F

    Neuron

    2022  Volume 110, Issue 15, Page(s) 2409–2421.e3

    Abstract The action potential is a fundamental unit of neural computation. Even though significant advances have been made in recording large numbers of individual neurons in animal models, translation of these methodologies to humans has been limited because of clinical constraints and electrode reliability. Here, we present a reliable method for intraoperative recording of dozens of neurons in humans using the Neuropixels probe, yielding up to ∼100 simultaneously recorded single units. Most single units were active within 1 min of reaching target depth. The motion of the electrode array had a strong inverse correlation with yield, identifying a major challenge and opportunity to further increase the probe utility. Cell pairs active close in time were spatially closer in most recordings, demonstrating the power to resolve complex cortical dynamics. Altogether, this approach provides access to population single-unit activity across the depth of human neocortex at scales previously only accessible in animal models.
    MeSH term(s) Action Potentials/physiology ; Electrodes ; Electrodes, Implanted ; Humans ; Neocortex ; Neurons/physiology ; Reproducibility of Results
    Language English
    Publication date 2022-06-08
    Country of publication United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 808167-0
    ISSN (online) 1097-4199
    ISSN (print) 0896-6273
    DOI 10.1016/j.neuron.2022.05.007
    Data source MEDLINE


  8. Article ; Online: Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria.

    Moses, David A / Metzger, Sean L / Liu, Jessie R / Anumanchipalli, Gopala K / Makin, Joseph G / Sun, Pengfei F / Chartier, Josh / Dougherty, Maximilian E / Liu, Patricia M / Abrams, Gary M / Tu-Chan, Adelyn / Ganguly, Karunesh / Chang, Edward F

    The New England journal of medicine

    2021  Volume 385, Issue 3, Page(s) 217–227

    Abstract Background: Technology to restore the ability to communicate in paralyzed persons who cannot speak has the potential to improve autonomy and quality of life. An approach that decodes words and sentences directly from the cerebral cortical activity of such patients may represent an advancement over existing methods for assisted communication.
    Methods: We implanted a subdural, high-density, multielectrode array over the area of the sensorimotor cortex that controls speech in a person with anarthria (the loss of the ability to articulate speech) and spastic quadriparesis caused by a brain-stem stroke. Over the course of 48 sessions, we recorded 22 hours of cortical activity while the participant attempted to say individual words from a vocabulary set of 50 words. We used deep-learning algorithms to create computational models for the detection and classification of words from patterns in the recorded cortical activity. We applied these computational models, as well as a natural-language model that yielded next-word probabilities given the preceding words in a sequence, to decode full sentences as the participant attempted to say them.
    Results: We decoded sentences from the participant's cortical activity in real time at a median rate of 15.2 words per minute, with a median word error rate of 25.6%. In post hoc analyses, we detected 98% of the attempts by the participant to produce individual words, and we classified words with 47.1% accuracy using cortical signals that were stable throughout the 81-week study period.
    Conclusions: In a person with anarthria and spastic quadriparesis caused by a brain-stem stroke, words and sentences were decoded directly from cortical activity during attempted speech with the use of deep-learning models and a natural-language model. (Funded by Facebook and others; ClinicalTrials.gov number, NCT03698149.).
    MeSH term(s) Adult ; Brain Stem Infarctions/complications ; Brain-Computer Interfaces ; Deep Learning ; Dysarthria/etiology ; Dysarthria/rehabilitation ; Electrocorticography ; Electrodes, Implanted ; Humans ; Male ; Natural Language Processing ; Neural Prostheses ; Quadriplegia/etiology ; Sensorimotor Cortex/physiology ; Speech
    Language English
    Publication date 2021-07-14
    Country of publication United States
    Document type Journal Article ; Research Support, N.I.H., Extramural ; Research Support, Non-U.S. Gov't
    ZDB-ID 207154-x
    ISSN (online) 1533-4406
    ISSN (print) 0028-4793
    DOI 10.1056/NEJMoa2027540
    Data source MEDLINE

