LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 10 of 215

  1. Article: Automatic learning mechanisms for flexible human locomotion.

    Rossi, Cristina / Leech, Kristan A / Roemmich, Ryan T / Bastian, Amy J

    bioRxiv : the preprint server for biology

    2023  

    Abstract Movement flexibility and automaticity are necessary to successfully navigate different environments. When encountering difficult terrains such as a muddy trail, we can change how we step almost immediately so that we can continue walking. This flexibility comes at a cost since we initially must pay deliberate attention to how we are moving. Gradually, after a few minutes on the trail, stepping becomes automatic so that we do not need to think about our movements. Canonical theory indicates that different adaptive motor learning mechanisms confer these essential properties to movement: explicit control confers flexibility, while forward model recalibration confers automaticity. Here we uncover a distinct mechanism of treadmill walking adaptation - an automatic stimulus-response mapping - that confers both properties to movement. The mechanism is flexible as it learns stepping patterns that can be rapidly changed to suit a range of treadmill configurations. It is also automatic as it can operate without deliberate control or explicit awareness by the participants. Our findings reveal a tandem architecture of forward model recalibration and automatic stimulus-response mapping mechanisms for walking, reconciling different findings of motor adaptation and perceptual realignment.
    Language English
    Publishing date 2023-09-25
    Publishing country United States
    Document type Preprint
    DOI 10.1101/2023.09.25.559267
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Article ; Online: Two-dimensional video-based analysis of human gait using pose estimation.

    Stenum, Jan / Rossi, Cristina / Roemmich, Ryan T

    PLoS computational biology

    2021  Volume 17, Issue 4, Page(s) e1008935

    Abstract Human gait analysis is often conducted in clinical and basic research, but many common approaches (e.g., three-dimensional motion capture, wearables) are expensive, immobile, data-limited, and require expertise. Recent advances in video-based pose estimation suggest potential for gait analysis using two-dimensional video collected from readily accessible devices (e.g., smartphones). To date, several studies have extracted features of human gait using markerless pose estimation. However, we currently lack evaluation of video-based approaches using a dataset of human gait for a wide range of gait parameters on a stride-by-stride basis and a workflow for performing gait analysis from video. Here, we compared spatiotemporal and sagittal kinematic gait parameters measured with OpenPose (open-source video-based human pose estimation) against simultaneously recorded three-dimensional motion capture from overground walking of healthy adults. When assessing all individual steps in the walking bouts, we observed mean absolute errors between motion capture and OpenPose of 0.02 s for temporal gait parameters (i.e., step time, stance time, swing time and double support time) and 0.049 m for step lengths. Accuracy improved when spatiotemporal gait parameters were calculated as individual participant mean values: mean absolute error was 0.01 s for temporal gait parameters and 0.018 m for step lengths. The greatest difference in gait speed between motion capture and OpenPose was less than 0.10 m/s. Mean absolute errors of sagittal-plane hip, knee and ankle angles between motion capture and OpenPose were 4.0°, 5.6° and 7.4°, respectively. Our analysis workflow is freely available, involves minimal user input, and does not require prior gait analysis expertise. Finally, we offer suggestions and considerations for future applications of pose estimation for human gait analysis.
    MeSH term(s) Algorithms ; Biomechanical Phenomena ; Gait ; Humans ; Posture ; Videotape Recording
    Language English
    Publishing date 2021-04-23
    Publishing country United States
    Document type Journal Article ; Research Support, N.I.H., Extramural
    ZDB-ID 2193340-6
    ISSN (online) 1553-7358
    ISSN 1553-734X
    DOI 10.1371/journal.pcbi.1008935
    Database MEDical Literature Analysis and Retrieval System OnLINE

  3. Article ; Online: Quantitative

    Çavuşoğlu, Mustafa / Pazahr, Shila / Ciritsis, Alexander P / Rossi, Cristina

    NMR in biomedicine

    2022  Volume 35, Issue 8, Page(s) e4733

    Abstract Monitoring the tissue sodium content (TSC) in the intervertebral disk geometry noninvasively by MRI is a sensitive measure to estimate changes in the proteoglycan content of the intervertebral disk, which is a biomarker of degenerative disk disease (DDD) and of lumbar back pain (LBP). However, application of quantitative sodium concentration measurements in
    MeSH term(s) Humans ; Intervertebral Disc/anatomy & histology ; Intervertebral Disc/diagnostic imaging ; Magnetic Resonance Imaging/methods ; Phantoms, Imaging ; Radio Waves ; Sodium
    Chemical Substances Sodium (9NEZ333N27)
    Language English
    Publishing date 2022-04-07
    Publishing country England
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 1000976-0
    ISSN (online) 1099-1492
    ISSN 0952-3480
    DOI 10.1002/nbm.4733
    Database MEDical Literature Analysis and Retrieval System OnLINE

  4. Article ; Online: 23

    Gomolka, Ryszard S / Ciritsis, Alexander / Rossi, Cristina

    Magnetic resonance in medicine

    2020  Volume 84, Issue 6, Page(s) 3300–3307

    Abstract Purpose: The aim of the current study was to compare the reproducibility of sodium (
    Methods: Measurements were performed in the phantom, consisting of 10 dm
    Results: Image acquisition varied from 5:41 to 9:37 for TrueFISP and from 12:48 to 19:12 min for GRE using 20 and 30 spatial averages, respectively. Higher averaging increased the acquisition time by 53% and mean SNR at scan < 10%, without an effect on
    Conclusion: Both SR-TrueFISP and VFA-GRE provided similar
    MeSH term(s) Algorithms ; Image Enhancement ; Magnetic Resonance Imaging ; Phantoms, Imaging ; Reproducibility of Results
    Language English
    Publishing date 2020-06-16
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 605774-3
    ISSN (online) 1522-2594
    ISSN 0740-3194
    DOI 10.1002/mrm.28333
    Database MEDical Literature Analysis and Retrieval System OnLINE

  5. Article ; Online: Fully automatic classification of automated breast ultrasound (ABUS) imaging according to BI-RADS using a deep convolutional neural network.

    Hejduk, Patryk / Marcon, Magda / Unkelbach, Jan / Ciritsis, Alexander / Rossi, Cristina / Borkowski, Karol / Boss, Andreas

    European radiology

    2022  Volume 32, Issue 7, Page(s) 4868–4878

    Abstract Purpose: The aim of this study was to develop and test a post-processing technique for detection and classification of lesions according to the BI-RADS atlas in automated breast ultrasound (ABUS) based on deep convolutional neural networks (dCNNs).
    Methods and materials: In this retrospective study, 645 ABUS datasets from 113 patients were included; 55 patients had lesions classified as high malignancy probability. Lesions were categorized as BI-RADS 2 (no suspicion of malignancy), BI-RADS 3 (probability of malignancy < 3%), and BI-RADS 4/5 (probability of malignancy > 3%). A deep convolutional neural network was trained after data augmentation with images of lesions and normal breast tissue, and a sliding-window approach for lesion detection was implemented. The algorithm was applied to a test dataset containing 128 images, and performance was compared with the readings of 2 experienced radiologists.
    Results: Results of calculations performed on single images showed an accuracy of 79.7% and an AUC of 0.91 [95% CI: 0.85-0.96] in categorization according to BI-RADS. Moderate agreement between the dCNN and the ground truth was achieved (κ: 0.57 [95% CI: 0.50-0.64]), which is comparable with human readers. Analysis of the whole dataset improved categorization accuracy to 90.9% with an AUC of 0.91 [95% CI: 0.77-1.00], while achieving almost perfect agreement with the ground truth (κ: 0.82 [95% CI: 0.69-0.95]), performing on par with human readers. Furthermore, the object localization technique allowed slice-wise detection of lesion position.
    Conclusions: Our results show that a dCNN can be trained to detect and distinguish lesions in ABUS according to the BI-RADS classification with similar accuracy as experienced radiologists.
    Key points: • A deep convolutional neural network (dCNN) was trained for classification of ABUS lesions according to the BI-RADS atlas. • A sliding-window approach allows accurate automatic detection and classification of lesions in ABUS examinations.
    MeSH term(s) Breast/diagnostic imaging ; Breast Neoplasms/diagnostic imaging ; Female ; Humans ; Neural Networks, Computer ; Retrospective Studies ; Ultrasonography, Mammary/methods
    Language English
    Publishing date 2022-02-11
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 1085366-2
    ISSN (online) 1432-1084
    ISSN 0938-7994 ; 1613-3749
    DOI 10.1007/s00330-022-08558-0
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Article: Applied Machine Learning in Spiral Breast-CT: Can We Train a Deep Convolutional Neural Network for Automatic, Standardized and Observer Independent Classification of Breast Density?

    Landsmann, Anna / Wieler, Jann / Hejduk, Patryk / Ciritsis, Alexander / Borkowski, Karol / Rossi, Cristina / Boss, Andreas

    Diagnostics (Basel, Switzerland)

    2022  Volume 12, Issue 1

    Abstract The aim of this study was to investigate the potential of a machine learning algorithm to accurately classify parenchymal density in spiral breast-CT (BCT) using a deep convolutional neural network (dCNN). In this retrospectively designed study, 634 examinations of 317 patients were included. After image selection and preparation, 5589 images from 634 different BCT examinations were sorted by a four-level density scale, ranging from A to D, using ACR BI-RADS-like criteria. Subsequently, four different dCNN models (differing in optimizer and spatial resolution) were trained (70% of data), validated (20%) and tested on a "real-world" dataset (10%). Moreover, dCNN accuracy was compared to a human readout. The model with the lowest input resolution performed best overall, reaching an accuracy on the "real-world" dataset of 85.8%. The intra-class correlation of the dCNN and the two readers was almost perfect (0.92), and kappa values between both readers and the dCNN were substantial (0.71-0.76). Moreover, the diagnostic performance between the readers and the dCNN showed very good correspondence, with an AUC of 0.89. Artificial intelligence in the form of a dCNN can be used for standardized, observer-independent and reliable classification of parenchymal density in a BCT examination.
    Language English
    Publishing date 2022-01-13
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2662336-5
    ISSN 2075-4418
    DOI 10.3390/diagnostics12010181
    Database MEDical Literature Analysis and Retrieval System OnLINE

  7. Article: Neuroselling: applying neuroscience to selling for a new business perspective. An analysis on teleshopping advertising.

    Russo, Vincenzo / Bilucaglia, Marco / Casiraghi, Chiara / Chiarelli, Simone / Columbano, Martina / Fici, Alessandro / Rivetti, Fiamma / Rossi, Cristina / Valesi, Riccardo / Zito, Margherita

    Frontiers in psychology

    2023  Volume 14, Page(s) 1238879

    Abstract This paper presents an innovative research project that aims to study the emotional factors influencing decision-making elicited by infomercials, a powerful sales technique that uses emotional communication to engage viewers, capture attention, and build trust. Using cutting-edge consumer neuroscience techniques, this study focuses on the identification of the variables that most impact the Call-to-Action and Purchase Intention. Forty participants were selected and divided into two groups, with each group exposed to one of two infomercials (condition A = male seller; condition B = female seller). EEG signals were recorded, as well as eye-tracking data. After the viewing, participants completed a self-report questionnaire. Results show that seller characteristics such as Performance and Trustworthiness, as well as neurophysiological variables such as the Approach-Withdrawal Index, Willingness to Pay, Attention and Engagement, significantly impact the final Call-to-Action, Purchase Intention, and infomercial Likeability responses. Moreover, eye-tracking data revealed that the more time viewers spend observing crucial areas of the infomercial, the greater their Willingness to Pay and their interest in and willingness to approach the infomercial and product. These findings highlight the importance of considering both seller attributes and consumers' neurophysiological responses to understand and predict their behaviors in response to marketing stimuli, since all of these factors seem to play a crucial role in shaping consumers' attitudes and purchase intentions. Overall, the study is a significant pilot in the new field of neuroselling, shedding light on crucial emotional aspects of the seller/buyer relationship and providing valuable insights for researchers and marketers.
    Language English
    Publishing date 2023-10-03
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2563826-9
    ISSN 1664-1078
    DOI 10.3389/fpsyg.2023.1238879
    Database MEDical Literature Analysis and Retrieval System OnLINE

  8. Article: Generalizable attention U-Net for segmentation of fibroglandular tissue and background parenchymal enhancement in breast DCE-MRI.

    Nowakowska, Sylwia / Borkowski, Karol / Ruppert, Carlotta M / Landsmann, Anna / Marcon, Magda / Berger, Nicole / Boss, Andreas / Ciritsis, Alexander / Rossi, Cristina

    Insights into imaging

    2023  Volume 14, Issue 1, Page(s) 185

    Abstract Objectives: Development of automated segmentation models enabling standardized volumetric quantification of fibroglandular tissue (FGT) from native volumes and background parenchymal enhancement (BPE) from subtraction volumes of dynamic contrast-enhanced breast MRI. Subsequent assessment of the developed models in the context of FGT and BPE Breast Imaging Reporting and Data System (BI-RADS)-compliant classification.
    Methods: For the training and validation of attention U-Net models, data from a single 3.0-T scanner were used. For testing, additional data from a 1.5-T scanner and data acquired at a different institution with a 3.0-T scanner were utilized. The developed models were used to quantify the amount of FGT and BPE in 80 DCE-MRI examinations, and the correlation between these volumetric measures and the classes assigned by radiologists was assessed.
    Results: To assess the model performance using application-relevant metrics, the correlation between the volumes of breast, FGT, and BPE calculated from ground truth masks and predicted masks was checked. Pearson correlation coefficients ranging from 0.963 ± 0.004 to 0.999 ± 0.001 were achieved. The Spearman correlation coefficient for the quantitative and qualitative assessment, i.e., classification by radiologist, of FGT amounted to 0.70 (p < 0.0001), whereas BPE amounted to 0.37 (p = 0.0006).
    Conclusions: Generalizable algorithms for FGT and BPE segmentation were developed and tested. Our results suggest that when assessing FGT, it is sufficient to use volumetric measures alone. However, for the evaluation of BPE, additional models considering voxels' intensity distribution and morphology are required.
    Critical relevance statement: A standardized assessment of FGT density can rely on volumetric measures, whereas in the case of BPE, the volumetric measures constitute, along with voxels' intensity distribution and morphology, an important factor.
    Key points: • Our work contributes to the standardization of FGT and BPE assessment. • Attention U-Net can reliably segment intricately shaped FGT and BPE structures. • The developed models were robust to domain shift.
    Language English
    Publishing date 2023-11-06
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 2543323-4
    ISSN 1869-4101
    DOI 10.1186/s13244-023-01531-5
    Database MEDical Literature Analysis and Retrieval System OnLINE

  9. Article ; Online: Renal Arterial Spin Labeling Magnetic Resonance Imaging.

    Becker, Anton S / Rossi, Cristina

    Nephron

    2017  Volume 135, Issue 1, Page(s) 1–5

    Abstract Arterial spin labeling (ASL) MRI allows the quantification of tissue perfusion without the administration of exogenous contrast agents. Patients with reduced renal function or other contraindications to Gadolinium-based contrast media may benefit from the non-invasive monitoring of tissue microcirculation. So far, only a few studies have investigated the sensitivity, specificity and reliability of ASL techniques for the assessment of renal perfusion. Moreover, little is known about the interplay between ASL markers of perfusion and functional renal filtration parameters. In this editorial, we discuss the main technical issues related to the quantification of renal perfusion by ASL and, in particular, the latest results in patients with kidney disorders.
    MeSH term(s) Carcinoma, Renal Cell/diagnostic imaging ; Healthy Volunteers ; Humans ; Kidney Neoplasms/diagnostic imaging ; Magnetic Resonance Angiography/methods ; Magnetic Resonance Angiography/statistics & numerical data ; Renal Artery/diagnostic imaging ; Renal Circulation/physiology ; Renal Insufficiency, Chronic/diagnostic imaging ; Spin Labels
    Chemical Substances Spin Labels
    Language English
    Publishing date 2017
    Publishing country Switzerland
    Document type Editorial
    ZDB-ID 207121-6
    ISSN (online) 2235-3186 ; 1423-0186
    ISSN 1660-8151 ; 0028-2766
    DOI 10.1159/000450797
    Database MEDical Literature Analysis and Retrieval System OnLINE

  10. Article ; Online: Deep learning for the standardized classification of Ki-67 in vulva carcinoma: A feasibility study

    Choschzick, Matthias / Alyahiaoui, Mariam / Ciritsis, Alexander / Rossi, Cristina / Gut, André / Hejduk, Patryk / Boss, A.

    Heliyon. 2021 July, Vol. 7, No. 7, p. e07577

    2021  

    Abstract The aim of this study is to demonstrate the feasibility of automatic classification of Ki-67 histological immunostainings in patients with squamous cell carcinoma of the vulva using a deep convolutional neural network (dCNN). For evaluation of the dCNN, we used 55 well-characterized squamous cell carcinomas of the vulva in a tissue microarray (TMA) format in this retrospective study. The tumor specimens were classified into 3 different categories: C1 (0-2%), C2 (2-20%) and C3 (>20%), representing the ratio of the number of Ki-67-positive tumor cells to all cancer cells on the TMA spot. Representative areas of the spots were manually labeled by extracting images of 351 × 280 pixels. A dCNN with 13 convolutional layers was used for the evaluation. Two independent pathologists classified 45 labeled images in order to compare the dCNN's results to human readouts. Using a small labeled dataset of 1020 images with equal distribution among classes, the dCNN reached an accuracy of 90.9% (93%) for the training (validation) data. Applying a larger dataset with an additional 1017 labeled images resulted in an accuracy of 96.1% (91.4%) for the training (validation) dataset. For the human readout, there were no significant differences between the pathologists and the dCNN in Ki-67 classification results. The dCNN is capable of a standardized classification of Ki-67 staining in vulva carcinoma; therefore, it may be suitable for quality control and standardization in the assessment of tumor grading.
    Keywords data collection ; feasibility studies ; histology ; humans ; microarray technology ; neural networks ; quality control ; retrospective studies ; squamous cell carcinoma ; vulva ; Ki-67 ; Vulva carcinoma ; Deep learning ; Convolutional neural network
    Language English
    Dates of publication 2021-07
    Publishing place Elsevier Ltd
    Document type Article ; Online
    Note Use and reproduction
    ZDB-ID 2835763-2
    ISSN 2405-8440
    DOI 10.1016/j.heliyon.2021.e07577
    Database NAL-Catalogue (AGRICOLA)
