LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 20

  1. Article: Construction and Verification of Predictive Model for Influencing Factors of Quality of Life in Patients with Type 2 Diabetic Nephropathy: A Hospital-Based Retrospective Study.

    Jiang, Haojun / Zhang, Hui / Zhang, Renzhong

    Archivos espanoles de urologia

    2023  Volume 76, Issue 6, Page(s) 418–424

    Abstract Objective: The influencing factors of quality of life (QOL) in patients with type 2 diabetic nephropathy (T2DN) were explored, a practical risk prediction model was constructed and independent verification was conducted.
    Methods: The clinical data of 273 patients with T2DN in Tai'an Maternal and Child Health Care Center from February 2021 to February 2023 were used for retrospective analysis, and the patients were divided into modelling group (n = 173) and validation group (n = 100). According to 36-item short form health survey (SF-36) scores, the research subjects in the modelling group were divided further into poor group (n = 78) and good group (n = 95). Multivariate logistic regression was used in analysing the influencing factors of QOL and establishing a clinical prediction model based on the results. Then, a receiver operating characteristic (ROC) curve was used in evaluating the model's prediction efficiency.
    Results: Remarkable differences in age, duration of diabetes, presence or absence of hypertension, education level, exercise frequency and family monthly income were found among the patients.
    Conclusions: Age ≥60, duration of diabetes ≥3 years, presence of hypertension, education level of junior high school and below, no or little exercise and family monthly income <3500 yuan were independent influencing factors for poor QOL in patients with T2DN. The use of this model has certain clinical application value.
    MeSH term(s) Child ; Humans ; Child, Preschool ; Diabetic Nephropathies ; Retrospective Studies ; Quality of Life ; Models, Statistical ; Prognosis ; Hospitals ; Hypertension ; Diabetes Mellitus, Type 2/complications
    Language English
    Publishing date 2023-08-23
    Publishing country Spain
    Document type Journal Article
    ZDB-ID 211673-x
    ISSN 0004-0614
    DOI 10.56434/j.arch.esp.urol.20237606.51
    Database MEDical Literature Analysis and Retrieval System OnLINE
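
    The model-building workflow this record describes (multivariate logistic regression on candidate predictors, evaluated with an ROC curve on a held-out validation group) can be illustrated with a short, hypothetical Python sketch; the synthetic data and variable names below are assumptions, not the authors' code or results.

      # Illustrative sketch only: logistic regression on six hypothetical predictors
      # with ROC-based validation, mirroring the modelling/validation split (173/100).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(0)
      n = 273                                      # cohort size reported in the record
      X = rng.normal(size=(n, 6))                  # stand-ins for age, duration, hypertension, ...
      y = (X @ rng.normal(size=6) + rng.normal(size=n) > 0).astype(int)  # 1 = poor QOL (synthetic)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=100, random_state=0)

      model = LogisticRegression().fit(X_train, y_train)
      prob = model.predict_proba(X_test)[:, 1]

      fpr, tpr, _ = roc_curve(y_test, prob)        # points of the ROC curve
      print(f"validation AUC = {roc_auc_score(y_test, prob):.3f}")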

  2. Article ; Online: Chinese Immigrant Caregivers: Understanding Their Unmet Needs and the Co-Design of an mHealth App.

    Yu, Kexin / Jiang, Haojun / Liu, Mandong / Wu, Shinyi / Jordan-Marsh, Maryalice / Chi, Iris

    Canadian journal on aging = La revue canadienne du vieillissement

    2024  Page(s) 1–8

    Abstract Background: Immigrant caregivers support the aging population, yet their own needs are often neglected. Mobile technology-facilitated interventions can promote caregiver health by providing easy access to self-care materials.
    Objective: This study employed a design thinking framework to examine Chinese immigrant caregivers' (CICs) unmet self-care needs and co-design an app for promoting self-care with CICs.
    Methods: Nineteen semi-structured interviews were conducted in conceptual design and prototype co-design phases.
    Findings: Participants reported unmet self-care needs influenced by psychological and social barriers, immigrant status, and caregiving tasks. They expressed the need to learn to keep healthy boundaries with the care recipient and respond to emergencies. Gaining knowledge was the main benefit that drew CICs' interest in using the self-care app. However, potential barriers to use included issues of curriculum design, technology anxiety, limited free time, and caregiving burdens.
    Discussion: The co-design process appears to be beneficial in having participants voice both barriers and preferences.
    Language English
    Publishing date 2024-05-17
    Publishing country Canada
    Document type Journal Article
    ZDB-ID 632851-9
    ISSN (online) 1710-1107
    ISSN 0714-9808
    DOI 10.1017/S0714980824000187
    Database MEDical Literature Analysis and Retrieval System OnLINE

  3. Article ; Online: Glance and Focus Networks for Dynamic Visual Recognition.

    Huang, Gao / Wang, Yulin / Lv, Kangchen / Jiang, Haojun / Huang, Wenhui / Qi, Pengfei / Song, Shiji

    IEEE transactions on pattern analysis and machine intelligence

    2023  Volume 45, Issue 4, Page(s) 4605–4621

    Abstract Spatial redundancy widely exists in visual recognition tasks, i.e., discriminative features in an image or video frame usually correspond to only a subset of pixels, while the remaining regions are irrelevant to the task at hand. Therefore, static models which process all the pixels with an equal amount of computation result in considerable redundancy in terms of time and space consumption. In this paper, we formulate the image recognition problem as a sequential coarse-to-fine feature learning process, mimicking the human visual system. Specifically, the proposed Glance and Focus Network (GFNet) first extracts a quick global representation of the input image at a low resolution scale, and then strategically attends to a series of salient (small) regions to learn finer features. The sequential process naturally facilitates adaptive inference at test time, as it can be terminated once the model is sufficiently confident about its prediction, avoiding further redundant computation. It is worth noting that the problem of locating discriminant regions in our model is formulated as a reinforcement learning task, thus requiring no additional manual annotations other than classification labels. GFNet is general and flexible as it is compatible with any off-the-shelf backbone models (such as MobileNets, EfficientNets and TSM), which can be conveniently deployed as the feature extractor. Extensive experiments on a variety of image classification and video recognition tasks and with various backbone models demonstrate the remarkable efficiency of our method. For example, it reduces the average latency of the highly efficient MobileNet-V3 on an iPhone XS Max by 1.3x without sacrificing accuracy. Code and pre-trained models are available at https://github.com/blackfeather-wang/GFNet-Pytorch.
    Language English
    Publishing date 2023-03-07
    Publishing country United States
    Document type Journal Article
    ISSN 1939-3539
    ISSN (online) 1939-3539
    DOI 10.1109/TPAMI.2022.3196959
    Database MEDical Literature Analysis and Retrieval System OnLINE
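
    To make the adaptive inference procedure in this abstract concrete, here is a minimal sketch of a confidence-gated glance-then-focus loop. The backbone, the random patch selection (learned by reinforcement learning in the paper), the simple prediction averaging and the exit threshold are all placeholder assumptions; the authors' implementation is in the linked repository.

      # Sketch: cheap low-resolution "glance", then iteratively "focus" on crops,
      # exiting early once the prediction is confident enough.
      import torch
      import torch.nn.functional as F
      import torchvision

      def glance_and_focus(image, backbone, num_steps=3, threshold=0.8, patch=96):
          """image: (1, 3, H, W); returns (class probabilities, steps actually used)."""
          small = F.interpolate(image, size=(96, 96), mode="bilinear", align_corners=False)
          probs = F.softmax(backbone(small), dim=1)            # the glance step
          for step in range(1, num_steps):
              if probs.max().item() >= threshold:              # confident: stop computing
                  return probs, step
              h, w = image.shape[-2:]                          # pick a crop to focus on
              top = torch.randint(0, h - patch + 1, (1,)).item()
              left = torch.randint(0, w - patch + 1, (1,)).item()
              crop = image[..., top:top + patch, left:left + patch]
              crop = F.interpolate(crop, size=(96, 96), mode="bilinear", align_corners=False)
              probs = (probs * step + F.softmax(backbone(crop), dim=1)) / (step + 1)
          return probs, num_steps

      backbone = torchvision.models.mobilenet_v3_small(num_classes=1000).eval()
      with torch.no_grad():
          probs, steps = glance_and_focus(torch.rand(1, 3, 224, 224), backbone)
      print(steps, probs.argmax().item())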

  4. Book ; Online: Joint Representation Learning for Text and 3D Point Cloud

    Huang, Rui / Pan, Xuran / Zheng, Henry / Jiang, Haojun / Xie, Zhifeng / Song, Shiji / Huang, Gao

    2023  

    Abstract Recent advancements in vision-language pre-training (e.g. CLIP) have shown that vision models can benefit from language supervision. While many models using language modality have achieved great success on 2D vision tasks, the joint representation learning of 3D point cloud with text remains under-explored due to the difficulty of 3D-Text data pair acquisition and the irregularity of 3D data structure. In this paper, we propose a novel Text4Point framework to construct language-guided 3D point cloud models. The key idea is utilizing 2D images as a bridge to connect the point cloud and the language modalities. The proposed Text4Point follows the pre-training and fine-tuning paradigm. During the pre-training stage, we establish the correspondence of images and point clouds based on the readily available RGB-D data and use contrastive learning to align the image and point cloud representations. Together with the well-aligned image and text features achieved by CLIP, the point cloud features are implicitly aligned with the text embeddings. Further, we propose a Text Querying Module to integrate language information into 3D representation learning by querying text embeddings with point cloud features. For fine-tuning, the model learns task-specific 3D representations under informative language guidance from the label set without 2D images. Extensive experiments demonstrate that our model shows consistent improvement on various downstream tasks, such as point cloud semantic segmentation, instance segmentation, and object detection. The code will be available here: https://github.com/LeapLabTHU/Text4Point
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject code 004
    Publishing date 2023-01-18
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
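
    The pre-training stage described here aligns point cloud and image representations with contrastive learning. A generic symmetric InfoNCE-style loss over matched pairs is sketched below; the encoders are omitted and the random embeddings are stand-ins, so this is not the Text4Point objective as implemented in the linked code.

      # Generic symmetric contrastive loss between paired point-cloud and image embeddings.
      import torch
      import torch.nn.functional as F

      def contrastive_loss(point_feat, image_feat, temperature=0.07):
          """point_feat, image_feat: (B, D) embeddings where row i of each is a matched pair."""
          point_feat = F.normalize(point_feat, dim=1)
          image_feat = F.normalize(image_feat, dim=1)
          logits = point_feat @ image_feat.t() / temperature   # (B, B) similarities
          targets = torch.arange(point_feat.size(0))           # positives lie on the diagonal
          return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

      # Toy usage with random tensors standing in for encoder outputs.
      print(contrastive_loss(torch.randn(8, 256), torch.randn(8, 256)).item())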

  5. Book ; Online: Pseudo-Q: Generating Pseudo Language Queries for Visual Grounding

    Jiang, Haojun / Lin, Yuanze / Han, Dongchen / Song, Shiji / Huang, Gao

    2022  

    Abstract Visual grounding, i.e., localizing objects in images according to natural language queries, is an important topic in visual language understanding. The most effective approaches for this task are based on deep learning, which generally require expensive manually labeled image-query or patch-query pairs. To eliminate the heavy dependence on human annotations, we present a novel method, named Pseudo-Q, to automatically generate pseudo language queries for supervised training. Our method leverages an off-the-shelf object detector to identify visual objects from unlabeled images, and then language queries for these objects are obtained in an unsupervised fashion with a pseudo-query generation module. Then, we design a task-related query prompt module to specifically tailor generated pseudo language queries for visual grounding tasks. Further, in order to fully capture the contextual relationships between images and language queries, we develop a visual-language model equipped with multi-level cross-modality attention mechanism. Extensive experimental results demonstrate that our method has two notable benefits: (1) it can reduce human annotation costs significantly, e.g., 31% on RefCOCO without degrading original model's performance under the fully supervised setting, and (2) without bells and whistles, it achieves superior or comparable performance compared to state-of-the-art weakly-supervised visual grounding methods on all the five datasets we have experimented. Code is available at https://github.com/LeapLabTHU/Pseudo-Q.

    Comment: Accepted by CVPR2022
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2022-03-16
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
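
    As a toy illustration of turning detector outputs into pseudo language queries, the snippet below templates a query from an object's label, a predicted attribute and its rough position. The detections, attributes and templates are invented; Pseudo-Q's generation module is considerably richer (see the linked repository).

      # Toy pseudo-query generation from hypothetical object-detector outputs.
      from dataclasses import dataclass
      from typing import List, Tuple

      @dataclass
      class Detection:
          label: str
          attribute: str                              # e.g. a predicted colour
          box: Tuple[float, float, float, float]      # x1, y1, x2, y2

      def horizontal_position(box, image_width):
          cx = (box[0] + box[2]) / 2
          if cx < image_width / 3:
              return "left"
          if cx > 2 * image_width / 3:
              return "right"
          return "middle"

      def generate_pseudo_queries(detections: List[Detection], image_width: float):
          """Return (query, box) pairs usable as pseudo supervision for grounding."""
          return [(f"{d.attribute} {d.label} on the {horizontal_position(d.box, image_width)}", d.box)
                  for d in detections]

      dets = [Detection("dog", "brown", (10, 40, 120, 200)),
              Detection("car", "red", (400, 60, 600, 180))]
      for query, box in generate_pseudo_queries(dets, image_width=640):
          print(query, box)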

  6. Article ; Online: Spatially Adaptive Feature Refinement for Efficient Inference.

    Han, Yizeng / Huang, Gao / Song, Shiji / Yang, Le / Zhang, Yitian / Jiang, Haojun

    IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

    2021  Volume 30, Page(s) 9345–9358

    Abstract Spatial redundancy commonly exists in the learned representations of convolutional neural networks (CNNs), leading to unnecessary computation on high-resolution features. In this paper, we propose a novel Spatially Adaptive feature Refinement (SAR) approach to reduce such superfluous computation. It performs efficient inference by adaptively fusing information from two branches: one conducts standard convolution on input features at a lower spatial resolution, and the other one selectively refines a set of regions at the original resolution. The two branches complement each other in feature learning, and both of them evoke much less computation than standard convolution. SAR is a flexible method that can be conveniently plugged into existing CNNs to establish models with reduced spatial redundancy. Experiments on CIFAR and ImageNet classification, COCO object detection and PASCAL VOC semantic segmentation tasks validate that the proposed SAR can consistently improve the network performance and efficiency. Notably, our results show that SAR only refines less than 40% of the regions in the feature representations of a ResNet for 97% of the samples in the validation set of ImageNet to achieve comparable accuracy with the original model, revealing the high computational redundancy in the spatial dimension of CNNs.
    MeSH term(s) Algorithms ; Neural Networks, Computer ; Semantics
    Language English
    Publishing date 2021-11-12
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0042
    ISSN (online) 1941-0042
    DOI 10.1109/TIP.2021.3125263
    Database MEDical Literature Analysis and Retrieval System OnLINE
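
    A minimal two-branch sketch of the idea in this abstract follows: standard convolution at reduced resolution everywhere, plus full-resolution convolution restricted to the most salient positions, with the results fused. The hard top-k mask and the dense-then-masked refinement are simplifying assumptions; the paper's efficiency gains come from computing the refinement branch sparsely.

      # Sketch of spatially adaptive refinement with a low-resolution branch and a
      # masked full-resolution branch (computed densely here only for clarity).
      import torch
      import torch.nn.functional as F
      from torch import nn

      class AdaptiveRefineBlock(nn.Module):
          def __init__(self, channels, refine_ratio=0.4):
              super().__init__()
              self.low = nn.Conv2d(channels, channels, 3, padding=1)
              self.high = nn.Conv2d(channels, channels, 3, padding=1)
              self.refine_ratio = refine_ratio

          def forward(self, x):
              # Branch 1: convolution at half resolution, upsampled back.
              low = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
              low = F.interpolate(self.low(low), size=x.shape[-2:], mode="bilinear",
                                  align_corners=False)
              # Branch 2: refine only the top-k most active spatial positions.
              saliency = x.abs().mean(dim=1, keepdim=True)                  # (B, 1, H, W)
              n = saliency.shape[-2] * saliency.shape[-1]
              k = max(1, int(self.refine_ratio * n))
              threshold = saliency.flatten(1).kthvalue(n - k + 1, dim=1).values
              mask = (saliency >= threshold.view(-1, 1, 1, 1)).float()
              return low * (1 - mask) + self.high(x) * mask                 # fuse branches

      print(AdaptiveRefineBlock(16)(torch.randn(2, 16, 32, 32)).shape)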

  7. Book ; Online: Deep Incubation: Training Large Models by Divide-and-Conquering

    Ni, Zanlin / Wang, Yulin / Yu, Jiangwei / Jiang, Haojun / Cao, Yue / Huang, Gao

    2022  

    Abstract Recent years have witnessed a remarkable success of large deep learning models. However, training these models is challenging due to high computational costs, painfully slow convergence, and overfitting issues. In this paper, we present Deep Incubation, a novel approach that enables the efficient and effective training of large models by dividing them into smaller sub-modules that can be trained separately and assembled seamlessly. A key challenge for implementing this idea is to ensure the compatibility of the independently trained sub-modules. To address this issue, we first introduce a global, shared meta model, which is leveraged to implicitly link all the modules together, and can be designed as an extremely small network with negligible computational overhead. Then we propose a module incubation algorithm, which trains each sub-module to replace the corresponding component of the meta model and accomplish a given learning task. Despite the simplicity, our approach effectively encourages each sub-module to be aware of its role in the target large model, such that the finally-learned sub-modules can collaborate with each other smoothly after being assembled. Empirically, our method outperforms end-to-end (E2E) training in terms of both final accuracy and training efficiency. For example, on top of ViT-Huge, it improves the accuracy by 2.7% on ImageNet or achieves similar performance with 4x less training time. Notably, the gains are significant for downstream tasks as well (e.g., object detection and image segmentation on COCO and ADE20K). Code is available at https://github.com/LeapLabTHU/Deep-Incubation.
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2022-12-08
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
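
    The module-incubation idea can be rendered as a toy: a small shared meta model defines the stages, and each larger sub-module is trained in isolation to replace its assigned meta stage before the trained sub-modules are assembled. The dimensions, optimizer, random data and the untrained (rather than pre-trained) meta model below are assumptions for illustration, not the authors' training recipe.

      # Toy module incubation: each sub-module learns to stand in for one stage of a
      # small frozen meta model, with the other meta stages fixed around it.
      import torch
      from torch import nn

      dim, num_stages, num_classes = 32, 3, 10
      meta_stages = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_stages)])
      head = nn.Linear(dim, num_classes)
      for p in list(meta_stages.parameters()) + list(head.parameters()):
          p.requires_grad_(False)            # shared meta model (pre-trained in the paper)

      def incubate(stage_idx, steps=200):
          sub = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
          opt = torch.optim.Adam(sub.parameters(), lr=1e-3)
          for _ in range(steps):
              x = torch.randn(64, dim)                         # random stand-in data
              y = torch.randint(0, num_classes, (64,))
              h = x
              for i, stage in enumerate(meta_stages):
                  h = sub(h) if i == stage_idx else stage(h)   # plug the trainee into slot i
              loss = nn.functional.cross_entropy(head(h), y)
              opt.zero_grad()
              loss.backward()
              opt.step()
          return sub

      # Sub-modules are trained independently (in parallel, in principle), then assembled.
      assembled = nn.Sequential(*[incubate(i) for i in range(num_stages)], head)
      print(assembled(torch.randn(2, dim)).shape)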

  8. Book ; Online: Glance and Focus Networks for Dynamic Visual Recognition

    Huang, Gao / Wang, Yulin / Lv, Kangchen / Jiang, Haojun / Huang, Wenhui / Qi, Pengfei / Song, Shiji

    2022  

    Abstract Spatial redundancy widely exists in visual recognition tasks, i.e., discriminative features in an image or video frame usually correspond to only a subset of pixels, while the remaining regions are irrelevant to the task at hand. Therefore, static models which process all the pixels with an equal amount of computation result in considerable redundancy in terms of time and space consumption. In this paper, we formulate the image recognition problem as a sequential coarse-to-fine feature learning process, mimicking the human visual system. Specifically, the proposed Glance and Focus Network (GFNet) first extracts a quick global representation of the input image at a low resolution scale, and then strategically attends to a series of salient (small) regions to learn finer features. The sequential process naturally facilitates adaptive inference at test time, as it can be terminated once the model is sufficiently confident about its prediction, avoiding further redundant computation. It is worth noting that the problem of locating discriminant regions in our model is formulated as a reinforcement learning task, thus requiring no additional manual annotations other than classification labels. GFNet is general and flexible as it is compatible with any off-the-shelf backbone models (such as MobileNets, EfficientNets and TSM), which can be conveniently deployed as the feature extractor. Extensive experiments on a variety of image classification and video recognition tasks and with various backbone models demonstrate the remarkable efficiency of our method. For example, it reduces the average latency of the highly efficient MobileNet-V3 on an iPhone XS Max by 1.3x without sacrificing accuracy. Code and pre-trained models are available at https://github.com/blackfeather-wang/GFNet-Pytorch.

    Comment: Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI). Journal version of arXiv:2010.05300 (NeurIPS 2020). The first two authors contributed equally
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2022-01-09
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  9. Article ; Online: Detecting Coal Pulverizing System Anomaly Using a Gated Recurrent Unit and Clustering.

    Chen, Zian / Yan, Zhiyu / Jiang, Haojun / Que, Zijun / Gao, Guozhen / Xu, Zhengguo

    Sensors (Basel, Switzerland)

    2020  Volume 20, Issue 11

    Abstract The coal pulverizing system is an important auxiliary system in thermal power generation systems. The working condition of a coal pulverizing system may directly affect the safety and economy of power generation. Prognostics and health management is an effective approach to ensure the reliability of coal pulverizing systems. As the coal pulverizing system is a typical dynamic and nonlinear high-dimensional system, it is difficult to construct accurate mathematical models used for anomaly detection. In this paper, a novel data-driven integrated framework for anomaly detection of the coal pulverizing system is proposed. A neural network model based on gated recurrent unit (GRU) networks, a type of recurrent neural network (RNN), is constructed to describe the temporal characteristics of high-dimensional data and predict the system condition value. Then, aiming at the prediction error, a novel unsupervised clustering algorithm for anomaly detection is proposed. The proposed framework is validated by a real case study from an industrial coal pulverizing system. The results show that the proposed framework can detect the anomaly successfully.
    Language English
    Publishing date 2020-06-08
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2052857-7
    ISSN (online) 1424-8220
    ISSN 1424-8220
    DOI 10.3390/s20113271
    Database MEDical Literature Analysis and Retrieval System OnLINE
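
    A compact sketch of the two-stage scheme described here: a GRU predicts the next multivariate sensor reading, and the resulting prediction errors are clustered so that the higher-error cluster is flagged as anomalous. The random series, window length, untrained network and the use of k-means are illustrative assumptions rather than the paper's algorithm.

      # GRU one-step prediction followed by clustering of the prediction errors.
      import numpy as np
      import torch
      from torch import nn
      from sklearn.cluster import KMeans

      class GRUPredictor(nn.Module):
          def __init__(self, n_features, hidden=32):
              super().__init__()
              self.gru = nn.GRU(n_features, hidden, batch_first=True)
              self.out = nn.Linear(hidden, n_features)

          def forward(self, x):                 # x: (batch, window, n_features)
              h, _ = self.gru(x)
              return self.out(h[:, -1])         # predict the reading after the window

      n_features, window = 4, 20
      model = GRUPredictor(n_features)          # training on healthy historical data omitted
      series = torch.randn(500, n_features)     # stand-in for pulverizing-system sensors

      with torch.no_grad():
          windows = torch.stack([series[i:i + window] for i in range(len(series) - window)])
          errors = (model(windows) - series[window:]).abs().mean(dim=1).numpy()

      # Cluster the errors; the cluster with the larger mean error is treated as anomalous.
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(errors.reshape(-1, 1))
      anomalous = int(np.argmax([errors[labels == c].mean() for c in (0, 1)]))
      print("points flagged as anomalous:", int((labels == anomalous).sum()))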

  10. Article ; Online: Noninvasive prenatal paternity testing by maternal plasma DNA sequencing in twin pregnancies

    Xie, Yifan / Zhu, Ning / Lin, Shaobin / Jiang, Haojun / Zhang, Yanyan / Zhang, Xiuqing / Liang, Wenfu / Chen, Fang / Ou, Xueling

    Electrophoresis. 2020 June, v. 41, no. 12, p. 1095–1102

    2020  

    Abstract SNPs, combined with massively parallel sequencing technology, have proven applicability in noninvasive prenatal paternity testing (NIPPT) for singleton pregnancies in our previous research, using circulating cell‐free DNA in maternal plasma. However, the feasibility of NIPPT in twin pregnancies has remained uncertain. As a pilot study, we developed a practical method to noninvasively determine the paternity of twin pregnancies by maternal plasma DNA sequencing based on a massively parallel sequencing platform. Blood samples were collected from 15 pregnant women (twin pregnancies at 9–18 weeks of gestation). Parental DNA and maternal plasma cell‐free DNA were analyzed with custom‐designed probes covering 5226 polymorphic SNP loci. A mathematical model for data interpretation was established, including the zygosity determination and paternity index calculations. Each plasma sample was independently tested against the alleged father and 90 unrelated males. As a result, the zygosity in each twin case was correctly determined, prior to paternity analysis. Further, the correct biological father was successfully identified, and the paternity of all 90 unrelated males was excluded in each case. Our study demonstrates that NIPPT can be performed for twin pregnancies. This finding may contribute to development in NIPPT and diagnosis of certain genetic diseases.
    Keywords DNA ; electrophoresis ; mathematical models ; paternity ; pregnancy
    Language English
    Dates of publication 2020-06
    Size p. 1095-1102.
    Publishing place John Wiley & Sons, Ltd
    Document type Article ; Online
    ZDB-ID 619001-7
    ISSN (online) 1522-2683
    ISSN 0173-0835
    DOI 10.1002/elps.202000036
    Database NAL-Catalogue (AGRICOLA)
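
    For context on the paternity index calculations mentioned above, the sketch below computes the textbook per-SNP likelihood ratio for an ordinary mother/child/alleged-father trio and multiplies it across loci. It does not model the cell-free DNA mixture, fetal fraction or twin zygosity handled in the paper, and the genotypes and allele frequencies are invented.

      # Textbook per-SNP paternity index (likelihood ratio) for a simple trio.
      from itertools import product

      def transmit_prob(genotype, allele):
          """Probability that a parent with this genotype transmits the given allele."""
          return genotype.count(allele) / 2

      def paternity_index(mother, child, alleged_father, freq_a):
          """Genotypes are 2-character strings over alleles 'A'/'B'; freq_a is the population frequency of 'A'."""
          freqs = {"A": freq_a, "B": 1 - freq_a}

          def child_prob(paternal_allele_prob):
              total = 0.0
              for m_allele, f_allele in product("AB", repeat=2):
                  if sorted(m_allele + f_allele) == sorted(child):
                      total += transmit_prob(mother, m_allele) * paternal_allele_prob(f_allele)
              return total

          numerator = child_prob(lambda a: transmit_prob(alleged_father, a))  # alleged father is the father
          denominator = child_prob(lambda a: freqs[a])                        # a random man is the father
          return numerator / denominator

      # Combined index over independent SNPs is the product of the per-SNP indices.
      loci = [("AB", "AA", "AA", 0.6), ("AA", "AB", "BB", 0.3)]     # (mother, child, AF, freq_a)
      combined = 1.0
      for mother, child, father, p in loci:
          combined *= paternity_index(mother, child, father, p)
      print(f"combined paternity index over {len(loci)} SNPs = {combined:.2f}")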
