LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-10 of 27

  1. Article ; Online: Research of wet string grid dust removal vehicle and creation of dust control area on tunnel working face.

    Deng, Huan / Chen, Shiqiang / Huang, Junxin / Wu, Zhirong / Rao, Ying / Qiu, Xinyi / Cheng, Jiujun

    Scientific reports

    2024  Volume 14, Issue 1, Page(s) 8292

    Abstract The spread of blast dust throughout the tunnel is a common problem in drill-and-blast tunneling; the key to solving it is the creation of a dust control area on the working face. To address this problem, a wet string grid dust removal crawler vehicle was developed. The vehicle is powered by a diesel generator; the generator's air cooler is used to produce airflow, which is coupled with the suction formed by the on-board axial flow fan to create a dust control area on the working face after blasting. The results show that when the frequency of the axial flow fan is set to 30 Hz, the airflow speed through the wet string grid section reaches 3.34 m/s and the dust removal efficiency peaks at 94.3%. Compared with not using the dust removal vehicle, when the air outlet of the air cooler faces front, horizontal front, and horizontal rear, the dust concentration is reduced by 74.37%, 92.39% and 50.53%, respectively. Finally, the optimized wet string grid dust removal crawler was deployed in the Dading tunnel, where the actual dust reduction efficiency was about 78.49%. These results provide an important technical means of improving the working environment in drill-and-blast tunnel construction.
    Language English
    Publishing date 2024-04-09
    Publishing country England
    Document type Journal Article
    ZDB-ID 2615211-3
    ISSN 2045-2322
    ISSN (online) 2045-2322
    DOI 10.1038/s41598-024-57748-x
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Book ; Online: Bootstrapping Objectness from Videos by Relaxed Common Fate and Visual Grouping

    Lian, Long / Wu, Zhirong / Yu, Stella X.

    2023  

    Abstract We study learning object segmentation from unlabeled videos. Humans can easily segment moving objects without knowing what they are. The Gestalt law of common fate, i.e., that things moving at the same speed belong together, has inspired unsupervised object discovery based on motion segmentation. However, common fate is not a reliable indicator of objectness: parts of an articulated or deformable object may not move at the same speed, whereas shadows or reflections of an object always move with it but are not part of it. Our insight is to bootstrap objectness by first learning image features from relaxed common fate and then refining them based on visual appearance grouping, both within the image itself and statistically across images. Specifically, we first learn an image segmenter in the loop of approximating optical flow with constant segment flow plus a small within-segment residual flow, and then refine it for more coherent appearance and statistical figure-ground relevance. On unsupervised video object segmentation, using only ResNet and convolutional heads, our model surpasses the state of the art by absolute gains of 7%, 9%, and 5% on DAVIS16, STv2, and FBMS59, respectively, demonstrating the effectiveness of our ideas. Our code is publicly available.

    Comment: Accepted by CVPR 2023. An extension of preprint 2212.08816. 19 pages, 11 figures
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2023-04-17
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
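
A minimal sketch of the "constant segment flow plus small within-segment residual flow" idea from the abstract above. The toy flow field, segment labels, and function name are illustrative assumptions, not the paper's code.

```python
import numpy as np

def segment_flow_approximation(flow, segments):
    """Replace the flow inside each segment by its mean (constant segment flow).

    flow:     (H, W, 2) optical flow field
    segments: (H, W) integer segment labels
    Returns (approx_flow, residual_flow).
    """
    approx = np.zeros_like(flow)
    for label in np.unique(segments):
        mask = segments == label
        approx[mask] = flow[mask].mean(axis=0)  # constant flow per segment
    return approx, flow - approx

# Toy example: two segments moving at different constant velocities.
flow = np.zeros((4, 4, 2))
segments = np.zeros((4, 4), dtype=int)
segments[:, 2:] = 1
flow[segments == 0] = [1.0, 0.0]
flow[segments == 1] = [0.0, 2.0]

approx, residual = segment_flow_approximation(flow, segments)
```

With perfectly constant per-segment motion the residual vanishes; on real flow, the size of the residual indicates how well a segment obeys (relaxed) common fate.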

  3. Book ; Online: Exploring Transferability for Randomized Smoothing

    Qiu, Kai / Zhang, Huishuai / Wu, Zhirong / Lin, Stephen

    2023  

    Abstract Training foundation models on extensive datasets and then finetuning them on specific tasks has emerged as the mainstream approach in artificial intelligence. However, the model robustness, which is a critical aspect for safety, is often optimized for each specific task rather than at the pretraining stage. In this paper, we propose a method for pretraining certifiably robust models that can be readily finetuned for adaptation to a particular task. A key challenge is dealing with the compromise between semantic learning and robustness. We address this with a simple yet highly effective strategy based on significantly broadening the pretraining data distribution, which is shown to greatly benefit finetuning for downstream tasks. Through pretraining on a mixture of clean and various noisy images, we find that surprisingly strong certified accuracy can be achieved even when finetuning on only clean images. Furthermore, this strategy requires just a single model to deal with various noise levels, thus substantially reducing computational costs in relation to previous works that employ multiple models. Despite using just one model, our method can still yield results that are on par with, or even superior to, existing multi-model methods.
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2023-12-14
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
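
The "pretraining on a mixture of clean and various noisy images" strategy can be sketched as a simple augmentation step. The candidate noise levels and the sampling policy below are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def noisy_mixture(images, sigmas=(0.0, 0.25, 0.5, 1.0), rng=None):
    """Perturb each image with Gaussian noise at a randomly chosen level.

    images: (N, ...) float array, e.g. pixel values in [0, 1]
    sigmas: candidate noise standard deviations; 0.0 keeps the image clean
    """
    rng = rng or np.random.default_rng(0)
    out = np.empty_like(images)
    for i, img in enumerate(images):
        sigma = rng.choice(sigmas)  # mix clean and noisy samples in one batch
        out[i] = img + rng.normal(0.0, sigma, size=img.shape)
    return out

batch = np.zeros((8, 16, 16, 3))
augmented = noisy_mixture(batch)
```

A single model trained on such a mixture sees all noise levels at once, which is the abstract's stated route to avoiding one model per noise level.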

  4. Article ; Online: Research progress on the etiology, clinical examination and treatment of peri-implantitis

    WU Zhirong / HUANG Shiguang

    口腔疾病防治 (Journal of Prevention and Treatment for Stomatological Diseases), Vol 26, Iss 6, Pp 401-405

    2018

    Abstract Peri-implantitis is an inflammatory disease that occurs around dental implants and damages both soft and hard tissues; its characteristic feature is bone loss. The major etiology of peri-implantitis is dental plaque, with implant overload, a history of periodontitis, smoking and diabetes as risk factors. The standards for the clinical diagnosis of peri-implantitis are bleeding on probing, suppuration, a peri-implant pocket depth ≥5 mm, and X-ray evidence. Treatment includes mechanical debridement, drug therapy, laser treatment and surgical treatment. Regular supportive peri-implant therapy can be effective for curing and preventing peri-implantitis. In this paper, the etiology, clinical examination and treatment of peri-implantitis are reviewed.
    Keywords Peri-implantitis ; Microbe ; Risk factors ; Diagnosis ; Treatment ; Medicine ; R
    Language Chinese
    Publishing date 2018-06-01T00:00:00Z
    Publisher Editorial Department of Journal of Prevention and Treatment for Stomatological Diseases
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  5. Article ; Online: Comparative study of fault tree analysis and 24Model: taking the cause analysis of the Quanzhou Xinjia Hotel collapse accident as an example.

    Yuan, Chenhui / Fu, Gui / Zhao, Jinkun / Wu, Zhirong / Lyu, Qian / Wang, Yuxin

    International journal of occupational safety and ergonomics : JOSE

    2023  Volume 30, Issue 1, Page(s) 108–118

    Abstract A comparative study was conducted to compare the accident cause analysis methods of fault tree analysis (FTA) and 24Model. A major accident, the Xinjia Hotel collapse, was selected as the research object; the causes of the accident were reanalysed and accident prevention countermeasures were designed based on 24Model and FTA, respectively, and the systematic characteristics of 24Model were summarized. The research shows that both 24Model and FTA can carry out risk assessment, accident cause analysis and preventive countermeasure design based on their own rules. Unlike FTA, 24Model has static and dynamic structures of specific forms; its definition of causes and factors is more comprehensive, and its analysis method is more hierarchical and normative. 24Model can analyse deep-level cultural and system causes, but its analysis process uses only qualitative, not quantitative, methods. 24Model has eight systematic characteristics, such as integrity, hierarchy and dynamics.
    MeSH term(s) Humans ; Accidents ; Accident Prevention ; Risk Assessment/methods
    Language English
    Publishing date 2023-10-03
    Publishing country England
    Document type Journal Article
    ZDB-ID 1335568-5
    ISSN (online) 2376-9130
    ISSN 1080-3548
    DOI 10.1080/10803548.2023.2259698
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Book ; Online: Associative Transformer

    Sun, Yuwei / Ochiai, Hideya / Wu, Zhirong / Lin, Stephen / Kanai, Ryota

    2023  

    Abstract Emerging from the pairwise attention in conventional Transformers, there is a growing interest in sparse attention mechanisms that align more closely with localized, contextual learning in the biological brain. Existing studies such as the Coordination method employ iterative cross-attention mechanisms with a bottleneck to enable the sparse association of inputs. However, these methods are parameter inefficient and fail in more complex relational reasoning tasks. To this end, we propose Associative Transformer (AiT) to enhance the association among sparsely attended input patches, improving parameter efficiency and performance in relational reasoning tasks. AiT leverages a learnable explicit memory, comprised of various specialized priors, with a bottleneck attention to facilitate the extraction of diverse localized features. Moreover, we propose a novel associative memory-enabled patch reconstruction with a Hopfield energy function. The extensive experiments in four image classification tasks with three different sizes of AiT demonstrate that AiT requires significantly fewer parameters and attention layers while outperforming Vision Transformers and a broad range of sparse Transformers. Additionally, AiT establishes new SOTA performance in the Sort-of-CLEVR dataset, outperforming the previous Coordination method.
    Keywords Computer Science - Machine Learning ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Neural and Evolutionary Computing
    Subject code 004
    Publishing date 2023-09-22
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  7. Book ; Online: Extreme Masking for Learning Instance and Distributed Visual Representations

    Wu, Zhirong / Lai, Zihang / Sun, Xiao / Lin, Stephen

    2022  

    Abstract The paper presents a scalable approach for learning spatially distributed visual representations over individual tokens and a holistic instance representation simultaneously. We use self-attention blocks to represent spatially distributed tokens, followed by cross-attention blocks to aggregate the holistic image instance. The core of the approach is the use of extremely large token masking (75%-90%) as the data augmentation for supervision. Our model, named ExtreMA, follows the plain BYOL approach where the instance representation from the unmasked subset is trained to predict that from the intact input. Instead of encouraging invariance across inputs, the model is required to capture informative variations in an image. The paper makes three contributions: 1) It presents random masking as a strong and computationally efficient data augmentation for Siamese representation learning. 2) With multiple sampling per instance, extreme masking greatly speeds up learning and improves performance with more data. 3) ExtreMA obtains stronger linear probing performance than masked modeling methods, and better transfer performance than prior contrastive models.

    Comment: Accepted in TMLR
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2022-06-09
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
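
The extreme token masking described above (dropping 75%-90% of patch tokens) can be sketched as follows; the token shapes and keep-index policy are illustrative assumptions, not the ExtreMA implementation.

```python
import numpy as np

def extreme_mask(tokens, mask_ratio=0.9, rng=None):
    """Drop `mask_ratio` of the tokens, returning the kept subset and indices.

    tokens: (N, D) array of patch tokens for one image
    """
    rng = rng or np.random.default_rng(0)
    n = tokens.shape[0]
    n_keep = max(1, int(round(n * (1.0 - mask_ratio))))
    keep = rng.permutation(n)[:n_keep]  # random subset of token indices
    return tokens[keep], keep

tokens = np.arange(196 * 4, dtype=float).reshape(196, 4)  # 14x14 patch grid
kept, idx = extreme_mask(tokens, mask_ratio=0.9)
```

Because only the kept subset is encoded, higher mask ratios make each forward pass cheaper, which is why multiple masked samples per instance remain affordable.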

  8. Book ; Online: Improving Unsupervised Video Object Segmentation with Motion-Appearance Synergy

    Lian, Long / Wu, Zhirong / Yu, Stella X.

    2022  

    Abstract We present IMAS, a method that segments the primary objects in videos without manual annotation in training or inference. Previous methods in unsupervised video object segmentation (UVOS) have demonstrated the effectiveness of motion as either input or supervision for segmentation. However, motion signals may be uninformative or even misleading in cases such as deformable objects and objects with reflections, causing unsatisfactory segmentation. In contrast, IMAS achieves Improved UVOS with Motion-Appearance Synergy. Our method has two training stages: 1) a motion-supervised object discovery stage that deals with motion-appearance conflicts through a learnable residual pathway; 2) a refinement stage with both low- and high-level appearance supervision to correct model misconceptions learned from misleading motion cues. Additionally, we propose motion-semantic alignment as a model-agnostic annotation-free hyperparam tuning method. We demonstrate its effectiveness in tuning critical hyperparams previously tuned with human annotation or hand-crafted hyperparam-specific metrics. IMAS greatly improves the segmentation quality on several common UVOS benchmarks. For example, we surpass previous methods by 8.3% on DAVIS16 benchmark with only standard ResNet and convolutional heads. We intend to release our code for future research and applications.

    Comment: 15 pages, 10 figures
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 004
    Publishing date 2022-12-17
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  9. Article ; Online: ImTooth: Neural Implicit Tooth for Dental Augmented Reality.

    Li, Hai / Zhai, Hongjia / Yang, Xingrui / Wu, Zhirong / Wu, Jianchao / Bao, Hujun / Zheng, Yihao / Wang, Haofan / Zhang, Guofeng

    IEEE transactions on visualization and computer graphics

    2023  Volume PP

    Abstract The combination of augmented reality (AR) and medicine is an important trend in current research. The powerful display and interaction capabilities of an AR system can assist doctors in performing more complex operations. Since the tooth itself is an exposed rigid-body structure, dental AR is a relatively hot research direction with application potential. However, none of the existing dental AR solutions are designed for wearable AR devices such as AR glasses. At the same time, these methods rely on high-precision scanning equipment or auxiliary positioning markers, which greatly increases the operational complexity and cost of clinical AR. In this work, we propose a simple and accurate neural-implicit model-driven dental AR system, named ImTooth, adapted for AR glasses. Based on the modeling capabilities and differentiable optimization properties of state-of-the-art neural implicit representations, our system fuses reconstruction and registration in a single network, greatly simplifying existing dental AR solutions and enabling reconstruction, registration, and interaction. Specifically, our method learns a scale-preserving voxel-based neural implicit model from multi-view images captured from a textureless plaster model of the tooth. Apart from color and surface, we also learn the consistent edge feature inside our representation. By leveraging the depth and edge information, our system can register the model to real images without additional training. In practice, our system uses a single Microsoft HoloLens 2 as the only sensor and display device. Experiments show that our method can reconstruct high-precision models and accomplish accurate registration. It is also robust to weak, repeating and inconsistent textures. We also show that our system can be easily integrated into dental diagnostic and therapeutic procedures, such as bracket placement guidance.
    Language English
    Publishing date 2023-02-23
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0506
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2023.3247459
    Database MEDical Literature Analysis and Retrieval System OnLINE

  10. Book ; Online: Debiased Learning from Naturally Imbalanced Pseudo-Labels

    Wang, Xudong / Wu, Zhirong / Lian, Long / Yu, Stella X.

    2022  

    Abstract Pseudo-labels are confident predictions made on unlabeled target data by a classifier trained on labeled source data. They are widely used for adapting a model to unlabeled data, e.g., in a semi-supervised learning setting. Our key insight is that pseudo-labels are naturally imbalanced due to intrinsic data similarity, even when a model is trained on balanced source data and evaluated on balanced target data. If we address this previously unknown imbalanced classification problem arising from pseudo-labels instead of ground-truth training labels, we could remove model biases towards false majorities created by pseudo-labels. We propose a novel and effective debiased learning method with pseudo-labels, based on counterfactual reasoning and adaptive margins: The former removes the classifier response bias, whereas the latter adjusts the margin of each class according to the imbalance of pseudo-labels. Validated by extensive experimentation, our simple debiased learning delivers significant accuracy gains over the state-of-the-art on ImageNet-1K: 26% for semi-supervised learning with 0.2% annotations and 9% for zero-shot learning. Our code is available at: https://github.com/frank-xwang/debiased-pseudo-labeling.

    Comment: Accepted by CVPR 2022
    Keywords Computer Science - Machine Learning ; Computer Science - Computation and Language ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2022-01-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
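
The adaptive-margin component can be sketched as a logit adjustment driven by pseudo-label frequencies. The logarithmic margin form below is an assumption for illustration, not necessarily the paper's exact formula.

```python
import numpy as np

def adaptive_margin_logits(logits, pseudo_label_counts, tau=1.0):
    """Penalize classes that dominate the pseudo-labels.

    logits: (N, C) classifier outputs
    pseudo_label_counts: (C,) pseudo-label frequency per class
    """
    freq = pseudo_label_counts / pseudo_label_counts.sum()
    margins = tau * np.log(freq + 1e-12)  # larger (less negative) for frequent classes
    return logits - margins  # frequent classes get their logits pushed down most

logits = np.zeros((2, 3))
counts = np.array([90.0, 9.0, 1.0])  # heavily imbalanced pseudo-labels
adjusted = adaptive_margin_logits(logits, counts)
# The majority class (index 0) now has the lowest adjusted logit.
```

The effect is to counter the false-majority bias the abstract describes: predictions are nudged away from classes that pseudo-labels over-represent.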
