LIVIVO - The Search Portal for Life Sciences


Search results

Results 1–10 of 23


  1. Article ; Online: Population-Based Hyperparameter Tuning With Multitask Collaboration.

    Li, Wendi / Wang, Ting / Ng, Wing W Y

    IEEE transactions on neural networks and learning systems

    2023  Volume 34, Issue 9, Page(s) 5719–5731

    Abstract Population-based optimization methods are widely used for hyperparameter (HP) tuning for a given specific task. In this work, we propose the population-based hyperparameter tuning with multitask collaboration (PHTMC), a general multitask collaborative framework with parallel and sequential phases for population-based HP tuning methods. In the parallel HP tuning phase, a shared population for all tasks is kept, and the intertask relatedness is considered both to yield a better generalization ability and to avoid data bias toward a single task. In the sequential HP tuning phase, a surrogate model is built for each newly added task so that metainformation from the existing tasks can be extracted and used to help initialize the new task. Experimental results show significant improvements in the generalization abilities of neural networks trained with the PHTMC and better performance achieved by multitask metalearning. Moreover, visualizations of the solution distribution and the autoencoder's reconstruction are compared between the PHTMC and a single-task population-based HP tuning method to analyze the properties of the multitask collaboration.
    Language English
    Publishing date 2023-09-01
    Publishing country United States
    Document type Journal Article
    ISSN 2162-2388
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2021.3130896
    Database MEDical Literature Analysis and Retrieval System OnLINE
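
    The parallel phase described in the abstract keeps one population of hyperparameter candidates and scores it across all tasks. Below is a minimal, illustrative Python sketch of that shared-population idea; the toy objective, mutation scheme, and all names are assumptions for illustration, not the authors' PHTMC algorithm.

    import random

    def evaluate(hp, task):
        # Stand-in objective: each toy task prefers a different learning rate.
        target = {"task_a": 0.1, "task_b": 0.01}[task]
        return -abs(hp["lr"] - target)          # higher is better

    def shared_population_search(tasks, pop_size=8, generations=20, seed=0):
        rng = random.Random(seed)
        population = [{"lr": 10 ** rng.uniform(-4, 0)} for _ in range(pop_size)]
        # Fitness = mean score over all tasks, so no single task dominates.
        fitness = lambda hp: sum(evaluate(hp, t) for t in tasks) / len(tasks)
        for _ in range(generations):
            survivors = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
            # Refill the population by perturbing survivors (simple mutation).
            children = [{"lr": max(1e-5, hp["lr"] * 10 ** rng.uniform(-0.3, 0.3))}
                        for hp in survivors]
            population = survivors + children
        return max(population, key=fitness)

    print(shared_population_search(["task_a", "task_b"]))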


  2. Article ; Online: BASS: Broad Network Based on Localized Stochastic Sensitivity.

    Wang, Ting / Zhang, Mingyang / Zhang, Jianjun / Ng, Wing W Y / Chen, C L Philip

    IEEE transactions on neural networks and learning systems

    2024  Volume 35, Issue 2, Page(s) 1681–1695

    Abstract The training of the standard broad learning system (BLS) concerns the optimization of its output weights via the minimization of both the training mean square error (MSE) and a penalty term. However, this degrades the generalization capability and robustness of the BLS in complex and noisy environments, especially when small perturbations or noise appear in the input data. Therefore, this work proposes a broad network based on localized stochastic sensitivity (BASS) algorithm to tackle the issue of noise or input perturbations from a local perturbation perspective. The localized stochastic sensitivity (LSS) improves the network's noise robustness by considering unseen samples located within a Q-neighborhood of the training samples, which enhances the generalization capability of BASS with respect to noisy and perturbed data. Then, three incremental learning algorithms are derived to update BASS quickly when new samples arrive or the network needs to be expanded, without retraining the entire model. Owing to the inherent advantages of the LSS, extensive experimental results on 13 benchmark datasets show that BASS yields better accuracies on various regression and classification problems. For instance, BASS uses fewer parameters (12.6 million) to yield 1% higher Top-1 accuracy than AlexNet (60 million parameters) on the large-scale ImageNet (ILSVRC2012) dataset.
    Language English
    Publishing date 2024-02-05
    Publishing country United States
    Document type Journal Article
    ISSN 2162-2388
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2022.3184846
    Database MEDical Literature Analysis and Retrieval System OnLINE
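
    The localized stochastic sensitivity in the abstract considers unseen samples within a Q-neighborhood of the training samples. A minimal sketch of one way to express such a penalty: a Monte-Carlo estimate of how much a model's outputs change under small uniform input perturbations, added to an ordinary training loss. The model, weights, and sample counts are illustrative assumptions, not the BASS formulation.

    import torch

    def localized_sensitivity(model, x, q=0.05, n_samples=8):
        # Monte-Carlo estimate of E_delta ||f(x + delta) - f(x)||^2, delta ~ U(-q, q)^d.
        with torch.no_grad():
            base = model(x)
        total = 0.0
        for _ in range(n_samples):
            delta = (torch.rand_like(x) * 2 - 1) * q      # uniform in [-q, q]
            total = total + ((model(x + delta) - base) ** 2).sum(dim=1).mean()
        return total / n_samples

    # Usage: add the penalty to the usual training loss.
    model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, 3))
    x, y = torch.randn(64, 10), torch.randint(0, 3, (64,))
    loss = (torch.nn.functional.cross_entropy(model(x), y)
            + 0.1 * localized_sensitivity(model, x))
    loss.backward()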


  3. Article ; Online: KNNENS: A k-Nearest Neighbor Ensemble-Based Method for Incremental Learning Under Data Stream With Emerging New Classes.

    Zhang, Jianjun / Wang, Ting / Ng, Wing W Y / Pedrycz, Witold

    IEEE transactions on neural networks and learning systems

    2023  Volume 34, Issue 11, Page(s) 9520–9527

    Abstract In this brief, we investigate the problem of incremental learning under a data stream with emerging new classes (SENC). Existing approaches in the literature encounter the following problems: 1) yielding a high false positive rate for the new class; 2) having a long prediction time; and 3) requiring access to true labels for all instances, which is unrealistic and unacceptable in real-life streaming tasks. Therefore, we propose the k-Nearest Neighbor ENSemble-based method (KNNENS) to handle these problems. The KNNENS is effective at detecting the new class and maintains high classification performance for known classes. It is also efficient in terms of run time and does not require true labels of new-class instances for model updates, which is desirable in real-life streaming classification tasks. Experimental results show that the KNNENS achieves the best performance on four benchmark datasets and three real-world data streams in terms of accuracy and F1-measure and has a relatively fast run time compared to four reference methods. Code is available at https://github.com/Ntriver/KNNENS.
    Language English
    Publishing date 2023-10-27
    Publishing country United States
    Document type Journal Article
    ISSN 2162-2388
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2022.3149991
    Database MEDical Literature Analysis and Retrieval System OnLINE
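
    As a rough illustration of the stream setting in the abstract, the sketch below flags an instance as a candidate new class when its k-nearest-neighbor distances in the known-class data exceed a threshold, and otherwise assigns a majority-vote label. This conveys only the general idea; the actual KNNENS implementation is at https://github.com/Ntriver/KNNENS.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    class StreamNoveltyKNN:
        def __init__(self, k=5, quantile=0.95):
            self.k, self.quantile = k, quantile

        def fit(self, X, y):
            self.X_, self.y_ = np.asarray(X), np.asarray(y)
            self.nn_ = NearestNeighbors(n_neighbors=self.k).fit(self.X_)
            # Threshold = high quantile of each training point's mean kNN distance.
            dist, _ = self.nn_.kneighbors(self.X_)
            self.threshold_ = np.quantile(dist.mean(axis=1), self.quantile)
            return self

        def predict(self, X):
            dist, idx = self.nn_.kneighbors(np.asarray(X))
            labels = []
            for d, i in zip(dist, idx):
                if d.mean() > self.threshold_:
                    labels.append(-1)                 # -1 marks an emerging class
                else:
                    vals, counts = np.unique(self.y_[i], return_counts=True)
                    labels.append(vals[np.argmax(counts)])
            return np.array(labels)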


  4. Article ; Online: Improving domain generalization by hybrid domain attention and localized maximum sensitivity.

    Ng, Wing W Y / Zhang, Qin / Zhong, Cankun / Zhang, Jianjun

    Neural networks : the official journal of the International Neural Network Society

    2023  Volume 171, Page(s) 320–331

    Abstract Domain generalization has attracted much interest in recent years due to its practical application scenarios, in which the model is trained on data from various source domains but tested on data from an unseen target domain. Existing domain generalization methods treat all visual features, including irrelevant ones, with the same priority, which easily results in poor generalization performance of the trained model. In contrast, human beings have strong generalization capabilities and can distinguish images from different domains by focusing on important features while suppressing features irrelevant to the labels. Motivated by this observation, we propose a channel-wise and spatial-wise hybrid domain attention mechanism to force the model to focus on the more important features associated with the labels. In addition, models that are more robust to small perturbations of their inputs are expected to have higher generalization capability, which is preferable in domain generalization. Therefore, we propose to reduce the localized maximum sensitivity to small perturbations of the inputs in order to improve the network's robustness and generalization capability. Extensive experiments on the PACS, VLCS, and Office-Home datasets validate the effectiveness of the proposed method.
    MeSH term(s) Humans ; Generalization, Psychological ; Motivation
    Language English
    Publishing date 2023-12-14
    Publishing country United States
    Document type Journal Article
    ZDB-ID 740542-x
    ISSN 1879-2782 ; 0893-6080
    ISSN (online) 1879-2782
    ISSN 0893-6080
    DOI 10.1016/j.neunet.2023.12.014
    Database MEDical Literature Analysis and Retrieval System OnLINE
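
    The channel-wise and spatial-wise hybrid attention described in the abstract can be pictured with the small PyTorch module below: channel weights come from globally pooled features, spatial weights from a convolution over channel-pooled maps. Layer sizes and the exact layout are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class HybridAttention(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            # Channel attention: squeeze spatial dims, re-weight channels.
            self.channel_mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels), nn.Sigmoid())
            # Spatial attention: one conv over channel-pooled maps.
            self.spatial_conv = nn.Sequential(
                nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

        def forward(self, x):                          # x: (B, C, H, W)
            b, c, _, _ = x.shape
            w_channel = self.channel_mlp(x.mean(dim=(2, 3))).view(b, c, 1, 1)
            x = x * w_channel
            pooled = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)
            return x * self.spatial_conv(pooled)

    # Quick shape check.
    print(HybridAttention(32)(torch.randn(2, 32, 16, 16)).shape)   # (2, 32, 16, 16)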


  5. Article ; Online: Concept Preserving Hashing for Semantic Image Retrieval With Concept Drift.

    Tian, Xing / Ng, Wing W Y / Wang, Hui

    IEEE transactions on cybernetics

    2021  Volume 51, Issue 10, Page(s) 5184–5197

    Abstract Current hashing-based image retrieval methods mostly assume that the database of images is static. However, this assumption does not hold when databases are constantly updated (e.g., on the Internet) and the problem of concept drift arises. Online (also known as incremental) hashing methods have recently been proposed for image retrieval where the database is not static. However, they have not considered the concept drift problem. Moreover, they update hash functions dynamically by generating new hash codes for all accumulated data over time, which is clearly uneconomical. In order to solve these two problems, concept preserving hashing (CPH) is proposed. In contrast to the existing methods, CPH preserves the original concept, that is, the set of hash codes representing a concept is preserved over time, by learning a new set of hash functions that yield the same set of hash codes for images (old and new) of a concept. The objective function of CPH learning consists of three components: 1) isomorphic similarity; 2) hash codes partition balancing; and 3) heterogeneous similarity fitness. Experimental results on 11 concept drift scenarios show that CPH yields better retrieval precisions than existing methods and does not need to update the hash codes of previously stored images.
    Language English
    Publishing date 2021-10-12
    Publishing country United States
    Document type Journal Article
    ISSN 2168-2275
    ISSN (online) 2168-2275
    DOI 10.1109/TCYB.2019.2955130
    Database MEDical Literature Analysis and Retrieval System OnLINE
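
    One ingredient of the abstract, keeping the hash codes of an existing concept fixed while a new hash function is learned, can be sketched as a penalty that pulls the (tanh-relaxed) codes of stored exemplars toward their previously assigned codes. The networks, toy data, and loss weights below are placeholder assumptions; the full CPH objective has the three components named above.

    import torch

    bits, dim = 16, 128
    hash_net = torch.nn.Linear(dim, bits)             # tanh(Wx) relaxes sign(Wx)
    old_x = torch.randn(200, dim)                     # stored concept exemplars
    old_codes = torch.sign(torch.randn(200, bits))    # their existing hash codes
    new_x = torch.randn(64, dim)                      # newly arrived images
    sim = (torch.randn(64, 64) > 0).float()           # toy pairwise similarity

    opt = torch.optim.Adam(hash_net.parameters(), lr=1e-3)
    for _ in range(100):
        h_new = torch.tanh(hash_net(new_x))
        # Fit pairwise similarity on new data (inner products of relaxed codes).
        fit = ((h_new @ h_new.t() / bits - sim) ** 2).mean()
        # Concept preservation: codes of old exemplars stay where they were.
        preserve = ((torch.tanh(hash_net(old_x)) - old_codes) ** 2).mean()
        loss = fit + 1.0 * preserve
        opt.zero_grad(); loss.backward(); opt.step()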


  6. Article ; Online: Difference-Guided Representation Learning Network for Multivariate Time-Series Classification.

    Ma, Qianli / Chen, Zipeng / Tian, Shuai / Ng, Wing W Y

    IEEE transactions on cybernetics

    2022  Volume 52, Issue 6, Page(s) 4717–4727

    Abstract Multivariate time series (MTSs) are widely found in many important application fields, for example, medicine, multimedia, manufacturing, action recognition, and speech recognition. The accurate classification of MTS has become an important research topic. Traditional MTS classification methods do not explicitly model the temporal difference information of time series, which is in fact important because it reflects the dynamic evolution of the series. In this article, the difference-guided representation learning network (DGRL-Net) is proposed to guide the representation learning of time series with dynamic evolution information. The DGRL-Net consists of a difference-guided layer and a multiscale convolutional layer. First, in the difference-guided layer, we propose a difference gating LSTM to model the time dependency and dynamic evolution of the time series and obtain feature representations of both the raw and the difference series. Then, these two representations are used as two input channels of the multiscale convolutional layer to extract multiscale information. Extensive experiments demonstrate that the proposed model outperforms state-of-the-art methods on 18 MTS benchmark datasets and achieves competitive results on two skeleton-based action recognition datasets. Furthermore, an ablation study and visualization analysis are conducted to verify the effectiveness of the proposed model.
    MeSH term(s) Learning ; Time Factors
    Language English
    Publishing date 2022-06-16
    Publishing country United States
    Document type Journal Article
    ISSN 2168-2275
    ISSN (online) 2168-2275
    DOI 10.1109/TCYB.2020.3034755
    Database MEDical Literature Analysis and Retrieval System OnLINE
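
    The overall data flow in the abstract, a raw series and its temporal difference encoded separately and then combined in a multiscale convolutional stage, is sketched below. The recurrent encoders here are plain LSTMs and all sizes are illustrative assumptions; this is not the paper's difference gating LSTM.

    import torch
    import torch.nn as nn

    class DifferenceGuidedSketch(nn.Module):
        def __init__(self, n_vars, hidden=32, n_classes=4):
            super().__init__()
            self.rnn_raw = nn.LSTM(n_vars, hidden, batch_first=True)
            self.rnn_diff = nn.LSTM(n_vars, hidden, batch_first=True)
            # Multiscale convolutions over the two-channel representation.
            self.convs = nn.ModuleList(
                [nn.Conv1d(2, 8, kernel_size=k, padding=k // 2) for k in (3, 5, 7)])
            self.head = nn.Linear(3 * 8 * hidden, n_classes)

        def forward(self, x):                        # x: (batch, time, n_vars)
            diff = x[:, 1:, :] - x[:, :-1, :]        # temporal difference series
            h_raw = self.rnn_raw(x)[0][:, -1, :]     # last hidden state, (B, hidden)
            h_diff = self.rnn_diff(diff)[0][:, -1, :]
            z = torch.stack([h_raw, h_diff], dim=1)  # (B, 2, hidden) as two channels
            feats = [torch.relu(conv(z)).flatten(1) for conv in self.convs]
            return self.head(torch.cat(feats, dim=1))

    print(DifferenceGuidedSketch(n_vars=6)(torch.randn(8, 50, 6)).shape)  # (8, 4)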


  7. Article ; Online: HRadNet: A Hierarchical Radiomics-Based Network for Multicenter Breast Cancer Molecular Subtypes Prediction.

    Liang, Yinhao / Tang, Wenjie / Wang, Ting / Ng, Wing W Y / Chen, Siyi / Jiang, Kuiming / Wei, Xinhua / Jiang, Xinqing / Guo, Yuan

    IEEE transactions on medical imaging

    2024  Volume 43, Issue 3, Page(s) 1225–1236

    Abstract Breast cancer is a heterogeneous disease, and its molecular subtypes are closely related to treatment and prognosis. The goal of this work is therefore to differentiate between luminal and non-luminal subtypes of breast cancer. The hierarchical radiomics network (HRadNet) is proposed for breast cancer molecular subtype prediction based on dynamic contrast-enhanced magnetic resonance imaging. HRadNet fuses multilayer features with the metadata of images to take advantage of both conventional radiomics methods and general convolutional neural networks. A two-stage training mechanism is adopted to improve the generalization capability of the network for multicenter breast cancer data. An ablation study shows the effectiveness of each component of HRadNet. Furthermore, the influence of features from different layers and of metadata fusion is also analyzed. It reveals that selecting certain layers of features for a specified domain can yield further performance improvements. Experimental results on three datasets from different devices demonstrate the effectiveness of the proposed network. HRadNet also performs well when transferred to other domains without fine-tuning.
    MeSH term(s) Humans ; Female ; Breast Neoplasms/diagnostic imaging ; Breast Neoplasms/pathology ; Radiomics ; Neural Networks, Computer ; Magnetic Resonance Imaging/methods ; Contrast Media ; Retrospective Studies
    Chemical Substances Contrast Media
    Language English
    Publishing date 2024-03-05
    Publishing country United States
    Document type Multicenter Study ; Journal Article
    ZDB-ID 622531-7
    ISSN 1558-254X ; 0278-0062
    ISSN (online) 1558-254X
    ISSN 0278-0062
    DOI 10.1109/TMI.2023.3331301
    Database MEDical Literature Analysis and Retrieval System OnLINE
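
    The fusion idea in the abstract, combining features taken from several network layers with per-image metadata before classification, can be sketched as follows. The backbone, pooling, and metadata vector are placeholder assumptions, not HRadNet itself.

    import torch
    import torch.nn as nn

    class MultiLayerFusionSketch(nn.Module):
        def __init__(self, meta_dim=8, n_classes=2):
            super().__init__()
            self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.head = nn.Linear(16 + 32 + 64 + meta_dim, n_classes)

        def forward(self, image, metadata):
            f1 = self.block1(image)
            f2 = self.block2(f1)
            f3 = self.block3(f2)
            # Global-average-pool each layer, then fuse with the metadata vector.
            pooled = [f.mean(dim=(2, 3)) for f in (f1, f2, f3)]
            return self.head(torch.cat(pooled + [metadata], dim=1))

    print(MultiLayerFusionSketch()(torch.randn(4, 1, 64, 64), torch.randn(4, 8)).shape)  # (4, 2)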


  8. Article ; Online: LiSSA: Localized Stochastic Sensitive Autoencoders.

    Wang, Ting / Ng, Wing W Y / Pelillo, Marcello / Kwong, Sam

    IEEE transactions on cybernetics

    2021  Volume 51, Issue 5, Page(s) 2748–2760

    Abstract The training of an autoencoder (AE) focuses on the selection of connection weights via the minimization of both the training error and a regularization term. However, the ultimate goal of AE training is to autoencode future unseen samples correctly (i.e., good generalization). Minimizing the training error with different regularization terms only indirectly minimizes the generalization error. Moreover, the trained model may not be robust to small perturbations of the inputs, which may lead to poor generalization capability. In this paper, we propose a localized stochastic sensitive AE (LiSSA) to enhance the robustness of AEs with respect to input perturbations. With the localized stochastic sensitivity regularization, LiSSA reduces sensitivity to unseen samples with small differences (perturbations) from training samples. Meanwhile, LiSSA preserves the local connectivity from the original input space to the representation space, which yields more robust features (intermediate representations) for unseen samples. A classifier using these learned features achieves better generalization capability. Extensive experimental results on 36 benchmark datasets indicate that LiSSA significantly outperforms several classical and recent AE training methods on classification tasks.
    Language English
    Publishing date 2021-04-15
    Publishing country United States
    Document type Journal Article
    ISSN 2168-2275
    ISSN (online) 2168-2275
    DOI 10.1109/TCYB.2019.2923756
    Database MEDical Literature Analysis and Retrieval System OnLINE
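
    A minimal sketch of the training objective described in the abstract: an autoencoder reconstruction loss plus a stochastic sensitivity term that penalizes changes of the learned representation under small input perturbations. Sizes, the perturbation box, and the weighting are illustrative assumptions, not the exact LiSSA regularizer.

    import torch
    import torch.nn as nn

    enc = nn.Sequential(nn.Linear(20, 8), nn.Tanh())
    dec = nn.Sequential(nn.Linear(8, 20))
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

    x = torch.randn(256, 20)                         # toy training data
    for _ in range(200):
        code = enc(x)
        recon = ((dec(code) - x) ** 2).mean()        # standard reconstruction error
        delta = (torch.rand_like(x) * 2 - 1) * 0.05  # perturbation in a small box
        sensitivity = ((enc(x + delta) - code) ** 2).mean()
        loss = recon + 0.5 * sensitivity
        opt.zero_grad(); loss.backward(); opt.step()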


  9. Article ; Online: Hashing-Based Undersampling Ensemble for Imbalanced Pattern Classification Problems.

    Ng, Wing W Y / Xu, Shichao / Zhang, Jianjun / Tian, Xing / Rong, Tongwen / Kwong, Sam

    IEEE transactions on cybernetics

    2022  Volume 52, Issue 2, Page(s) 1269–1279

    Abstract Undersampling is a popular method for solving imbalanced classification problems. However, it may sometimes remove too many majority-class samples, leading to the loss of informative samples. In this article, the hashing-based undersampling ensemble (HUE) is proposed to deal with this problem by constructing diversified training subspaces for undersampling. Samples in the majority class are divided into many subspaces by a hashing method. Each subspace corresponds to a training subset that consists of most of the samples from this subspace and a few samples from surrounding subspaces. These training subsets, together with all minority-class samples, are used to train an ensemble of classification and regression tree classifiers. The proposed method is tested on 25 UCI datasets against state-of-the-art methods. Experimental results show that the HUE outperforms other methods and yields good results on highly imbalanced datasets.
    MeSH term(s) Algorithms ; Research Design
    Language English
    Publishing date 2022-02-16
    Publishing country United States
    Document type Journal Article
    ISSN 2168-2275
    ISSN (online) 2168-2275
    DOI 10.1109/TCYB.2020.3000754
    Database MEDical Literature Analysis and Retrieval System OnLINE
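
    The construction in the abstract, bucketing majority-class samples with a hashing method and training one classifier per bucket together with all minority samples, is sketched below with random-projection hashing and decision trees. The neighbouring-subspace sampling of the actual HUE is omitted, and all details are illustrative assumptions.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def hash_buckets(X, n_bits=3, seed=0):
        # Random-projection hashing: each sample gets an integer bucket id.
        rng = np.random.default_rng(seed)
        proj = rng.normal(size=(X.shape[1], n_bits))
        bits = (X @ proj > 0).astype(int)
        return bits @ (1 << np.arange(n_bits))

    def fit_hue_sketch(X, y, minority_label=1):
        X_min = X[y == minority_label]
        X_maj, y_maj = X[y != minority_label], y[y != minority_label]
        buckets = hash_buckets(X_maj)
        ensemble = []
        for b in np.unique(buckets):
            # One training subset per bucket: its majority samples + all minority samples.
            X_sub = np.vstack([X_maj[buckets == b], X_min])
            y_sub = np.concatenate([y_maj[buckets == b],
                                    np.full(len(X_min), minority_label)])
            ensemble.append(DecisionTreeClassifier().fit(X_sub, y_sub))
        return ensemble

    def predict_hue_sketch(ensemble, X):
        # Majority vote over the per-bucket trees (labels assumed to be 0/1).
        votes = np.stack([clf.predict(X) for clf in ensemble])
        return (votes.mean(axis=0) >= 0.5).astype(int)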


  10. Article ; Online: Generative face inpainting hashing for occluded face retrieval.

    Yang, Yuxiang / Tian, Xing / Ng, Wing W Y / Wang, Ran / Gao, Ying / Kwong, Sam

    International journal of machine learning and cybernetics

    2022  Volume 14, Issue 5, Page(s) 1725–1738

    Abstract COVID-19 has had a significant impact on individual lives, bringing a unique challenge for face retrieval under occlusion. In this paper, an occluded face retrieval method consisting of a generator, a discriminator, and a deep hashing retrieval network is proposed for face retrieval in large-scale face image datasets under a variety of occlusion situations. In the proposed method, occluded face images are first reconstructed using a face inpainting model, in which the adversarial loss, the reconstruction loss, and the hash-bits loss are combined for training. With the trained model, the hash codes of real face images and of the corresponding reconstructed face images are encouraged to be as similar as possible. Then, a deep hashing retrieval network is used to generate compact similarity-preserving hash codes from the reconstructed face images for better retrieval performance. Experimental results show that the proposed method can successfully generate reconstructed face images under occlusion. Meanwhile, the proposed deep hashing retrieval network achieves better retrieval performance for occluded face retrieval than existing state-of-the-art deep hashing retrieval methods.
    Language English
    Publishing date 2022-12-02
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 2572473-3
    ISSN 1868-808X ; 1868-8071
    ISSN (online) 1868-808X
    ISSN 1868-8071
    DOI 10.1007/s13042-022-01723-3
    Database MEDical Literature Analysis and Retrieval System OnLINE
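
    The combined generator-side training loss described in the abstract, an adversarial term, a reconstruction term, and a hash-bits term that pulls the code of the inpainted face toward the code of the original, can be sketched with tiny placeholder networks. Everything below is an illustrative assumption, not the paper's generator or hashing network, and the discriminator's own update step is omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    generator = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 1, 3, padding=1))
    discriminator = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 1))
    hash_net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 16), nn.Tanh())

    real = torch.rand(4, 1, 32, 32)
    mask = (torch.rand_like(real) > 0.3).float()        # 1 = visible, 0 = occluded
    occluded = real * mask

    fake = generator(occluded)
    adv = F.binary_cross_entropy_with_logits(discriminator(fake),
                                             torch.ones(4, 1))        # fool the discriminator
    recon = F.l1_loss(fake, real)                                     # rebuild the pixels
    hash_bits = F.mse_loss(hash_net(fake), hash_net(real).detach())   # match hash codes
    g_loss = adv + 10.0 * recon + 1.0 * hash_bits
    g_loss.backward()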

