LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 10 of 45

  1. Article ; Online: New DRASTIC framework for groundwater vulnerability assessment: bivariate and multi-criteria decision-making approach coupled with metaheuristic algorithm

    Lakshminarayanan, Balaji / Ramasamy, Saravanan / Anuthaman, Sreemanthrarupini Nariangadu / Karuppanan, Saravanan

    Environ Sci Pollut Res. 2022 Jan., v. 29, no. 3, p. 4474-4496

    2022  

    Abstract Unplanned anthropogenic activities and erratic climate events pose serious threats to groundwater contamination. Therefore, the vulnerability assessment model becomes an essential tool for proper planning and protection of this precious resource. DRASTIC is an extensively adopted groundwater vulnerability assessment model that suffers from several shortcomings in its assessment due to the subjectivity of its rates and weights. In this paper, a new framework was developed to address the subjectivity of DRASTIC model using a bivariate, multi-criteria decision-making approach coupled with a metaheuristic algorithm. Shannon entropy (SE) and stepwise weight assessment ratio analysis (SWARA) methods were coupled with biogeography-based optimization (BBO) to modify rates and weights. The performance of developed models was assessed using area under the receiver operating characteristic (AU-ROC) curve and weighted F1 score. The Shannon-MH model yields better results with an AUC value of 0.8249, whereas other models resulted in an AUC value of 0.8186, 0.7714, 0.7672, and 0.7378 for SWARA-MH, SWARA, SE, and original DRASTIC models, respectively. It is also evident from weighted F1 score that Shannon-MH model produced maximum accuracy with a value of 0.452 followed by 0.437, 0.419, 0.370, and 0.234 for SWARA-MH, SWARA, SE, and original DRASTIC models, respectively. The results indicated that Shannon model coupled with metaheuristic algorithm outperforms other developed models in groundwater vulnerability assessment.
    Keywords algorithms ; climate ; entropy ; groundwater ; groundwater contamination ; models ; multi-criteria decision making ; risk assessment
    Language English
    Dates of publication 2022-01
    Size p. 4474-4496.
    Publishing place Springer Berlin Heidelberg
    Document type Article ; Online
    ZDB-ID 1178791-0
    ISSN 1614-7499 ; 0944-1344
    ISSN (online) 1614-7499
    ISSN 0944-1344
    DOI 10.1007/s11356-021-15966-0
    Database NAL-Catalogue (AGRICOLA)
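
    The Shannon-entropy (SE) weighting step described in the abstract above can be illustrated with a short sketch; this is not the authors' code, and the DRASTIC rating matrix below is hypothetical example data:

```python
# Illustrative sketch: Shannon-entropy weighting of the seven DRASTIC parameters.
import numpy as np

ratings = np.array([   # rows = grid cells, columns = D, R, A, S, T, I, C ratings (1-10), hypothetical
    [7, 8, 6, 9, 5, 4, 8],
    [5, 6, 8, 7, 9, 6, 4],
    [9, 4, 5, 6, 7, 8, 6],
    [6, 7, 9, 5, 4, 7, 9],
], dtype=float)

def entropy_weights(X):
    """Entropy weight method: parameters whose ratings vary more across cells
    (lower entropy) receive larger weights."""
    m = X.shape[0]
    P = X / X.sum(axis=0)                            # column-wise proportions
    k = 1.0 / np.log(m)
    E = -k * np.sum(P * np.log(P + 1e-12), axis=0)   # entropy per parameter
    d = 1.0 - E                                      # degree of diversification
    return d / d.sum()

w = entropy_weights(ratings)
drastic_index = ratings @ w                          # entropy-weighted vulnerability index per cell
print(np.round(w, 3), np.round(drastic_index, 2))
```

    In the classic DRASTIC index the weights are fixed expert-assigned values; the entropy weights above are data-driven, which is the subjectivity issue the paper targets.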

  2. Article ; Online: New DRASTIC framework for groundwater vulnerability assessment: bivariate and multi-criteria decision-making approach coupled with metaheuristic algorithm.

    Lakshminarayanan, Balaji / Ramasamy, Saravanan / Anuthaman, Sreemanthrarupini Nariangadu / Karuppanan, Saravanan

    Environmental science and pollution research international

    2021  Volume 29, Issue 3, Page(s) 4474–4496

    Abstract Unplanned anthropogenic activities and erratic climate events pose serious threats to groundwater contamination. Therefore, the vulnerability assessment model becomes an essential tool for proper planning and protection of this precious resource. DRASTIC is an extensively adopted groundwater vulnerability assessment model that suffers from several shortcomings in its assessment due to the subjectivity of its rates and weights. In this paper, a new framework was developed to address the subjectivity of DRASTIC model using a bivariate, multi-criteria decision-making approach coupled with a metaheuristic algorithm. Shannon entropy (SE) and stepwise weight assessment ratio analysis (SWARA) methods were coupled with biogeography-based optimization (BBO) to modify rates and weights. The performance of developed models was assessed using area under the receiver operating characteristic (AU-ROC) curve and weighted F1 score. The Shannon-MH model yields better results with an AUC value of 0.8249, whereas other models resulted in an AUC value of 0.8186, 0.7714, 0.7672, and 0.7378 for SWARA-MH, SWARA, SE, and original DRASTIC models, respectively. It is also evident from weighted F1 score that Shannon-MH model produced maximum accuracy with a value of 0.452 followed by 0.437, 0.419, 0.370, and 0.234 for SWARA-MH, SWARA, SE, and original DRASTIC models, respectively. The results indicated that Shannon model coupled with metaheuristic algorithm outperforms other developed models in groundwater vulnerability assessment.
    MeSH term(s) Algorithms ; Anthropogenic Effects ; Environmental Monitoring ; Groundwater ; Models, Theoretical
    Language English
    Publishing date 2021-08-18
    Publishing country Germany
    Document type Journal Article
    ZDB-ID 1178791-0
    ISSN 1614-7499 ; 0944-1344
    ISSN (online) 1614-7499
    ISSN 0944-1344
    DOI 10.1007/s11356-021-15966-0
    Database MEDical Literature Analysis and Retrieval System OnLINE
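
    A minimal sketch of the two validation metrics the abstract names (AUROC and weighted F1), using scikit-learn on hypothetical labels and scores:

```python
# Minimal sketch of the evaluation metrics named in the abstract; data are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

# 1 = validation well actually contaminated, 0 = not
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# continuous vulnerability index predicted by a DRASTIC variant
index = np.array([0.81, 0.35, 0.66, 0.92, 0.40, 0.55, 0.73, 0.28])

auroc = roc_auc_score(y_true, index)

# weighted F1 needs discrete classes, e.g. thresholding the index into low/high
y_pred = (index >= 0.6).astype(int)
f1_w = f1_score(y_true, y_pred, average="weighted")

print(f"AUROC = {auroc:.3f}, weighted F1 = {f1_w:.3f}")
```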

  3. Book ; Online: What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel

    Qin, Yao / Wang, Xuezhi / Lakshminarayanan, Balaji / Chi, Ed H. / Beutel, Alex

    2023  

    Abstract A wide breadth of research has devised data augmentation approaches that can improve both accuracy and generalization performance for neural networks. However, augmented data can end up being far from the clean training data and what is the appropriate label is less clear. Despite this, most existing work simply uses one-hot labels for augmented data. In this paper, we show re-using one-hot labels for highly distorted data might run the risk of adding noise and degrading accuracy and calibration. To mitigate this, we propose a generic method AutoLabel to automatically learn the confidence in the labels for augmented data, based on the transformation distance between the clean distribution and augmented distribution. AutoLabel is built on label smoothing and is guided by the calibration-performance over a hold-out validation set. We successfully apply AutoLabel to three different data augmentation techniques: the state-of-the-art RandAug, AugMix, and adversarial training. Experiments on CIFAR-10, CIFAR-100 and ImageNet show that AutoLabel significantly improves existing data augmentation techniques over models' calibration and accuracy, especially under distributional shift.

    Comment: Accepted to SaTML-2023
    Keywords Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-02-22
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
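
    The core labelling mechanism the abstract describes (smoothing the one-hot label of an augmented example according to its transformation distance) can be sketched as follows; AutoLabel learns the confidence from hold-out calibration, whereas the exponential mapping here is only an illustrative stand-in:

```python
# Sketch of confidence-dependent label smoothing for augmented data (illustrative only).
import numpy as np

def smoothed_label(true_class, num_classes, transform_distance, scale=1.0):
    confidence = np.exp(-transform_distance / scale)   # hypothetical mapping: far from clean data -> low confidence
    label = np.full(num_classes, (1.0 - confidence) / num_classes)
    label[true_class] += confidence                    # equivalent to label smoothing with eps = 1 - confidence
    return label

print(smoothed_label(true_class=2, num_classes=5, transform_distance=0.0))   # ~one-hot
print(smoothed_label(true_class=2, num_classes=5, transform_distance=2.0))   # heavily smoothed
```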

  4. Book ; Online: Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models

    Ge, Yunhao / Ren, Jie / Zhao, Jiaping / Chen, Kaifeng / Gallagher, Andrew / Itti, Laurent / Lakshminarayanan, Balaji

    2023  

    Abstract We focus on the challenge of out-of-distribution (OOD) detection in deep learning models, a crucial aspect in ensuring reliability. Despite considerable effort, the problem remains significantly challenging in deep learning models due to their propensity to output over-confident predictions for OOD inputs. We propose a novel one-class open-set OOD detector that leverages text-image pre-trained models in a zero-shot fashion and incorporates various descriptions of in-domain and OOD. Our approach is designed to detect anything not in-domain and offers the flexibility to detect a wide variety of OOD, defined via fine- or coarse-grained labels, or even in natural language. We evaluate our approach on challenging benchmarks including large-scale datasets containing fine-grained, semantically similar classes, distributionally shifted images, and multi-object images containing a mixture of in-domain and OOD objects. Our method shows superior performance over previous methods on all benchmarks. Code is available at https://github.com/gyhandy/One-Class-Anything

    Comment: 16 pages (including appendix and references), 3 figures
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2023-05-26
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
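
    A rough sketch of the zero-shot scoring idea: compare an image embedding from a text-image model against text embeddings of in-domain and OOD descriptions. The embeddings below are random placeholders standing in for real CLIP-style encoder outputs:

```python
# Sketch of zero-shot OOD scoring with text-image embeddings (placeholder embeddings).
import numpy as np

rng = np.random.default_rng(0)
def unit(v): return v / np.linalg.norm(v, axis=-1, keepdims=True)

image_emb = unit(rng.normal(size=512))
in_domain_text = unit(rng.normal(size=(3, 512)))    # e.g. "a photo of a dog", ...
ood_text = unit(rng.normal(size=(4, 512)))          # descriptions of everything else

sims_in = in_domain_text @ image_emb                # cosine similarities
sims_out = ood_text @ image_emb

# one simple score: softmax mass assigned to the in-domain prompts
logits = np.concatenate([sims_in, sims_out]) * 100.0   # CLIP-style temperature
probs = np.exp(logits - logits.max()); probs /= probs.sum()
in_domain_score = probs[:len(sims_in)].sum()
is_ood = in_domain_score < 0.5                      # threshold is a free parameter
print(round(float(in_domain_score), 3), bool(is_ood))
```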

  5. Book ; Online: Morse Neural Networks for Uncertainty Quantification

    Dherin, Benoit / Hu, Huiyi / Ren, Jie / Dusenberry, Michael W. / Lakshminarayanan, Balaji

    2023  

    Abstract We introduce a new deep generative model useful for uncertainty quantification: the Morse neural network, which generalizes the unnormalized Gaussian densities to have modes of high-dimensional submanifolds instead of just discrete points. Fitting the Morse neural network via a KL-divergence loss yields 1) a (unnormalized) generative density, 2) an OOD detector, 3) a calibration temperature, 4) a generative sampler, along with, in the supervised case, 5) a distance-aware classifier. The Morse network can be used on top of a pre-trained network to bring distance-aware calibration w.r.t. the training data. Because of its versatility, the Morse neural network unifies many techniques: e.g., the Entropic Out-of-Distribution Detector of (Macêdo et al., 2021) in OOD detection, the one class Deep Support Vector Description method of (Ruff et al., 2018) in anomaly detection, or the Contrastive One Class classifier in continuous learning (Sun et al., 2021). The Morse neural network has connections to support vector machines, kernel methods, and Morse theory in topology.

    Comment: Accepted to ICML workshop on Structured Probabilistic Inference & Generative Modeling 2023
    Keywords Statistics - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-07-02
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
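
    The Morse network itself is not reproduced here; as a far simpler stand-in, the sketch below only illustrates the general notion of a distance-aware, unnormalized density score that decays away from the training features (all features are hypothetical):

```python
# Generic distance-aware score: unnormalized Gaussian-kernel density over training features.
import numpy as np

rng = np.random.default_rng(1)
train_features = rng.normal(size=(200, 16))           # stand-in for penultimate-layer training features

def kernel_score(x, feats, bandwidth=2.0):
    d2 = ((feats - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean()  # in (0, 1]; ~0 far from all training points

in_dist = rng.normal(size=16)                         # looks like training data
far_away = rng.normal(size=16) + 8.0                  # shifted, i.e. OOD
print(kernel_score(in_dist, train_features) > kernel_score(far_away, train_features))  # True
```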

  6. Book ; Online: Self-Evaluation Improves Selective Generation in Large Language Models

    Ren, Jie / Zhao, Yao / Vu, Tu / Liu, Peter J. / Lakshminarayanan, Balaji

    2023  

    Abstract Safe deployment of large language models (LLMs) may benefit from a reliable method for assessing their generated content to determine when to abstain or to selectively generate. While likelihood-based metrics such as perplexity are widely employed, recent research has demonstrated the limitations of using sequence-level probability estimates given by LLMs as reliable indicators of generation quality. Conversely, LLMs have demonstrated strong calibration at the token level, particularly when it comes to choosing correct answers in multiple-choice questions or evaluating true/false statements. In this work, we reformulate open-ended generation tasks into token-level prediction tasks, and leverage LLMs' superior calibration at the token level. We instruct an LLM to self-evaluate its answers, employing either a multi-way comparison or a point-wise evaluation approach, with the option to include a ``None of the above'' option to express the model's uncertainty explicitly. We benchmark a range of scoring methods based on self-evaluation and evaluate their performance in selective generation using TruthfulQA and TL;DR. Through experiments with PaLM-2 and GPT-3, we demonstrate that self-evaluation based scores not only improve accuracy, but also correlate better with the overall quality of generated content.
    Keywords Computer Science - Computation and Language ; Computer Science - Artificial Intelligence ; Computer Science - Machine Learning
    Subject code 004
    Publishing date 2023-12-14
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
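
    A toy sketch of the selective-generation recipe in the abstract: re-score the model's candidate answers as a multiple-choice question with an explicit "None of the above" option, and abstain when token-level confidence is low. The log-probabilities are hypothetical stand-ins for the LLM's token scores:

```python
# Toy self-evaluation for selective generation; option log-probabilities are hypothetical.
import math

def self_eval_decision(option_logprobs, none_key="None of the above", threshold=0.5):
    # softmax over the candidate options' token log-probabilities
    mx = max(option_logprobs.values())
    exp = {k: math.exp(v - mx) for k, v in option_logprobs.items()}
    z = sum(exp.values())
    probs = {k: v / z for k, v in exp.items()}
    best = max(probs, key=probs.get)
    if best == none_key or probs[best] < threshold:
        return "abstain", probs
    return best, probs

logprobs = {"(A) Paris": -0.2, "(B) Lyon": -2.5, "None of the above": -3.0}
print(self_eval_decision(logprobs))   # selects (A) with high token-level confidence
```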

  7. Book ; Online: Exploring the Limits of Out-of-Distribution Detection

    Fort, Stanislav / Ren, Jie / Lakshminarayanan, Balaji

    2021  

    Abstract Near out-of-distribution detection (OOD) is a major challenge for deep neural networks. We demonstrate that large-scale pre-trained transformers can significantly improve the state-of-the-art (SOTA) on a range of near OOD tasks across different data modalities. For instance, on CIFAR-100 vs CIFAR-10 OOD detection, we improve the AUROC from 85% (current SOTA) to more than 96% using Vision Transformers pre-trained on ImageNet-21k. On a challenging genomics OOD detection benchmark, we improve the AUROC from 66% to 77% using transformers and unsupervised pre-training. To further improve performance, we explore the few-shot outlier exposure setting where a few examples from outlier classes may be available; we show that pre-trained transformers are particularly well-suited for outlier exposure, and that the AUROC of OOD detection on CIFAR-100 vs CIFAR-10 can be improved to 98.7% with just 1 image per OOD class, and 99.46% with 10 images per OOD class. For multi-modal image-text pre-trained transformers such as CLIP, we explore a new way of using just the names of outlier classes as a sole source of information without any accompanying images, and show that this outperforms previous SOTA on standard vision OOD benchmark tasks.

    Comment: S.F. and J.R. contributed equally
    Keywords Computer Science - Machine Learning
    Subject code 006
    Publishing date 2021-06-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
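
    One standard feature-space score used in this line of work (not necessarily the exact detector of the paper) is a Mahalanobis distance to the in-distribution features; the random vectors below stand in for pre-trained transformer embeddings:

```python
# Mahalanobis-distance OOD score on (placeholder) pre-trained features.
import numpy as np

rng = np.random.default_rng(2)
feats_in = rng.normal(size=(500, 32))                 # in-distribution training features

mu = feats_in.mean(axis=0)
cov = np.cov(feats_in, rowvar=False) + 1e-6 * np.eye(feats_in.shape[1])
prec = np.linalg.inv(cov)

def mahalanobis_sq(x):
    d = x - mu
    return float(d @ prec @ d)                        # larger -> more likely OOD

print(mahalanobis_sq(rng.normal(size=32)) < mahalanobis_sq(rng.normal(size=32) + 5.0))  # True
```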

  8. Article: Hybrid optimization model for conjunctive use of surface and groundwater resources in water deficit irrigation system.

    Sampathkumar, Karthikeyan Moothampalayam / Ramasamy, Saravanan / Ramasubbu, Balamurugan / Karuppanan, Saravanan / Lakshminarayanan, Balaji

    Water science and technology : a journal of the International Association on Water Pollution Research

    2021  Volume 84, Issue 10-11, Page(s) 3055–3071

    Abstract The increasing demand for food production with limited available water resources poses a threat to agricultural activities. Conventional optimization algorithms increase the processing stage and perform in the space allocated from user. Therefore, the proposed work was used to design better performance results. The conjunctive allocation of water resources maximizes the net benefit of farmers. In this study, the novel hybrid optimization model developed is the first of its kind and was designed to resolve the sharing of water resource conflict among different reaches based on a genetic algorithm (GA), bacterial foraging optimization (BFO) and ant colony optimization (ACO) to maximize the net benefit of the water deficit in Sathanur reservoir command. The GA-based optimization model considered crop-related physical and economic parameters to derive optimal cropping patterns for three different conjunctive use policies and further allocation of surface and groundwater for different crops are enhanced with the BFO. The allocation of surface and groundwater for the head, middle and tail reaches obtained from the BFO is considered as an input to the ACO as a guiding mechanism to attain an optimal cropping pattern. Comparing the average productivity values, policy 3 (3.665 Rs/m
    MeSH term(s) Agricultural Irrigation ; Agriculture ; Groundwater ; Water ; Water Resources ; Water Supply
    Chemical Substances Water (059QF0KO0R)
    Language English
    Publishing date 2021-12-01
    Publishing country England
    Document type Journal Article
    ZDB-ID 764273-8
    ISSN 1996-9732 ; 0273-1223
    ISSN (online) 1996-9732
    ISSN 0273-1223
    DOI 10.2166/wst.2021.279
    Database MEDical Literature Analysis and Retrieval System OnLINE
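
    Only the GA stage lends itself to a compact illustration; the sketch below evolves crop areas under a joint surface/groundwater limit with entirely hypothetical coefficients, and omits the BFO and ACO stages of the paper's hybrid:

```python
# Toy genetic algorithm for crop-area allocation under a water-availability constraint.
import numpy as np

rng = np.random.default_rng(3)
benefit = np.array([52.0, 38.0, 61.0])        # net benefit per ha of each crop (hypothetical)
water_need = np.array([7.5, 4.2, 9.8])        # water demand per ha (hypothetical)
water_available = 900.0                       # combined surface + groundwater budget
max_area = 150.0                              # upper bound per crop (ha)

def fitness(areas):
    over = max(0.0, areas @ water_need - water_available)
    return areas @ benefit - 1000.0 * over    # heavy penalty for infeasible allocations

pop = rng.uniform(0, max_area, size=(60, 3))
for _ in range(200):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-30:]]                        # truncation selection
    mates = parents[rng.integers(0, 30, size=(30, 2))]
    alpha = rng.random((30, 1))
    children = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]  # blend crossover
    children += rng.normal(0, 3.0, children.shape)              # mutation
    pop = np.clip(np.vstack([parents, children]), 0, max_area)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(np.round(best, 1), round(fitness(best), 1))
```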

  9. Book ; Online: Bayesian Deep Ensembles via the Neural Tangent Kernel

    He, Bobby / Lakshminarayanan, Balaji / Teh, Yee Whye

    2020  

    Abstract We explore the link between deep ensembles and Gaussian processes (GPs) through the lens of the Neural Tangent Kernel (NTK): a recent development in understanding the training dynamics of wide neural networks (NNs). Previous work has shown that even in the infinite width limit, when NNs become GPs, there is no GP posterior interpretation to a deep ensemble trained with squared error loss. We introduce a simple modification to standard deep ensembles training, through addition of a computationally-tractable, randomised and untrainable function to each ensemble member, that enables a posterior interpretation in the infinite width limit. When ensembled together, our trained NNs give an approximation to a posterior predictive distribution, and we prove that our Bayesian deep ensembles make more conservative predictions than standard deep ensembles in the infinite width limit. Finally, using finite width NNs we demonstrate that our Bayesian deep ensembles faithfully emulate the analytic posterior predictive when available, and can outperform standard deep ensembles in various out-of-distribution settings, for both regression and classification tasks.
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject code 519
    Publishing date 2020-07-11
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
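
    The key modification the abstract describes (adding a fixed, untrainable randomised function to each ensemble member before training with squared error) can be sketched in a few lines of PyTorch; the architecture and toy data are arbitrary choices, not the paper's setup:

```python
# Deep ensemble where each member carries a fixed random "prior" function.
import torch, torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.linspace(-2, 2, 64).unsqueeze(1)
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)      # toy regression data

members = []
for _ in range(5):
    trainable, prior = make_net(), make_net()
    for p in prior.parameters():
        p.requires_grad_(False)                        # the added function is never trained
    opt = torch.optim.Adam(trainable.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        pred = trainable(x) + prior(x)                 # member output includes the fixed prior
        loss = ((pred - y) ** 2).mean()                # squared error, as in the abstract
        loss.backward()
        opt.step()
    members.append((trainable, prior))

x_test = torch.linspace(-4, 4, 9).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([t(x_test) + p(x_test) for t, p in members])
print(preds.mean(0).squeeze(), preds.std(0).squeeze())  # ensemble mean and spread
```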

  10. Book ; Online: Test Sample Accuracy Scales with Training Sample Density in Neural Networks

    Ji, Xu / Pascanu, Razvan / Hjelm, Devon / Lakshminarayanan, Balaji / Vedaldi, Andrea

    2021  

    Abstract Intuitively, one would expect accuracy of a trained neural network's prediction on test samples to correlate with how densely the samples are surrounded by seen training samples in representation space. We find that a bound on empirical training error smoothed across linear activation regions scales inversely with training sample density in representation space. Empirically, we verify this bound is a strong predictor of the inaccuracy of the network's prediction on test samples. For unseen test sets, including those with out-of-distribution samples, ranking test samples by their local region's error bound and discarding samples with the highest bounds raises prediction accuracy by up to 20% in absolute terms for image classification datasets, on average over thresholds.

    Comment: CoLLAs 2022 oral
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Statistics - Machine Learning
    Publishing date 2021-06-15
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
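
    A simplified proxy for the paper's idea: rank test points by a k-nearest-neighbour distance to the training representations (a crude density estimate; the paper's bound is computed over linear activation regions, not reproduced here) and discard the most isolated ones. Representations below are synthetic:

```python
# k-NN distance in representation space as a density proxy for filtering test samples.
import numpy as np

rng = np.random.default_rng(4)
train_repr = rng.normal(size=(1000, 8))                   # representations of training samples
test_repr = np.vstack([rng.normal(size=(50, 8)),          # in-distribution test points
                       rng.normal(size=(10, 8)) + 6.0])   # sparsely supported / OOD points

def knn_distance(x, refs, k=10):
    d = np.linalg.norm(refs - x, axis=1)
    return np.sort(d)[:k].mean()                          # small -> dense training neighbourhood

scores = np.array([knn_distance(t, train_repr) for t in test_repr])
keep = scores <= np.quantile(scores, 0.8)                 # discard the 20% lowest-density samples
print(keep.sum(), "of", len(test_repr), "test samples retained")
```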
