LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 27

  1. Article ; Online: Abdominal Aorta Bullet Embolism: Presentation and Management.

    Tran, Dustin / Jones, Derek / Moore, Ronald / Guarneri, Ralph

    Annals of vascular surgery

    2021  Volume 74, Page(s) 524.e17–524.e21

    Abstract Penetrating cardiac injury (PCI) secondary to a gunshot wound presents with several complications and a mortality rate of 94% before hospital arrival. A current literature search reveals that embolism of bullet fragments after gunshot wounds is exceptionally rare; at the time of this case report, no published case of bullet embolism following left atrial penetration had been described. This case describes a 34-year-old male who presented to a level I trauma center after sustaining a gunshot wound to the posterior torso. Imaging demonstrated several fragments in the mid-thoracic region in addition to the primary bullet fragment located within the aorta at the bifurcation of the iliac arteries, likely due to left atrial penetration and subsequent embolization.
    MeSH term(s) Adult ; Aorta, Abdominal/diagnostic imaging ; Aorta, Abdominal/surgery ; Aortography ; Computed Tomography Angiography ; Foreign-Body Migration/diagnostic imaging ; Foreign-Body Migration/etiology ; Foreign-Body Migration/surgery ; Heart Injuries/diagnostic imaging ; Heart Injuries/etiology ; Heart Injuries/surgery ; Humans ; Male ; Treatment Outcome ; Wounds, Gunshot/complications
    Language English
    Publishing date 2021-04-06
    Publishing country Netherlands
    Document type Case Reports
    ZDB-ID 1027366-9
    ISSN (online) 1615-5947
    ISSN (print) 0890-5096
    DOI 10.1016/j.avsg.2021.02.028
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

  2. Article ; Online: Sampling the Variational Posterior with Local Refinement.

    Havasi, Marton / Snoek, Jasper / Tran, Dustin / Gordon, Jonathan / Hernández-Lobato, José Miguel

    Entropy (Basel, Switzerland)

    2021  Volume 23, Issue 11

    Abstract Variational inference is an optimization-based method for approximating the posterior distribution of the parameters in Bayesian probabilistic models. A key challenge of variational inference is to approximate the posterior with a distribution that is computationally tractable yet sufficiently expressive. We propose a novel method for generating samples from a highly flexible variational approximation. The method starts with a coarse initial approximation and generates samples by refining it in selected, local regions. This allows the samples to capture dependencies and multi-modality in the posterior, even when these are absent from the initial approximation. We demonstrate theoretically that our method always improves the quality of the approximation (as measured by the evidence lower bound). In experiments, our method consistently outperforms recent variational inference methods in terms of log-likelihood and ELBO across three example tasks: the Eight-Schools example (an inference task in a hierarchical model), training a ResNet-20 (Bayesian inference in a large neural network), and the Mushroom task (posterior sampling in a contextual bandit problem).
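
    The refinement idea can be illustrated in a few lines. Below is a minimal sketch, assuming a toy bimodal target and hand-picked step sizes (not the authors' exact algorithm): draw samples from a coarse Gaussian approximation, then refine each sample locally by gradient ascent on the log posterior.

        import numpy as np

        def log_posterior(theta):
            # Toy bimodal target: a mixture of two Gaussians (illustrative assumption).
            return np.logaddexp(-0.5 * (theta - 2.0) ** 2, -0.5 * (theta + 2.0) ** 2)

        def grad_log_posterior(theta, eps=1e-5):
            # Finite-difference gradient of the log posterior.
            return (log_posterior(theta + eps) - log_posterior(theta - eps)) / (2 * eps)

        rng = np.random.default_rng(0)
        mu, sigma = 0.0, 3.0                      # coarse initial approximation q(theta)
        samples = rng.normal(mu, sigma, size=1000)

        refined = samples.copy()
        for _ in range(50):                       # local refinement of each sample
            refined += 0.1 * grad_log_posterior(refined)

        # The refined samples concentrate around both posterior modes, capturing
        # multi-modality absent from the single-Gaussian initial approximation.
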
    Language English
    Publishing date 2021-11-08
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2014734-X
    ISSN (online) 1099-4300
    DOI 10.3390/e23111475
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

  3. Book ; Online: Hyperparameter Ensembles for Robustness and Uncertainty Quantification

    Wenzel, Florian / Snoek, Jasper / Tran, Dustin / Jenatton, Rodolphe

    2020  

    Abstract Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter-efficient. In this paper, we design ensembles not only over weights, but over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks, with MLP, LeNet, ResNet-20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles.
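
    A minimal sketch of the hyper-deep ensemble recipe, using scikit-learn MLPs as stand-ins for the paper's networks (the dataset, search space, and ensemble size here are assumptions): random search over hyperparameters, stratified across random initializations, then ensembling the top-scoring models.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

        rng = np.random.default_rng(0)
        candidates = []
        for _ in range(8):                         # random search over hyperparameters
            alpha = 10.0 ** rng.uniform(-5, -1)    # e.g. the L2 regularization strength
            for seed in range(3):                  # stratified over random initializations
                m = MLPClassifier(hidden_layer_sizes=(32,), alpha=alpha,
                                  random_state=seed, max_iter=300).fit(X_tr, y_tr)
                candidates.append((m.score(X_val, y_val), m))

        # Keep the top-scoring models and average their predictive probabilities.
        top = [m for _, m in sorted(candidates, key=lambda t: -t[0])[:4]]
        probs = np.mean([m.predict_proba(X_val) for m in top], axis=0)
        print("ensemble val acc:", (probs.argmax(1) == y_val).mean())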

    Comment: Accepted at NeurIPS 2020
    Keywords Computer Science - Machine Learning ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2020-06-24
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  4. Book ; Online: A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models

    Allingham, James Urquhart / Ren, Jie / Dusenberry, Michael W / Gu, Xiuye / Cui, Yin / Tran, Dustin / Liu, Jeremiah Zhe / Lakshminarayanan, Balaji

    2023  

    Abstract Contrastively trained text-image models have the remarkable ability to perform zero-shot classification, that is, classifying previously unseen images into categories that the model has never been explicitly trained to identify. However, these zero-shot classifiers need prompt engineering to achieve high accuracy. Prompt engineering typically requires hand-crafting a set of prompts for individual downstream tasks. In this work, we aim to automate this prompt engineering and improve zero-shot accuracy through prompt ensembling. In particular, we ask "Given a large pool of prompts, can we automatically score the prompts and ensemble those that are most suitable for a particular downstream dataset, without needing access to labeled validation data?". We demonstrate that this is possible. In doing so, we identify several pathologies in a naive prompt scoring method where the score can be easily overconfident due to biases in pre-training and test data, and we propose a novel prompt scoring method that corrects for the biases. Using our proposed scoring method to create a weighted average prompt ensemble, our method outperforms the equal-average ensemble, as well as hand-crafted prompts, on ImageNet, 4 of its variants, and 11 fine-grained classification benchmarks, all while being fully automatic, optimization-free, and not requiring access to labeled validation data.
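
    The mechanics of a weighted prompt ensemble can be sketched as follows; embed_text and its deterministic random vectors are hypothetical stand-ins for a real contrastive text-image encoder, and the uniform weights stand in for the paper's learned, label-free prompt scores.

        import zlib
        import numpy as np

        D = 64
        classes = ["cat", "dog"]
        templates = ["a photo of a {}", "a blurry photo of a {}", "art of a {}"]

        def embed_text(s):
            # Hypothetical stand-in for a text encoder: a deterministic unit vector.
            rng = np.random.default_rng(zlib.crc32(s.encode()))
            v = rng.normal(size=D)
            return v / np.linalg.norm(v)

        # Per-template weights (uniform here; the paper scores prompts without labels).
        weights = np.full(len(templates), 1.0 / len(templates))

        class_embs = []
        for c in classes:
            embs = np.stack([embed_text(t.format(c)) for t in templates])
            avg = weights @ embs                      # weighted-average prompt ensemble
            class_embs.append(avg / np.linalg.norm(avg))
        class_embs = np.stack(class_embs)

        image_emb = embed_text("a photo of a cat")    # stand-in for an image embedding
        print("predicted:", classes[int(np.argmax(class_embs @ image_emb))])
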

    Comment: Accepted at ICML 2023. 23 pages, 10 tables, 3 figures
    Keywords Computer Science - Machine Learning ; Computer Science - Computer Vision and Pattern Recognition ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2023-02-13
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  5. Book ; Online: Larger language models do in-context learning differently

    Wei, Jerry / Wei, Jason / Tay, Yi / Tran, Dustin / Webson, Albert / Lu, Yifeng / Chen, Xinyun / Liu, Hanxiao / Huang, Da / Zhou, Denny / Ma, Tengyu

    2023  

    Abstract We study how in-context learning (ICL) in language models is affected by semantic priors versus input-label mappings. We investigate two setups, ICL with flipped labels and ICL with semantically-unrelated labels, across various model families (GPT-3, InstructGPT, Codex, PaLM, and Flan-PaLM). First, experiments on ICL with flipped labels show that overriding semantic priors is an emergent ability of model scale. While small language models ignore flipped labels presented in-context and thus rely primarily on semantic priors from pretraining, large models can override semantic priors when presented with in-context exemplars that contradict priors, despite the stronger semantic priors that larger models may hold. We next study semantically-unrelated label ICL (SUL-ICL), in which labels are semantically unrelated to their inputs (e.g., foo/bar instead of negative/positive), thereby forcing language models to learn the input-label mappings shown in in-context exemplars in order to perform the task. The ability to do SUL-ICL also emerges primarily with scale, and large-enough language models can even perform linear classification in a SUL-ICL setting. Finally, we evaluate instruction-tuned models and find that instruction tuning strengthens both the use of semantic priors and the capacity to learn input-label mappings, but more of the former.
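
    The flipped-label setup is easy to make concrete. A sketch with invented exemplars (the paper's actual evaluation data and prompt format are not shown here):

        demos = [
            ("I loved this movie, it was wonderful", "positive"),
            ("Terrible plot and worse acting", "negative"),
        ]
        flip = {"positive": "negative", "negative": "positive"}

        def build_prompt(demos, query, flipped=False):
            # In-context exemplars, optionally with labels flipped to contradict
            # the semantic priors a pretrained model would bring to the task.
            lines = [f"Review: {x}\nSentiment: {flip[y] if flipped else y}"
                     for x, y in demos]
            lines.append(f"Review: {query}\nSentiment:")
            return "\n\n".join(lines)

        print(build_prompt(demos, "An absolute delight from start to finish",
                           flipped=True))
        # A model that overrides its semantic prior should now answer "negative".
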
    Keywords Computer Science - Computation and Language
    Publishing date 2023-03-07
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  6. Book ; Online: Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness

    Liu, Jeremiah Zhe / Lin, Zi / Padhy, Shreyas / Tran, Dustin / Bedrax-Weiss, Tania / Lakshminarayanan, Balaji

    2020  

    Abstract Bayesian neural networks (BNN) and deep ensembles are principled approaches to estimate the predictive uncertainty of a deep learning model. However, their practicality in real-time, industrial-scale applications is limited by their heavy memory and inference costs. This motivates us to study principled approaches to high-quality uncertainty estimation that require only a single deep neural network (DNN). By formalizing the uncertainty quantification as a minimax learning problem, we first identify input distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data in the input space, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs, by adding a weight normalization step during training and replacing the output layer with a Gaussian process. On a suite of vision and language understanding tasks and on modern architectures (Wide-ResNet and BERT), SNGP is competitive with deep ensembles in prediction, calibration and out-of-domain detection, and outperforms the other single-model approaches.
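
    A sketch of the SNGP recipe in PyTorch, assuming toy layer sizes and omitting the paper's Laplace-approximation covariance update: spectral-normalized hidden layers to preserve input distances, plus a random-Fourier-feature approximation to a Gaussian-process output layer.

        import math
        import torch
        import torch.nn as nn

        class TinySNGP(nn.Module):
            def __init__(self, d_in=16, d_hidden=64, n_rff=128, n_classes=2):
                super().__init__()
                # Spectral normalization bounds the Lipschitz constant of the
                # feature extractor, making it distance-preserving.
                self.body = nn.Sequential(
                    nn.utils.spectral_norm(nn.Linear(d_in, d_hidden)), nn.ReLU(),
                    nn.utils.spectral_norm(nn.Linear(d_hidden, d_hidden)), nn.ReLU(),
                )
                # Fixed random Fourier features approximate an RBF-kernel GP.
                self.register_buffer("W", torch.randn(d_hidden, n_rff))
                self.register_buffer("b", 2 * math.pi * torch.rand(n_rff))
                self.out = nn.Linear(n_rff, n_classes)   # trainable output weights

            def forward(self, x):
                h = self.body(x)
                phi = math.sqrt(2.0 / self.W.shape[1]) * torch.cos(h @ self.W + self.b)
                return self.out(phi)

        logits = TinySNGP()(torch.randn(4, 16))
        print(logits.shape)  # torch.Size([4, 2])
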
    Keywords Computer Science - Machine Learning ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2020-06-17
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  7. Book ; Online: Noise Contrastive Priors for Functional Uncertainty

    Hafner, Danijar / Tran, Dustin / Lillicrap, Timothy / Irpan, Alex / Davidson, James

    2018  

    Abstract Obtaining reliable uncertainty estimates of neural network predictions is a long-standing challenge. Bayesian neural networks have been proposed as a solution, but it remains open how to specify their prior. In particular, the common practice of an independent normal prior in weight space imposes relatively weak constraints on the function posterior, allowing it to generalize in unforeseen ways on inputs outside of the training distribution. We propose noise contrastive priors (NCPs) to obtain reliable uncertainty estimates. The key idea is to train the model to output high uncertainty for data points outside of the training distribution. NCPs do so using an input prior, which adds noise to the inputs of the current mini-batch, and an output prior, which is a wide distribution given these inputs. NCPs are compatible with any model that can output uncertainty estimates, are easy to scale, and yield reliable uncertainty estimates throughout training. Empirically, we show that NCPs prevent overfitting outside of the training distribution and result in uncertainty estimates that are useful for active learning. We demonstrate the scalability of our method on the flight delays data set, where we significantly improve upon previously published results.
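
    A minimal sketch of an NCP-style objective on a toy regression task (the noise scale, prior width, and trade-off weight are assumptions, not the paper's settings): the model's predictive distribution at noised inputs is pulled toward a wide output prior.

        import torch
        import torch.nn as nn
        import torch.distributions as dist

        # Small network that outputs a predictive mean and log-std per input.
        net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))

        def predictive(x):
            out = net(x)
            return dist.Normal(out[..., :1], out[..., 1:].exp())

        x = torch.linspace(-1, 1, 64).unsqueeze(-1)        # toy training inputs
        y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)

        x_noisy = x + 0.5 * torch.randn_like(x)            # input prior: noised inputs
        output_prior = dist.Normal(torch.zeros_like(y), 1.0)  # wide output prior

        nll = -predictive(x).log_prob(y).mean()            # fit the data
        ncp = dist.kl_divergence(output_prior, predictive(x_noisy)).mean()
        loss = nll + 0.1 * ncp                             # NCP-regularized objective
        loss.backward()
        print(float(loss))
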

    Comment: 12 pages, 6 figures
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2018-07-24
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  8. Book ; Online: Deep Classifiers with Label Noise Modeling and Distance Awareness

    Fortuin, Vincent / Collier, Mark / Wenzel, Florian / Allingham, James / Liu, Jeremiah / Tran, Dustin / Lakshminarayanan, Balaji / Berent, Jesse / Jenatton, Rodolphe / Kokiopoulou, Effrosyni

    2021  

    Abstract Uncertainty estimation in deep learning has recently emerged as a crucial area of interest to advance reliability and robustness in safety-critical applications. While there have been many proposed methods that either focus on distance-aware model uncertainties for out-of-distribution detection or on input-dependent label uncertainties for in-distribution calibration, both of these types of uncertainty are often necessary. In this work, we propose the HetSNGP method for jointly modeling the model and data uncertainty. We show that our proposed model affords a favorable combination of these two types of uncertainty and thus outperforms the baseline methods on some challenging out-of-distribution datasets, including CIFAR-100C, ImageNet-C, and ImageNet-A. Moreover, we propose HetSNGP Ensemble, an ensembled version of our method which additionally models uncertainty over the network parameters and outperforms other ensemble baselines.
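
    The data-uncertainty half of the combination can be sketched as a heteroscedastic output head that samples input-dependent logit noise (layer sizes and sample counts are assumptions); the distance-aware half would be an SNGP-style feature extractor as sketched under result 6 above.

        import torch
        import torch.nn as nn

        class HetHead(nn.Module):
            def __init__(self, d_in=32, n_classes=10, n_samples=20):
                super().__init__()
                self.mean = nn.Linear(d_in, n_classes)       # logit means
                self.log_scale = nn.Linear(d_in, n_classes)  # input-dependent noise
                self.n_samples = n_samples

            def forward(self, feats):
                mu, scale = self.mean(feats), self.log_scale(feats).exp()
                eps = torch.randn(self.n_samples, *mu.shape)
                # Average softmax over sampled logit perturbations: the spread of
                # the samples models input-dependent label noise.
                return torch.softmax(mu + scale * eps, dim=-1).mean(0)

        print(HetHead()(torch.randn(4, 32)).sum(-1))  # each row sums to 1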

    Comment: Published in TMLR
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2021-10-06
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  9. Book ; Online: Soft Calibration Objectives for Neural Networks

    Karandikar, Archit / Cain, Nicholas / Tran, Dustin / Lakshminarayanan, Balaji / Shlens, Jonathon / Mozer, Michael C. / Roelofs, Becca

    2021  

    Abstract Optimal decision making requires that classifiers produce uncertainty estimates consistent with their empirical accuracy. However, deep neural networks are often under- or over-confident in their predictions. Consequently, methods have been developed to improve the calibration of their predictive uncertainty both during training and post-hoc. In this work, we propose differentiable losses to improve calibration based on a soft (continuous) version of the binning operation underlying popular calibration-error estimators. When incorporated into training, these soft calibration losses achieve state-of-the-art single-model ECE across multiple datasets with less than 1% decrease in accuracy. For instance, we observe an 82% reduction in ECE (70% relative to the post-hoc rescaled ECE) in exchange for a 0.7% relative decrease in accuracy relative to the cross entropy baseline on CIFAR-100. When incorporated post-training, the soft-binning-based calibration error objective improves upon temperature scaling, a popular recalibration method. Overall, experiments across losses and datasets demonstrate that using calibration-sensitive procedures yields better uncertainty estimates under dataset shift than the standard practice of using a cross entropy loss and post-hoc recalibration methods.
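
    The soft-binning idea can be sketched directly: replace the hard bin assignment in the ECE estimator with a differentiable soft membership so the calibration error admits gradients (the temperature and bin count here are assumptions, not the paper's values).

        import torch

        def soft_ece(confidences, accuracies, n_bins=10, temperature=0.01):
            centers = (torch.arange(n_bins) + 0.5) / n_bins
            # Soft assignment of every prediction to every bin.
            member = torch.softmax(
                -(confidences[:, None] - centers) ** 2 / temperature, dim=1)
            mass = member.sum(0) + 1e-8
            bin_conf = (member * confidences[:, None]).sum(0) / mass
            bin_acc = (member * accuracies[:, None]).sum(0) / mass
            return ((mass / member.sum()) * (bin_conf - bin_acc).abs()).sum()

        conf = torch.rand(256, requires_grad=True)
        acc = (torch.rand(256) < conf.detach()).float()  # toy correctness indicators
        loss = soft_ece(conf, acc)
        loss.backward()                                  # differentiable, unlike hard ECE
        print(float(loss))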

    Comment: 17 pages total, 10 page main paper, 5 page appendix, 10 figures total, 8 figures in main paper, 2 figures in appendix
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence
    Subject code 006
    Publishing date 2021-07-30
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  10. Book ; Online: Revisiting the Calibration of Modern Neural Networks

    Minderer, Matthias / Djolonga, Josip / Romijnders, Rob / Hubis, Frances / Zhai, Xiaohua / Houlsby, Neil / Tran, Dustin / Lucic, Mario

    2021  

    Abstract Accurate estimation of predictive uncertainty (model calibration) is essential for the safe application of neural networks. Many instances of miscalibration in modern neural networks have been reported, suggesting a trend that newer, more accurate models produce poorly calibrated predictions. Here, we revisit this question for recent state-of-the-art image classification models. We systematically relate model calibration and accuracy, and find that the most recent models, notably those not using convolutions, are among the best calibrated. Trends observed in prior model generations, such as decay of calibration with distribution shift or model size, are less pronounced in recent architectures. We also show that model size and amount of pretraining do not fully explain these differences, suggesting that architecture is a major determinant of calibration properties.
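
    For reference, the hard-binned expected calibration error (ECE) that such calibration studies typically report, in a minimal NumPy sketch with simulated predictions:

        import numpy as np

        def ece(confidences, correct, n_bins=15):
            edges = np.linspace(0.0, 1.0, n_bins + 1)
            total, err = len(confidences), 0.0
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (confidences > lo) & (confidences <= hi)
                if mask.any():
                    # Gap between accuracy and mean confidence in this bin,
                    # weighted by the fraction of predictions falling in it.
                    gap = abs(correct[mask].mean() - confidences[mask].mean())
                    err += mask.sum() / total * gap
            return err

        rng = np.random.default_rng(0)
        conf = rng.uniform(0.5, 1.0, size=1000)
        correct = (rng.uniform(size=1000) < conf * 0.9)  # slightly overconfident model
        print("ECE:", round(ece(conf, correct.astype(float)), 3))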

    Comment: 35th Conference on Neural Information Processing Systems (NeurIPS 2021)
    Keywords Computer Science - Machine Learning ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 612
    Publishing date 2021-06-15
    Publishing country US
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
