LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 – 10 of 26

  1. Article: The likelihood of mixed hitting times

    Abbring, Jaap H. / Salimans, Tim

    Journal of econometrics. 2021 Aug., v. 223, no. 2

    2021  

    Abstract We present a method for computing the likelihood of a mixed hitting-time model that specifies durations as the first time a latent Lévy process crosses a heterogeneous threshold. This likelihood is not generally known in closed form, but its Laplace transform is. Our approach to its computation relies on numerical methods for inverting Laplace transforms that exploit special properties of the first passage times of Lévy processes. We use our method to implement a maximum likelihood estimator of the mixed hitting-time model in MATLAB. We illustrate the application of this estimator with an analysis of Kennan’s (1985) strike data.
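    The mixed hitting-time construction described above can be illustrated by simulation; a minimal sketch, assuming the simplest special case (a Brownian motion with drift as the latent Lévy process and a fixed threshold) rather than the paper's Laplace-inversion method:

```python
import numpy as np

def first_passage_times(mu, sigma, threshold, n_paths, dt=0.01, t_max=50.0, seed=0):
    """Simulate the first times a Brownian motion with drift mu and
    volatility sigma crosses a fixed positive threshold; returns
    np.inf for paths that have not crossed by t_max."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    times = np.full(n_paths, np.inf)
    for i in range(1, int(t_max / dt) + 1):
        x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        newly_hit = (x >= threshold) & np.isinf(times)
        times[newly_hit] = i * dt
        if not np.isinf(times).any():  # all paths have crossed
            break
    return times

# With positive drift, the mean hitting time is threshold / mu.
durations = first_passage_times(mu=1.0, sigma=0.5, threshold=1.0, n_paths=2000)
```

    For Brownian motion with positive drift the hitting time follows an inverse Gaussian distribution with mean threshold/mu, which the simulated mean should approximate.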
    Keywords crossing ; duration ; econometrics ; journals ; methodology ; processing time ; statistical analysis ; statistical models
    Language English
    Dates of publication 2021-08
    Size p. 361-375.
    Publishing place Elsevier B.V.
    Document type Article
    ZDB-ID 1460617-3
    ISSN 0304-4076
    DOI 10.1016/j.jeconom.2019.08.017
    Database NAL-Catalogue (AGRICOLA)

  2. Book ; Online: Classifier-Free Diffusion Guidance

    Ho, Jonathan / Salimans, Tim

    2022  

    Abstract Classifier guidance is a recently introduced method to trade off mode coverage and sample fidelity in conditional diffusion models post training, in the same spirit as low temperature sampling or truncation in other types of generative models. Classifier guidance combines the score estimate of a diffusion model with the gradient of an image classifier and thereby requires training an image classifier separate from the diffusion model. It also raises the question of whether guidance can be performed without a classifier. We show that guidance can be indeed performed by a pure generative model without such a classifier: in what we call classifier-free guidance, we jointly train a conditional and an unconditional diffusion model, and we combine the resulting conditional and unconditional score estimates to attain a trade-off between sample quality and diversity similar to that obtained using classifier guidance.
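    The combination step the abstract describes has a one-line form; a minimal sketch, assuming the common parameterization in which a guidance weight w extrapolates from the unconditional score estimate toward the conditional one:

```python
import numpy as np

def classifier_free_guidance(cond_score, uncond_score, w):
    """Blend conditional and unconditional score (or noise) estimates:
    w = 0 recovers the purely conditional model, while larger w trades
    sample diversity for fidelity."""
    return (1.0 + w) * cond_score - w * uncond_score
```

    In practice both estimates come from the same network, trained with the conditioning signal randomly dropped out so it can also act as an unconditional model.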

    Comment: A short version of this paper appeared in the NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications: https://openreview.net/pdf?id=qw8AKxfYbI
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence
    Subject code 006
    Publishing date 2022-07-25
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  3. Book ; Online: Progressive Distillation for Fast Sampling of Diffusion Models

    Salimans, Tim / Ho, Jonathan

    2022  

    Abstract Diffusion models have recently shown great promise for generative modeling, outperforming GANs on perceptual quality and autoregressive models at density estimation. A remaining downside is their slow sampling time: generating high quality samples takes many hundreds or thousands of model evaluations. Here we make two contributions to help eliminate this downside: First, we present new parameterizations of diffusion models that provide increased stability when using few sampling steps. Second, we present a method to distill a trained deterministic diffusion sampler, using many steps, into a new diffusion model that takes half as many sampling steps. We then keep progressively applying this distillation procedure to our model, halving the number of required sampling steps each time. On standard image generation benchmarks like CIFAR-10, ImageNet, and LSUN, we start out with state-of-the-art samplers taking as many as 8192 steps, and are able to distill down to models taking as few as 4 steps without losing much perceptual quality; achieving, for example, a FID of 3.0 on CIFAR-10 in 4 steps. Finally, we show that the full progressive distillation procedure does not take more time than it takes to train the original model, thus representing an efficient solution for generative modeling using diffusion at both train and test time.
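    The repeated-halving schedule described above can be sketched as a loop; `train_student` is a hypothetical callback standing in for one round of distillation training:

```python
def progressive_distillation(teacher_steps, target_steps, train_student):
    """Repeatedly distill a sampler into a student that uses half as
    many sampling steps, until target_steps is reached; returns the
    final step count and the number of distillation rounds performed."""
    steps, rounds = teacher_steps, 0
    while steps > target_steps:
        steps //= 2
        train_student(steps)  # hypothetical: one distillation round
        rounds += 1
    return steps, rounds
```

    Going from the abstract's 8192-step sampler down to 4 steps takes 11 such halving rounds.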

    Comment: Published as a conference paper at ICLR 2022
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Statistics - Machine Learning
    Publishing date 2022-02-01
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  4. Book ; Online: Simple diffusion

    Hoogeboom, Emiel / Heek, Jonathan / Salimans, Tim

    End-to-end diffusion for high resolution images

    2023  

    Abstract Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion model on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) it is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet.
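    Finding (1), adjusting the noise schedule for resolution, can be expressed as a shift of the log signal-to-noise ratio; a minimal sketch, assuming the commonly used shift of 2·log(base/d) with a reference resolution of 64 (both are assumptions here, not quoted from the record):

```python
import numpy as np

def shifted_logsnr(logsnr, resolution, base_resolution=64):
    """Shift a base log-SNR schedule so that images at a higher
    resolution are trained with relatively more noise."""
    return logsnr + 2.0 * np.log(base_resolution / resolution)
```

    The shift is negative for resolutions above the base, so the same schedule point corresponds to a noisier image at higher resolution.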
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning ; Statistics - Machine Learning
    Subject code 004
    Publishing date 2023-01-26
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  5. Article ; Online: Image Super-Resolution via Iterative Refinement.

    Saharia, Chitwan / Ho, Jonathan / Chan, William / Salimans, Tim / Fleet, David J / Norouzi, Mohammad

    IEEE transactions on pattern analysis and machine intelligence

    2023  Volume 45, Issue 4, Page(s) 4713–4726

    Abstract We present SR3, an approach to image Super-Resolution via Repeated Refinement. SR3 adapts denoising diffusion probabilistic models (Ho et al. 2020), (Sohl-Dickstein et al. 2015) to image-to-image translation, and performs super-resolution through a stochastic iterative denoising process. Output images are initialized with pure Gaussian noise and iteratively refined using a U-Net architecture that is trained on denoising at various noise levels, conditioned on a low-resolution input image. SR3 exhibits strong performance on super-resolution tasks at different magnification factors, on faces and natural images. We conduct human evaluation on a standard 8× face super-resolution task on CelebA-HQ for which SR3 achieves a fool rate close to 50%, suggesting photo-realistic outputs, while GAN baselines do not exceed a fool rate of 34%. We evaluate SR3 on a 4× super-resolution task on ImageNet, where SR3 outperforms baselines in human evaluation and classification accuracy of a ResNet-50 classifier trained on high-resolution images. We further show the effectiveness of SR3 in cascaded image generation, where a generative model is chained with super-resolution models to synthesize high-resolution images with competitive FID scores on the class-conditional 256×256 ImageNet generation challenge.
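    The stochastic iterative denoising process can be sketched as a loop; `denoise_step` below is a hypothetical stand-in for the trained U-Net update conditioned on the low-resolution input:

```python
import numpy as np

def iterative_refine(denoise_step, low_res, shape, num_steps, seed=0):
    """SR3-style sampling sketch: initialize the output with pure
    Gaussian noise and repeatedly apply a denoising update that is
    conditioned on the low-resolution image."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)  # pure Gaussian noise initialization
    for t in reversed(range(num_steps)):
        x = denoise_step(x, low_res, t)
    return x
```

    With a toy contraction in place of the learned step, the iterate converges toward the conditioning image, mirroring the refinement behaviour the abstract describes.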
    Language English
    Publishing date 2023-03-07
    Publishing country United States
    Document type Journal Article
    ISSN (online) 1939-3539
    DOI 10.1109/TPAMI.2022.3204461
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Book ; Online: Lossy Compression with Gaussian Diffusion

    Theis, Lucas / Salimans, Tim / Hoffman, Matthew D. / Mentzer, Fabian

    2022  

    Abstract We consider a novel lossy compression approach based on unconditional diffusion generative models, which we call DiffC. Unlike modern compression schemes which rely on transform coding and quantization to restrict the transmitted information, DiffC relies on the efficient communication of pixels corrupted by Gaussian noise. We implement a proof of concept and find that it works surprisingly well despite the lack of an encoder transform, outperforming the state-of-the-art generative compression method HiFiC on ImageNet 64x64. DiffC only uses a single model to encode and denoise corrupted pixels at arbitrary bitrates. The approach further provides support for progressive coding, that is, decoding from partial bit streams. We perform a rate-distortion analysis to gain a deeper understanding of its performance, providing analytical results for multivariate Gaussian data as well as theoretic bounds for general distributions. Furthermore, we prove that a flow-based reconstruction achieves a 3 dB gain over ancestral sampling at high bitrates.
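    The core idea, transmitting Gaussian-corrupted pixels and denoising them, can be sketched end to end (ignoring the reverse-channel coding that makes the noisy pixels cheap to communicate); the MMSE shrinkage denoiser below is a hypothetical stand-in for the diffusion model:

```python
import numpy as np

def diffc_roundtrip(x, denoise, sigma, seed=0):
    """Corrupt x with Gaussian noise at level sigma (standing in for
    the transmitted representation) and reconstruct with a denoiser."""
    rng = np.random.default_rng(seed)
    y = x + sigma * rng.standard_normal(x.shape)
    return y, denoise(y, sigma)

# For x ~ N(0, 1), the MMSE denoiser is simple shrinkage.
shrink = lambda y, s: y / (1.0 + s ** 2)
```

    Even this toy denoiser recovers a reconstruction with lower error than the noisy transmission itself, which is the property the single-model scheme exploits at every bitrate.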
    Keywords Statistics - Machine Learning ; Computer Science - Information Theory ; Computer Science - Machine Learning
    Subject code 003
    Publishing date 2022-06-17
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  7. Book ; Thesis: Essays in likelihood-based computational econometrics

    Salimans, Tim

    On the computational aspects of likelihood-based econometrics

    (Research series / Universiteit van Amsterdam ; Tinbergen Institute research series ; 562)

    2013  

    Author's details by Tim Salimans
    Series title Research series / Universiteit van Amsterdam
    Tinbergen Institute research series ; 562
    Keywords Probability theory ; Maximum likelihood estimation ; State-space model ; Stochastic process ; Regression analysis ; Modeling ; Theory
    Language English
    Size VIII, 196 pages, with graphs
    Publisher Thela Thesis
    Publishing place Amsterdam
    Document type Book ; Thesis
    Thesis note Also presented as doctoral dissertation: Erasmus University Rotterdam, 2013
    Note Summary in Dutch
    ISBN 9789036103589 ; 9036103584
    Database ECONomics Information System

  8. Article: Variable selection and functional form uncertainty in cross-country growth regressions

    Salimans, Tim

    Journal of econometrics. 2012 Dec., v. 171, no. 2

    2012  

    Abstract Regression analyses of cross-country economic growth data are complicated by two main forms of model uncertainty: the uncertainty in selecting explanatory variables and the uncertainty in specifying the functional form of the regression function. Most discussions in the literature address these problems independently, yet a joint treatment is essential. We present a new framework that makes such a joint treatment possible, using flexible nonlinear models specified by Gaussian process priors and addressing the variable selection problem by means of Bayesian model averaging. Using this framework, we extend the linear model to allow for parameter heterogeneity of the type suggested by new growth theory, while taking into account the uncertainty in selecting explanatory variables. Controlling for variable selection uncertainty, we confirm the evidence in favor of parameter heterogeneity presented in several earlier studies. However, controlling for functional form uncertainty, we find that the effects of many of the explanatory variables identified in the literature are not robust across countries and variable selections.
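    The variable-selection half of the framework can be illustrated with a toy version; a minimal sketch using linear submodels and a BIC approximation to each model's evidence (the paper itself uses Gaussian process priors, which this simplification omits):

```python
import numpy as np
from itertools import combinations

def bma_weights(X, y, max_vars=2):
    """Score every subset of up to max_vars regressors with BIC and
    convert the scores into Bayesian model-averaging weights."""
    n, p = X.shape
    models, bics = [], []
    for k in range(max_vars + 1):
        for idx in combinations(range(p), k):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in idx])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
            models.append(idx)
    b = np.array(bics)
    w = np.exp(-0.5 * (b - b.min()))  # relative evidence per model
    return models, w / w.sum()
```

    With data generated from a single relevant regressor, the posterior weight concentrates on models that include it, which is the behaviour the abstract relies on when controlling for selection uncertainty.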
    Keywords econometrics ; economic development ; linear models ; model uncertainty ; nonlinear models ; regression analysis
    Language English
    Dates of publication 2012-12
    Size p. 267-280.
    Publishing place Elsevier B.V.
    Document type Article
    ZDB-ID 1460617-3
    ISSN 0304-4076
    DOI 10.1016/j.jeconom.2012.06.007
    Database NAL-Catalogue (AGRICOLA)

  9. Book: Variable selection and functional form uncertainty in cross-country growth regressions

    Salimans, Tim

    (Discussion paper / Tinbergen Institute : 4, Econometrics ; 2011,012)

    2011  

    Author's details Tim Salimans
    Series title Discussion paper / Tinbergen Institute : 4, Econometrics ; 2011,012
    Keywords Economic growth ; Regression ; Modeling ; Nonparametric methods ; Bayesian statistics ; Time series analysis ; Theory
    Language English
    Size 20 pages, with graphs
    Publishing place Amsterdam et al.
    Document type Book
    Database ECONomics Information System

  10. Book ; Online: Milking CowMask for Semi-Supervised Image Classification

    French, Geoff / Oliver, Avital / Salimans, Tim

    2020  

    Abstract Consistency regularization is a technique for semi-supervised learning that underlies a number of strong results for classification with few labeled data. It works by encouraging a learned model to be robust to perturbations on unlabeled data. Here, we present a novel mask-based augmentation method called CowMask. Using it to provide perturbations for semi-supervised consistency regularization, we achieve a state-of-the-art result on ImageNet with 10% labeled data, with a top-5 error of 8.76% and top-1 error of 26.06%. Moreover, we do so with a method that is much simpler than many alternatives. We further investigate the behavior of CowMask for semi-supervised learning by running many smaller scale experiments on the SVHN, CIFAR-10 and CIFAR-100 data sets, where we achieve results competitive with the state of the art, indicating that CowMask is widely applicable. We open source our code at https://github.com/google-research/google-research/tree/master/milking_cowmask
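    The mask generation behind the method can be sketched by thresholding smoothed noise; a minimal sketch, assuming a cheap neighbor-averaging blur in place of a proper Gaussian filter, with the masked fraction set by a quantile:

```python
import numpy as np

def cow_mask(shape, p, smooth_iters=8, seed=0):
    """Generate a CowMask-style blobby boolean mask: smooth Gaussian
    noise, then threshold it so a fraction of roughly p is masked."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(shape)
    for _ in range(smooth_iters):  # crude blur via neighbor averaging
        z = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0)
                    + np.roll(z, 1, 1) + np.roll(z, -1, 1))
    return z < np.quantile(z, p)  # True marks masked pixels
```

    Smoothing before thresholding is what produces irregular cow-spot blobs rather than salt-and-pepper noise, giving the semantically meaningful perturbations that consistency regularization needs.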

    Comment: 11 pages, 2 figures, submitted to NeurIPS 2020
    Keywords Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2020-03-26
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
