LIVIVO - The Search Portal for Life Sciences


Search results

Results 1–10 of 212

  1. Book ; Online: Unnormalized Variational Bayes

    Saremi, Saeed

    2020  

    Abstract We unify empirical Bayes and variational Bayes for approximating unnormalized densities. This framework, named unnormalized variational Bayes (UVB), is based on formulating a latent variable model for the random variable $Y=X+N(0,\sigma^2 I_d)$ and using the evidence lower bound (ELBO), computed by a variational autoencoder, as a parametrization of the energy function of $Y$, which is then used to estimate $X$ with the empirical Bayes least-squares estimator. In this intriguing setup, the $\textit{gradient}$ of the ELBO with respect to noisy inputs plays the central role in learning the energy function. Empirically, we demonstrate that UVB has a higher capacity to approximate energy functions than the parametrization with MLPs as done in neural empirical Bayes (DEEN). We especially showcase $\sigma=1$, where the differences between UVB and DEEN become visible and qualitative in the denoising experiments. For this high level of noise, the distribution of $Y$ is heavily smoothed, and we demonstrate that one can traverse, in a single run without a restart, all MNIST classes in a variety of styles via walk-jump sampling with a fast-mixing Langevin MCMC sampler. We finish by probing the encoder/decoder of the trained models and confirm UVB $\neq$ VAE.
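
    The empirical Bayes least-squares estimator used here has the closed form $\widehat{x}(y) = y + \sigma^2 \nabla_y \log p(y)$ (the Miyasawa/Tweedie formula). A minimal sketch, assuming a 1-D Gaussian prior so the score of the smoothed density is available analytically (the paper instead parametrizes the energy of $Y$ via a VAE's ELBO):

```python
def bayes_denoise(y, sigma, tau):
    """Empirical Bayes least-squares estimate x_hat(y) = y + sigma^2 * score(y).

    Toy setting: X ~ N(0, tau^2) and Y = X + N(0, sigma^2), so the marginal
    of Y is N(0, tau^2 + sigma^2) with score d/dy log p(y) = -y / (tau^2 + sigma^2).
    """
    score = -y / (tau**2 + sigma**2)
    return y + sigma**2 * score

# Sanity check: the formula must agree with the exact posterior mean
# E[X | Y=y] = y * tau^2 / (tau^2 + sigma^2) in this Gaussian case.
sigma, tau = 1.0, 2.0
for y in (-3.0, 0.5, 4.2):
    assert abs(bayes_denoise(y, sigma, tau) - y * tau**2 / (tau**2 + sigma**2)) < 1e-12
```

    In the Gaussian toy case the estimator reduces to the familiar shrinkage of the posterior mean, which the assertions verify.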

    Comment: Submitted to Journal of Machine Learning Research
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject code 670
    Publishing date 2020-07-29
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  2. Book ; Online: Learning and Inference in Imaginary Noise Models

    Saremi, Saeed

    2020  

    Abstract Inspired by recent developments in learning smoothed densities with empirical Bayes, we study variational autoencoders with a decoder that is tailored for the random variable $Y=X+N(0,\sigma^2 I_d)$. A notion of smoothed variational inference emerges where the smoothing is implicitly enforced by the noise model of the decoder; "implicit", since during training the encoder only sees clean samples. This is the concept of imaginary noise model, where the noise model dictates the functional form of the variational lower bound $\mathcal{L}(\sigma)$, but the noisy data are never seen during learning. The model is named $\sigma$-VAE. We prove that all $\sigma$-VAEs are equivalent to each other via a simple $\beta$-VAE expansion: $\mathcal{L}(\sigma_2) \equiv \mathcal{L}(\sigma_1,\beta)$, where $\beta=\sigma_2^2/\sigma_1^2$. We prove a similar result for the Laplace distribution in exponential families. Empirically, we report an intriguing power law $\mathcal{D}_{\rm KL} \sim \sigma^{-\nu}$ for the learned models and we study the inference in the $\sigma$-VAE for unseen noisy data. The experiments were performed on MNIST, where we show that quite remarkably the model can make reasonable inferences on extremely noisy samples even though it has not seen any during training. The vanilla VAE completely breaks down in this regime. We finish with a hypothesis (the XYZ hypothesis) on the findings here.
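
    The stated equivalence $\mathcal{L}(\sigma_2) \equiv \mathcal{L}(\sigma_1,\beta)$ with $\beta=\sigma_2^2/\sigma_1^2$ can be checked numerically for a Gaussian decoder, for which the parameter-dependent part of the bound is $-\|x-\mu\|^2/(2\sigma^2) - \beta\,\mathcal{D}_{\rm KL}$. A toy sketch (the reconstruction errors and KL values below are hypothetical placeholders, not outputs of a trained model):

```python
def recon(r, sigma):
    """Parameter-dependent part of a Gaussian decoder log-likelihood:
    -||x - mu||^2 / (2 sigma^2), with r = ||x - mu||^2 (constants dropped)."""
    return -r / (2.0 * sigma**2)

def elbo(r, kl, sigma, beta=1.0):
    """beta-weighted evidence lower bound (additive constants dropped)."""
    return recon(r, sigma) - beta * kl

# Equivalence: beta * L(sigma2) == L(sigma1, beta) with beta = sigma2^2 / sigma1^2,
# so the two objectives share the same maximizers.
sigma1, sigma2 = 0.5, 1.0
beta = sigma2**2 / sigma1**2          # = 4.0
for r, kl in [(0.3, 1.2), (2.0, 0.1), (5.5, 3.3)]:
    assert abs(beta * elbo(r, kl, sigma2) - elbo(r, kl, sigma1, beta)) < 1e-12
```

    The check makes the $\beta$-VAE expansion concrete: rescaling the decoder variance is the same, up to an overall scale and constants, as reweighting the KL term.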
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2020-05-18
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  3. Book ; Online: On approximating $\nabla f$ with neural networks

    Saremi, Saeed

    2019  

    Abstract Consider a feedforward neural network $\psi: \mathbb{R}^d\rightarrow \mathbb{R}^d$ such that $\psi\approx \nabla f$, where $f:\mathbb{R}^d \rightarrow \mathbb{R}$ is a smooth function, therefore $\psi$ must satisfy $\partial_j \psi_i = \partial_i \psi_j$ pointwise. We prove a theorem that a $\psi$ network with more than one hidden layer can only represent one feature in its first hidden layer; this is a dramatic departure from the well-known results for one hidden layer. The proof of the theorem is straightforward, where two backward paths and a weight-tying matrix play the key roles. We then present the alternative, the implicit parametrization, where the neural network is $\phi: \mathbb{R}^d \rightarrow \mathbb{R}$ and $\nabla \phi \approx \nabla f$; in addition, a "soft analysis" of $\nabla \phi$ gives a dual perspective on the theorem. Throughout, we come back to recent probabilistic models that are formulated as $\nabla \phi \approx \nabla f$, and conclude with a critique of denoising autoencoders.
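
    The integrability condition $\partial_j \psi_i = \partial_i \psi_j$ is exactly what the implicit parametrization buys: $\psi = \nabla\phi$ has a symmetric Jacobian by the symmetry of second derivatives. A finite-difference sketch on a toy smooth $\phi$ (an arbitrary example function, not the paper's learned network):

```python
import math

def phi(x):
    """A toy smooth scalar function phi: R^2 -> R."""
    return math.sin(x[0]) * x[1] + 0.5 * (x[0]**2 + x[1]**2)

def grad(f, x, h=1e-5):
    """Central finite-difference gradient of a scalar function."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def jacobian(vf, x, h=1e-4):
    """Central finite-difference Jacobian of a vector field vf: R^d -> R^d."""
    d = len(x)
    J = [[0.0] * d for _ in range(d)]
    for j in range(d):
        xp, xm = list(x), list(x)
        xp[j] += h; xm[j] -= h
        fp, fm = vf(xp), vf(xm)
        for i in range(d):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

# psi = grad(phi) automatically satisfies d_j psi_i == d_i psi_j:
# the Jacobian of a gradient field is symmetric.
J = jacobian(lambda x: grad(phi, x), [0.7, -1.3])
assert abs(J[0][1] - J[1][0]) < 1e-4
```

    An unconstrained network $\psi:\mathbb{R}^d\to\mathbb{R}^d$ has no such guarantee, which is what forces the restrictive weight tying the theorem describes.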

    Comment: 10 pages
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject code 519
    Publishing date 2019-10-28
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  4. Book ; Online: Chain of Log-Concave Markov Chains

    Saremi, Saeed / Park, Ji Won / Bach, Francis

    2023  

    Abstract We introduce a theoretical framework for sampling from unnormalized densities based on a smoothing scheme that uses an isotropic Gaussian kernel with a single fixed noise scale. We prove one can decompose sampling from a density (minimal assumptions made on the density) into a sequence of sampling from log-concave conditional densities via accumulation of noisy measurements with equal noise levels. Our construction is unique in that it keeps track of a history of samples, making it non-Markovian as a whole, but it is lightweight algorithmically as the history only shows up in the form of a running empirical mean of samples. Our sampling algorithm generalizes walk-jump sampling (Saremi & Hyvärinen, 2019). The "walk" phase becomes a (non-Markovian) chain of (log-concave) Markov chains. The "jump" from the accumulated measurements is obtained by empirical Bayes. We study our sampling algorithm quantitatively using the 2-Wasserstein metric and compare it with various Langevin MCMC algorithms. We also report a remarkable capacity of our algorithm to "tunnel" between modes of a distribution.
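
    The accumulation step can be illustrated in isolation: averaging $m$ i.i.d. measurements $y_k = x + N(0,\sigma^2)$ leaves an effective noise level of $\sigma/\sqrt{m}$, which is why conditioning on the running empirical mean progressively sharpens the conditionals. A toy sketch (the fixed $x$, $\sigma$, and trial counts are arbitrary choices):

```python
import random, statistics

random.seed(0)
x, sigma = 2.5, 1.0

def running_mean_error(m, trials=2000):
    """RMS error of the running empirical mean of m noisy measurements of x."""
    errs = []
    for _ in range(trials):
        mean = sum(x + random.gauss(0.0, sigma) for _ in range(m)) / m
        errs.append((mean - x) ** 2)
    return statistics.mean(errs) ** 0.5

# The error should shrink roughly like sigma / sqrt(m).
e1, e16 = running_mean_error(1), running_mean_error(16)
assert e16 < e1 / 2      # well below the single-measurement error
```

    This is only the variance-reduction mechanism; the paper's contribution is showing the resulting conditionals are log-concave, hence easy for Langevin MCMC.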
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning ; Statistics - Computation
    Publishing date 2023-05-30
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  5. Article ; Online: Induction of apoptosis and suppression of Ras gene expression in MCF human breast cancer cells.

    Saremi, Sadegh / Kolahi, Maryam / Tabandeh, Mohammad Reza / Hashemitabar, Mahmoud

    Journal of cancer research and therapeutics

    2022  Volume 18, Issue 4, Page(s) 1052–1060

    Abstract Breast cancer is the leading invasive cancer in women globally. This study evaluated the apoptosis-inducing activity of p-coumaric acid (PCA) on the MCF-7 breast cancer cell line. MCF-7 cells treated with PCA showed decreased cell viability, increased lactate dehydrogenase activity, and caspase-3 activation. Real-time polymerase chain reaction revealed that PCA reduced H-Ras and K-Ras transcript levels in MCF-7 breast cancer cells. In the presence of PCA there was a significant, dose-dependent increase in Bax mRNA levels and in late apoptotic cells, and the relative expression of the anti-apoptotic gene Bcl2 was reduced in treated cells. These results suggest that PCA exhibits anti-cancer properties against MCF-7 cells: it inhibited their growth, with an optimum concentration of 75-150 mM, by reducing Ras expression and inducing apoptosis. PCA could prove valuable in the search for inhibitors of Ras oncogene function, supporting its potential use in the treatment of patients with breast cancer; it is safe and could complement current treatments for the disease.
    MeSH term(s) Apoptosis/genetics ; Breast Neoplasms/drug therapy ; Breast Neoplasms/genetics ; Breast Neoplasms/metabolism ; Caspase 3/metabolism ; Cell Proliferation/genetics ; Female ; Gene Expression ; Genes, ras ; Humans ; Lactate Dehydrogenases/genetics ; MCF-7 Cells ; RNA, Messenger/metabolism ; bcl-2-Associated X Protein/genetics
    Chemical Substances RNA, Messenger ; bcl-2-Associated X Protein ; Lactate Dehydrogenases (EC 1.1.-) ; Caspase 3 (EC 3.4.22.-)
    Language English
    Publishing date 2022-09-16
    Publishing country India
    Document type Journal Article
    ZDB-ID 2187633-2
    ISSN 1998-4138 ; 0973-1482
    ISSN (online) 1998-4138
    ISSN 0973-1482
    DOI 10.4103/jcrt.JCRT_624_20
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Book ; Online: Multimeasurement Generative Models

    Saremi, Saeed / Srivastava, Rupesh Kumar

    2021  

    Abstract We formally map the problem of sampling from an unknown distribution with a density in $\mathbb{R}^d$ to the problem of learning and sampling a smoother density in $\mathbb{R}^{Md}$ obtained by convolution with a fixed factorial kernel: the new density is referred to as M-density and the kernel as multimeasurement noise model (MNM). The M-density in $\mathbb{R}^{Md}$ is smoother than the original density in $\mathbb{R}^d$, easier to learn and sample from, yet for large $M$ the two problems are mathematically equivalent since clean data can be estimated exactly given a multimeasurement noisy observation using the Bayes estimator. To formulate the problem, we derive the Bayes estimator for Poisson and Gaussian MNMs in closed form in terms of the unnormalized M-density. This leads to a simple least-squares objective for learning parametric energy and score functions. We present various parametrization schemes of interest, including one in which studying Gaussian M-densities directly leads to multidenoising autoencoders; this is the first theoretical connection made between denoising autoencoders and empirical Bayes in the literature. Samples in $\mathbb{R}^d$ are obtained by walk-jump sampling (Saremi & Hyvärinen, 2019) via underdamped Langevin MCMC (walk) to sample from the M-density and the multimeasurement Bayes estimation (jump). We study permutation-invariant Gaussian M-densities on the MNIST, CIFAR-10, and FFHQ-256 datasets, and demonstrate the effectiveness of this framework for realizing fast-mixing stable Markov chains in high dimensions.
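
    Walk-jump sampling separates cleanly into the two steps named above: a Langevin "walk" in the smoothed space followed by a Bayes-estimator "jump" back to clean space. A minimal 1-D sketch using overdamped (rather than the paper's underdamped) Langevin and a known Gaussian target, so the analytic score stands in for a learned energy:

```python
import random, statistics

random.seed(1)
tau, sigma = 1.0, 1.0
s2 = tau**2 + sigma**2            # variance of the smoothed marginal of Y

def score(y):
    """Analytic score of the smoothed density: Y ~ N(0, tau^2 + sigma^2)."""
    return -y / s2

def walk_jump(n, step=0.05, thin=20):
    """Walk: overdamped Langevin in Y-space. Jump: empirical Bayes estimate of X."""
    y, out = 0.0, []
    for t in range(n * thin):
        y += step * score(y) + (2.0 * step) ** 0.5 * random.gauss(0.0, 1.0)
        if t % thin == thin - 1:
            out.append(y * tau**2 / s2)   # jump = posterior mean E[X | Y=y]
    return out

xs = walk_jump(4000)
# For this Gaussian toy the jump outputs are ~ N(0, tau^4 / (tau^2 + sigma^2)).
assert abs(statistics.mean(xs)) < 0.15
assert 0.3 < statistics.variance(xs) < 0.8
```

    In the real method the chain runs in $\mathbb{R}^{Md}$ over $M$ coupled measurements and the jump uses the learned multimeasurement Bayes estimator; this toy keeps only the walk/jump structure.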

    Comment: Our code is publicly available at https://github.com/nnaisense/mems
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject code 519
    Publishing date 2021-12-17
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  7. Book ; Online: Provable Robust Classification via Learned Smoothed Densities

    Saremi, Saeed / Srivastava, Rupesh

    2020  

    Abstract Smoothing classifiers and probability density functions with Gaussian kernels appear unrelated, but in this work they are unified for the problem of robust classification. The key building block is approximating the $\textit{energy function}$ of the random variable $Y=X+N(0,\sigma^2 I_d)$ with a neural network, which we use to formulate the problem of robust classification in terms of $\widehat{x}(Y)$, the $\textit{Bayes estimator}$ of $X$ given the noisy measurements $Y$. We introduce $\textit{empirical Bayes smoothed classifiers}$ within the framework of $\textit{randomized smoothing}$ and study them theoretically for the two-class linear classifier, where we show one can improve their robustness above $\textit{the margin}$. We test the theory on MNIST and show that with a learned smoothed energy function and a linear classifier we can achieve provable $\ell_2$ robust accuracies that are competitive with empirical defenses. This setup can be significantly improved by $\textit{learning}$ empirical Bayes smoothed classifiers with adversarial training, and on MNIST we show that we can achieve provable robust accuracies higher than the state-of-the-art empirical defenses in a range of radii. We discuss some fundamental challenges of randomized smoothing based on a geometric interpretation due to the concentration of Gaussians in high dimensions, and we finish the paper with a proposal for using walk-jump sampling, itself based on learned smoothed densities, for robust classification.
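
    Randomized smoothing itself is easy to state: the smoothed classifier returns the class the base classifier predicts most often under Gaussian perturbation of the input. A Monte Carlo sketch with a toy 1-D linear base classifier (the threshold and noise level are arbitrary illustrative choices):

```python
import random
from collections import Counter

random.seed(0)

def base_classifier(x):
    """Toy two-class linear classifier on R: class 1 iff x > 0."""
    return 1 if x > 0 else 0

def smoothed_classifier(x, sigma=0.5, n=5000):
    """Majority vote of the base classifier under N(0, sigma^2) input noise."""
    votes = Counter(base_classifier(x + random.gauss(0.0, sigma)) for _ in range(n))
    return votes.most_common(1)[0][0]

# Far from the decision boundary the smoothed and base classifiers agree;
# the margin-to-noise ratio |x| / sigma governs the certifiable l2 radius.
assert smoothed_classifier(1.0) == 1
assert smoothed_classifier(-1.0) == 0
```

    The paper's contribution replaces the raw input $x$ with the empirical Bayes denoised estimate $\widehat{x}(Y)$ before classification, which is what lifts robustness above the margin.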

    Comment: 24 pages, 6 figures
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning ; Mathematics - Optimization and Control
    Subject code 006 ; 519
    Publishing date 2020-05-09
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  8. Article: Assessment of Standard Operating Procedures (SOPs) Preparing Hygienic Condition in the Blood Donation Centers during the Outbreak of COVID-19.

    Mohammadi, Saeed / Tabatabaei Yazdi, Seyed Morteza / Balagholi, Sahar / Saremi, Saeid / Dabbaghi, Rasul / Ferdowsi, Shirin / Eshghi, Peyman

    International journal of hematology-oncology and stem cell research

    2023  Volume 17, Issue 3, Page(s) 167–176

    Abstract Background:
    Language English
    Publishing date 2023-09-08
    Publishing country Iran
    Document type Journal Article
    ZDB-ID 2652853-8
    ISSN 2008-2207 ; 2008-3009
    ISSN (online) 2008-2207
    ISSN 2008-3009
    DOI 10.18502/ijhoscr.v17i3.13306
    Database MEDical Literature Analysis and Retrieval System OnLINE

  9. Article: Hydrophilic Modification of Dialysis Membranes Sustains Middle Molecule Removal and Filtration Characteristics.

    Zawada, Adam M / Emal, Karlee / Förster, Eva / Saremi, Saeedeh / Delinski, Dirk / Theis, Lukas / Küng, Florian / Xie, Wenhao / Werner, Joanie / Stauss-Grabo, Manuela / Faust, Matthias / Boyington, Skyler / Kennedy, James P

    Membranes

    2024  Volume 14, Issue 4

    Abstract While efficient removal of uremic toxins and accumulated water is pivotal for the well-being of dialysis patients, protein adsorption to the dialyzer membrane reduces the performance of a dialyzer. Hydrophilic membrane modification with polyvinylpyrrolidone (PVP) has been shown to reduce protein adsorption and to stabilize membrane permeability. In this study we compared middle molecule clearance and filtration performance of nine polysulfone-, polyethersulfone-, and cellulose-based dialyzers over time. Protein adsorption was simulated in recirculation experiments, while β2-microglobulin clearance as well as transmembrane pressure (TMP) and filtrate flow were determined over time. The results of this study showed that β2-microglobulin clearance (-7.2 mL/min/m
    Language English
    Publishing date 2024-04-03
    Publishing country Switzerland
    Document type Journal Article
    ZDB-ID 2614641-1
    ISSN 2077-0375
    ISSN 2077-0375
    DOI 10.3390/membranes14040083
    Database MEDical Literature Analysis and Retrieval System OnLINE

  10. Book ; Online: Automatic design of novel potential 3CL$^{\text{pro}}$ and PL$^{\text{pro}}$ inhibitors

    Atkinson, Timothy / Saremi, Saeed / Gomez, Faustino / Masci, Jonathan

    2021  

    Abstract: With the goal of designing novel inhibitors for SARS-CoV-1 and SARS-CoV-2, we propose the general molecule optimization framework, Molecular Neural Assay Search (MONAS), consisting of three components: a property predictor which identifies molecules with ...

    Abstract With the goal of designing novel inhibitors for SARS-CoV-1 and SARS-CoV-2, we propose the general molecule optimization framework, Molecular Neural Assay Search (MONAS), consisting of three components: a property predictor which identifies molecules with specific desirable properties, an energy model which approximates the statistical similarity of a given molecule to known training molecules, and a molecule search method. In this work, these components are instantiated with graph neural networks (GNNs), Deep Energy Estimator Networks (DEEN) and Monte Carlo tree search (MCTS), respectively. This implementation is used to identify 120K molecules (out of 40-million explored) which the GNN determined to be likely SARS-CoV-1 inhibitors, and, at the same time, are statistically close to the dataset used to train the GNN.
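
    The three-component design can be sketched abstractly: a candidate molecule survives only if the property predictor scores it highly and the energy model judges it statistically close to the training data, and the search method proposes the next candidates from the survivors. A schematic sketch with stand-in scoring functions (all names, fields, and thresholds here are hypothetical illustrations, not the MONAS implementation):

```python
def select(candidates, predict, energy, p_thresh, e_thresh):
    """Keep candidates predicted active AND statistically close to training data."""
    return [c for c in candidates
            if predict(c) >= p_thresh and energy(c) <= e_thresh]

# Stand-in models: score by a fake 'activity' and 'distance to training set'.
predict = lambda c: c["activity"]
energy = lambda c: c["distance"]
pool = [{"id": 1, "activity": 0.90, "distance": 0.2},
        {"id": 2, "activity": 0.95, "distance": 3.0},   # active but off-distribution
        {"id": 3, "activity": 0.10, "distance": 0.1}]   # in-distribution but inactive
kept = select(pool, predict, energy, p_thresh=0.5, e_thresh=1.0)
assert [c["id"] for c in kept] == [1]
```

    In MONAS the predictor is a GNN, the energy model is DEEN, and the proposal mechanism is MCTS over molecule space; the filter above only shows how the two scores jointly gate acceptance.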
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Quantitative Biology - Quantitative Methods
    Subject code 541 ; 006
    Publishing date 2021-01-28
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
