LIVIVO - The Search Portal for Life Sciences

Your recent searches

  1. AU="Saremi, S"
  2. AU="Heller, Regine"
  3. AU="He, Dongyuan"
  4. AU=Beer Arno
  5. AU="Wirth, Steffen"
  6. AU="Renza, Louis A."
  7. AU="Maleckis, Matiss"
  8. AU="Sridhar, Nikitha"
  9. AU=Daley G Q
  10. AU=Kawasaki Hideya
  11. AU="Pape, Terry D"
  12. AU="Mungra, Neelakshi"
  13. AU="Gurgu, Mihai"
  14. AU=Duan Surong
  15. AU="Kasmi, Yassine"
  16. AU="Katori, Machiko"
  17. AU="Richter, Susanna"
  18. AU="Oladipo, Aishat T"
  19. AU="Arango, Alissa"
  20. AU=Manjili Rose H
  21. AU=Chen Hongtao
  22. AU="Soto Alsar, Javier"
  23. AU="Eric Woode"
  24. AU="Zybina O"
  25. AU="Reynolds, Cecil R"
  26. AU="Shahidul Khan"
  27. AU="Vasisth, Rashi"
  28. AU="Raju Mandal"
  29. AU="Owen, Noel L"
  30. AU=Liu Xiaolei
  31. AU="Fırıncıoğluları, Ali"
  32. AU="Piepel, Christiane"
  33. AU="Saremi, Saeid"
  34. AU="Dunxian She"

Search results

Results 1 - 10 of 212 in total

  1. Book ; Online: Unnormalized Variational Bayes

    Saremi, Saeed

    2020  

    Abstract: We unify empirical Bayes and variational Bayes for approximating unnormalized densities. This framework, named unnormalized variational Bayes (UVB), is based on formulating a latent variable model for the random variable $Y=X+N(0,\sigma^2 I_d)$ and using the evidence lower bound (ELBO), computed by a variational autoencoder, as a parametrization of the energy function of $Y$, which is then used to estimate $X$ with the empirical Bayes least-squares estimator. In this intriguing setup, the $\textit{gradient}$ of the ELBO with respect to noisy inputs plays the central role in learning the energy function. Empirically, we demonstrate that UVB has a higher capacity to approximate energy functions than the parametrization with MLPs as done in neural empirical Bayes (DEEN). We especially showcase $\sigma=1$, where the differences between UVB and DEEN become visible and qualitative in the denoising experiments. For this high level of noise, the distribution of $Y$ is heavily smoothed, and we demonstrate that one can traverse all MNIST classes in a variety of styles in a single run, without a restart, via walk-jump sampling with a fast-mixing Langevin MCMC sampler. We finish by probing the encoder/decoder of the trained models and confirm UVB $\neq$ VAE.

    Comment: Submitted to Journal of Machine Learning Research
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject/Category (code) 670
    Publication date 2020-07-29
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences selection)

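    A reading aid for the estimator this abstract leans on: a minimal PyTorch sketch of the empirical Bayes least-squares "jump" (Tweedie's formula) applied to a learned energy. The names `energy` and `sigma` are illustrative assumptions, not the paper's code; all the sketch assumes is a network with energy(y) ≈ -log p(y) for the smoothed variable $Y$.

        import torch

        def least_squares_jump(energy, y, sigma):
            # energy(y) ~ -log p(y), so grad_E(y) = -grad_y log p(y)
            y = y.detach().requires_grad_(True)
            (grad_E,) = torch.autograd.grad(energy(y).sum(), y)
            # empirical Bayes least squares: xhat(y) = y + sigma^2 * grad_y log p(y)
            return y - sigma**2 * grad_E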

  2. Book ; Online: Learning and Inference in Imaginary Noise Models

    Saremi, Saeed

    2020  

    Abstract: Inspired by recent developments in learning smoothed densities with empirical Bayes, we study variational autoencoders with a decoder that is tailored for the random variable $Y=X+N(0,\sigma^2 I_d)$. A notion of smoothed variational inference emerges where the smoothing is implicitly enforced by the noise model of the decoder; "implicit", since during training the encoder only sees clean samples. This is the concept of imaginary noise model, where the noise model dictates the functional form of the variational lower bound $\mathcal{L}(\sigma)$, but the noisy data are never seen during learning. The model is named $\sigma$-VAE. We prove that all $\sigma$-VAEs are equivalent to each other via a simple $\beta$-VAE expansion: $\mathcal{L}(\sigma_2) \equiv \mathcal{L}(\sigma_1,\beta)$, where $\beta=\sigma_2^2/\sigma_1^2$. We prove a similar result for the Laplace distribution in exponential families. Empirically, we report an intriguing power law $\mathcal{D}_{\rm KL} \sim \sigma^{-\nu}$ for the learned models and we study the inference in the $\sigma$-VAE for unseen noisy data. The experiments were performed on MNIST, where we show that quite remarkably the model can make reasonable inferences on extremely noisy samples even though it has not seen any during training. The vanilla VAE completely breaks down in this regime. We finish with a hypothesis (the XYZ hypothesis) on the findings here.
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject/Category (code) 006
    Publication date 2020-05-18
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences selection)

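    The stated equivalence $\mathcal{L}(\sigma_2) \equiv \mathcal{L}(\sigma_1,\beta)$ with $\beta=\sigma_2^2/\sigma_1^2$ can be checked numerically for a Gaussian decoder, where the negative ELBO decomposes into a reconstruction term scaled by $1/(2\sigma^2)$ plus a KL term. A hypothetical check with made-up loss values (the two losses agree up to an overall positive factor, hence share the same optimum):

        # illustrative per-sample loss terms; not from any experiment
        recon, kl = 3.7, 0.9
        s1, s2 = 0.5, 1.0
        beta = s2**2 / s1**2                         # = 4.0

        loss_s2 = recon / (2 * s2**2) + kl           # plain loss at sigma2
        loss_s1_b = recon / (2 * s1**2) + beta * kl  # beta-VAE loss at sigma1

        # equal up to the positive factor beta => same minimizers
        assert abs(beta * loss_s2 - loss_s1_b) < 1e-12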

  3. Book ; Online: On approximating $\nabla f$ with neural networks

    Saremi, Saeed

    2019  

    Abstract: Consider a feedforward neural network $\psi: \mathbb{R}^d\rightarrow \mathbb{R}^d$ such that $\psi\approx \nabla f$, where $f:\mathbb{R}^d \rightarrow \mathbb{R}$ is a smooth function; $\psi$ must therefore satisfy $\partial_j \psi_i = \partial_i \psi_j$ pointwise. We prove a theorem that a $\psi$ network with more than one hidden layer can only represent one feature in its first hidden layer; this is a dramatic departure from the well-known results for one hidden layer. The proof of the theorem is straightforward, where two backward paths and a weight-tying matrix play the key roles. We then present the alternative, the implicit parametrization, where the neural network is $\phi: \mathbb{R}^d \rightarrow \mathbb{R}$ and $\nabla \phi \approx \nabla f$; in addition, a "soft analysis" of $\nabla \phi$ gives a dual perspective on the theorem. Throughout, we come back to recent probabilistic models that are formulated as $\nabla \phi \approx \nabla f$, and conclude with a critique of denoising autoencoders.

    Comment: 10 pages
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject/Category (code) 519
    Publication date 2019-10-28
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences selection)

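    The implicit parametrization in the abstract is easy to realize with automatic differentiation: train a scalar network $\phi$ and use $\nabla\phi$, which satisfies the symmetry constraint $\partial_j (\nabla\phi)_i = \partial_i (\nabla\phi)_j$ by construction. A minimal PyTorch sketch; the architecture is an arbitrary stand-in, not the paper's:

        import torch
        import torch.nn as nn

        phi = nn.Sequential(nn.Linear(2, 64), nn.Softplus(), nn.Linear(64, 1))

        def grad_phi(x):
            # returns grad_x phi(x); create_graph=True keeps it differentiable,
            # so losses on grad_phi can be trained by double backpropagation
            x = x.detach().requires_grad_(True)
            (g,) = torch.autograd.grad(phi(x).sum(), x, create_graph=True)
            return g

        print(grad_phi(torch.randn(8, 2)).shape)  # torch.Size([8, 2])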

  4. Book ; Online: Chain of Log-Concave Markov Chains

    Saremi, Saeed / Park, Ji Won / Bach, Francis

    2023  

    Abstract: We introduce a theoretical framework for sampling from unnormalized densities based on a smoothing scheme that uses an isotropic Gaussian kernel with a single fixed noise scale. We prove one can decompose sampling from a density (minimal assumptions made on the density) into a sequence of sampling from log-concave conditional densities via accumulation of noisy measurements with equal noise levels. Our construction is unique in that it keeps track of a history of samples, making it non-Markovian as a whole, but it is lightweight algorithmically as the history only shows up in the form of a running empirical mean of samples. Our sampling algorithm generalizes walk-jump sampling (Saremi & Hyvärinen, 2019). The "walk" phase becomes a (non-Markovian) chain of (log-concave) Markov chains. The "jump" from the accumulated measurements is obtained by empirical Bayes. We study our sampling algorithm quantitatively using the 2-Wasserstein metric and compare it with various Langevin MCMC algorithms. We also report a remarkable capacity of our algorithm to "tunnel" between modes of a distribution.
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning ; Statistics - Computation
    Publication date 2023-05-30
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences selection)

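    One concrete detail from this abstract that a sketch can make plain: although the sampler is non-Markovian as a whole, the history enters only through a running empirical mean, so the bookkeeping stays constant-memory however long the chain runs. Below, `sample_measurement` is a stand-in callable, not the paper's actual log-concave conditional construction:

        import torch

        def accumulate(sample_measurement, d, steps):
            mean = torch.zeros(d)
            for k in range(1, steps + 1):
                y_k = sample_measurement(mean, k)  # may condition on the mean
                mean += (y_k - mean) / k           # running empirical mean
            return mean  # the final empirical Bayes "jump" denoises this

        # toy stand-in: independent unit-noise measurements of x = 0
        print(accumulate(lambda mean, k: torch.randn(3), 3, 100))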

  5. Book ; Online: Multimeasurement Generative Models

    Saremi, Saeed / Srivastava, Rupesh Kumar

    2021  

    Abstract: We formally map the problem of sampling from an unknown distribution with a density in $\mathbb{R}^d$ to the problem of learning and sampling a smoother density in $\mathbb{R}^{Md}$ obtained by convolution with a fixed factorial kernel: the new density is referred to as M-density and the kernel as multimeasurement noise model (MNM). The M-density in $\mathbb{R}^{Md}$ is smoother than the original density in $\mathbb{R}^d$, easier to learn and sample from, yet for large $M$ the two problems are mathematically equivalent since clean data can be estimated exactly given a multimeasurement noisy observation using the Bayes estimator. To formulate the problem, we derive the Bayes estimator for Poisson and Gaussian MNMs in closed form in terms of the unnormalized M-density. This leads to a simple least-squares objective for learning parametric energy and score functions. We present various parametrization schemes of interest, including one in which studying Gaussian M-densities directly leads to multidenoising autoencoders; this is the first theoretical connection made between denoising autoencoders and empirical Bayes in the literature. Samples in $\mathbb{R}^d$ are obtained by walk-jump sampling (Saremi & Hyvärinen, 2019) via underdamped Langevin MCMC (walk) to sample from the M-density and the multimeasurement Bayes estimation (jump). We study permutation-invariant Gaussian M-densities on the MNIST, CIFAR-10, and FFHQ-256 datasets, and demonstrate the effectiveness of this framework for realizing fast-mixing stable Markov chains in high dimensions.

    Comment: Our code is publicly available at https://github.com/nnaisense/mems
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject/Category (code) 519
    Publication date 2021-12-17
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences selection)

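    A sketch of the multimeasurement "jump", assuming a learned Gaussian M-density energy with energy(ys) ≈ -log p(y_1, ..., y_M): Tweedie's formula holds per measurement, x̂ = y_m + σ²∇_{y_m} log p(y), and the $M$ identical identities can be averaged. `energy`, `ys` (shape [M, d]) and `sigma` are illustrative names, not the released code (see the URL in the comment above for that):

        import torch

        def multimeasurement_jump(energy, ys, sigma):
            # ys: [M, d] noisy measurements; energy(ys) ~ -log p(y_1..y_M)
            ys = ys.detach().requires_grad_(True)
            (grad_E,) = torch.autograd.grad(energy(ys).sum(), ys)  # [M, d]
            xhat_per_m = ys - sigma**2 * grad_E  # Tweedie, per measurement
            return xhat_per_m.mean(dim=0)        # average the M estimates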

  6. Article ; Online: Induction of apoptosis and suppression of Ras gene expression in MCF human breast cancer cells.

    Saremi, Sadegh / Kolahi, Maryam / Tabandeh, Mohammad Reza / Hashemitabar, Mahmoud

    Journal of cancer research and therapeutics

    2022  Volume 18, Issue 4, Page(s) 1052–1060

    Abstract: Breast cancer is the leading invasive cancer in women globally. This study aimed at evaluating the pro-apoptotic activity of p-coumaric acid (PCA) on the MCF-7 breast cancer cell line. In experiments in which the MCF-7 cell line was treated with PCA, the cells showed decreased viability, increased lactate dehydrogenase activity, and caspase-3 activation. The results were evaluated with real-time polymerase chain reaction, which revealed that PCA reduced the amount of H-Ras and K-Ras transcript in MCF-7 breast cancer cells. In the presence of PCA there was a significant, dose-dependent increase in the levels of Bax mRNA and in late apoptotic cells. PCA also retarded the relative expression of the anti-apoptotic gene Bcl-2 in treated cells. The results suggest that PCA exhibits anti-cancer properties against MCF-7 cells and inhibited their growth; the optimum concentration of PCA was 75-150 mM. PCA can inhibit the growth of MCF-7 cells by reducing Ras expression and inducing apoptosis. Our results suggest that PCA could prove valuable in the search for possible inhibitors of Ras oncogene functionality and lend further support to its potential utilization in the treatment of patients with breast cancer. PCA is safe and could complement current treatments employed for the disease.
    MeSH term(s) Apoptosis/genetics ; Breast Neoplasms/drug therapy ; Breast Neoplasms/genetics ; Breast Neoplasms/metabolism ; Caspase 3/metabolism ; Cell Proliferation/genetics ; Female ; Gene Expression ; Genes, ras ; Humans ; Lactate Dehydrogenases/genetics ; MCF-7 Cells ; RNA, Messenger/metabolism ; bcl-2-Associated X Protein/genetics
    Chemical substances RNA, Messenger ; bcl-2-Associated X Protein ; Lactate Dehydrogenases (EC 1.1.-) ; Caspase 3 (EC 3.4.22.-)
    Language English
    Publication date 2022-09-16
    Country of publication India
    Document type Journal Article
    ZDB-ID 2187633-2
    ISSN (online) 1998-4138
    ISSN 0973-1482
    DOI 10.4103/jcrt.JCRT_624_20
    Data source MEDical Literature Analysis and Retrieval System OnLINE


  7. Book ; Online: Provable Robust Classification via Learned Smoothed Densities

    Saremi, Saeed / Srivastava, Rupesh

    2020  

    Abstract: Smoothing classifiers and probability density functions with Gaussian kernels appear unrelated, but in this work, they are unified for the problem of robust classification. The key building block is approximating the $\textit{energy function}$ of the random variable $Y=X+N(0,\sigma^2 I_d)$ with a neural network which we use to formulate the problem of robust classification in terms of $\widehat{x}(Y)$, the $\textit{Bayes estimator}$ of $X$ given the noisy measurements $Y$. We introduce $\textit{empirical Bayes smoothed classifiers}$ within the framework of $\textit{randomized smoothing}$ and study it theoretically for the two-class linear classifier, where we show one can improve their robustness above $\textit{the margin}$. We test the theory on MNIST and we show that with a learned smoothed energy function and a linear classifier we can achieve provable $\ell_2$ robust accuracies that are competitive with empirical defenses. This setup can be significantly improved by $\textit{learning}$ empirical Bayes smoothed classifiers with adversarial training and on MNIST we show that we can achieve provable robust accuracies higher than the state-of-the-art empirical defenses in a range of radii. We discuss some fundamental challenges of randomized smoothing based on a geometric interpretation due to concentration of Gaussians in high dimensions, and we finish the paper with a proposal for using walk-jump sampling, itself based on learned smoothed densities, for robust classification.

    Comment: 24 pages, 6 figures
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning ; Mathematics - Optimization and Control
    Subject/Category (code) 006 ; 519
    Publication date 2020-05-09
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences selection)

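    For orientation, a generic randomized-smoothing prediction in the framework this abstract builds on: classify many Gaussian perturbations of the input and take a majority vote. This is the standard construction, not the paper's empirical Bayes smoothed classifier, which additionally denoises with the learned energy function before classifying; `base_classifier` returning logits is an assumption:

        import torch

        def smoothed_predict(base_classifier, x, sigma, n=1000, num_classes=10):
            counts = torch.zeros(num_classes)
            for _ in range(n):
                noisy = x + sigma * torch.randn_like(x)
                counts[base_classifier(noisy).argmax()] += 1
            return int(counts.argmax())  # majority class of the smoothed vote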

  8. Article: Assessment of Standard Operating Procedures (SOPs) Preparing Hygienic Condition in the Blood Donation Centers during the Outbreak of COVID-19.

    Mohammadi, Saeed / Tabatabaei Yazdi, Seyed Morteza / Balagholi, Sahar / Saremi, Saeid / Dabbaghi, Rasul / Ferdowsi, Shirin / Eshghi, Peyman

    International journal of hematology-oncology and stem cell research

    2023  Volume 17, Issue 3, Page(s) 167–176

    Abstract: Background: ...
    Language English
    Publication date 2023-09-08
    Country of publication Iran
    Document type Journal Article
    ZDB-ID 2652853-8
    ISSN (online) 2008-2207
    ISSN 2008-3009
    DOI 10.18502/ijhoscr.v17i3.13306
    Data source MEDical Literature Analysis and Retrieval System OnLINE


  9. Book ; Online: Automatic design of novel potential 3CL$^{\text{pro}}$ and PL$^{\text{pro}}$ inhibitors

    Atkinson, Timothy / Saremi, Saeed / Gomez, Faustino / Masci, Jonathan

    2021  

    Abstract: With the goal of designing novel inhibitors for SARS-CoV-1 and SARS-CoV-2, we propose the general molecule optimization framework, Molecular Neural Assay Search (MONAS), consisting of three components: a property predictor which identifies molecules with specific desirable properties, an energy model which approximates the statistical similarity of a given molecule to known training molecules, and a molecule search method. In this work, these components are instantiated with graph neural networks (GNNs), Deep Energy Estimator Networks (DEEN) and Monte Carlo tree search (MCTS), respectively. This implementation is used to identify 120K molecules (out of 40 million explored) which the GNN determined to be likely SARS-CoV-1 inhibitors, and, at the same time, are statistically close to the dataset used to train the GNN.
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Quantitative Biology - Quantitative Methods
    Subject/Category (code) 541 ; 006
    Publication date 2021-01-28
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences selection)

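    The three MONAS components compose in an easily stated way; the sketch below is an illustrative wiring with stand-in callables and hypothetical thresholds, nothing more (the real system uses a GNN predictor, a DEEN energy model, and MCTS for the search):

        def is_candidate(mol, predictor, energy, tau_p=0.5, tau_e=0.0):
            # keep molecules the property predictor flags as likely inhibitors
            # AND that the energy model scores as close to the training data
            return predictor(mol) > tau_p and energy(mol) < tau_e

        print(is_candidate("CCO", predictor=lambda m: 0.9, energy=lambda m: -1.2))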

  10. Book ; Online: Neural Empirical Bayes

    Saremi, Saeed / Hyvärinen, Aapo

    2019  

    Abstract: We formulate a novel framework that unifies kernel density estimation and empirical Bayes, where we address a broad set of problems in unsupervised learning with a geometric interpretation rooted in the concentration of measure phenomenon. We start with energy estimation based on a denoising objective which recovers the original/clean data X from its measured/noisy version Y with the empirical Bayes least-squares estimator. The setup is rooted in kernel density estimation, but the log-pdf in Y is parametrized with a neural network, and crucially, the learning objective is derived for any level of noise/kernel bandwidth. Learning is efficient with double backpropagation and stochastic gradient descent. An elegant physical picture emerges of an interacting system of high-dimensional spheres around each data point, together with a globally defined probability flow field. The picture is powerful: it leads to a novel sampling algorithm, a new notion of associative memory, and it is instrumental in designing experiments. We start with extreme denoising experiments. Walk-jump sampling is defined by Langevin MCMC walks in Y, along with asynchronous empirical Bayes jumps to X. Robbins associative memory is defined by a deterministic flow to attractors of the learned probability flow field. Finally, we observe the emergence of remarkably rich creative modes in the regime of highly overlapping spheres.

    Comment: 23 pages, 9 figures
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Subject/Category (code) 005
    Publication date 2019-03-06
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences selection)

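    To make the denoising objective concrete: a minimal PyTorch sketch of the least-squares loss described above, assuming an energy network `E` with E(y) ≈ -log p(y). The gradient-of-a-gradient below is the double backpropagation the abstract mentions. A sketch under those assumptions, not the paper's code:

        import torch

        def neb_loss(E, x, sigma):
            y = x + sigma * torch.randn_like(x)  # Y = X + N(0, sigma^2 I)
            y = y.requires_grad_(True)
            (grad_E,) = torch.autograd.grad(E(y).sum(), y, create_graph=True)
            xhat = y - sigma**2 * grad_E         # empirical Bayes estimate of X
            return ((x - xhat) ** 2).sum(dim=-1).mean()

        # loss.backward() then differentiates through grad_E: double backprop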
