LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 10 of 17

  1. Book ; Online: Infinite Lewis Weights in Spectral Graph Theory

    Suliman, Amit / Weinstein, Omri

    2023  

    Abstract We study the spectral implications of re-weighting a graph by the $\ell_\infty$-Lewis weights of its edges. Our main motivation is the ER-Minimization problem (Saberi et al., SIAM'08): Given an undirected graph $G$, the goal is to find positive normalized edge-weights $w\in \mathbb{R}_+^m$ which minimize the sum of pairwise \emph{effective-resistances} of $G_w$ (Kirchhoff's index). By contrast, $\ell_\infty$-Lewis weights minimize the \emph{maximum} effective-resistance of \emph{edges}, but are much cheaper to approximate, especially for Laplacians. With this algorithmic motivation, we study the ER-approximation ratio obtained by Lewis weights. Our first main result is that $\ell_\infty$-Lewis weights provide a constant ($\approx 3.12$) approximation for ER-minimization on \emph{trees}. The proof introduces a new technique, a local polarization process for effective-resistances ($\ell_2$-congestion) on trees, which is of independent interest in electrical network analysis. For general graphs, we prove an upper bound $\alpha(G)$ on the approximation ratio obtained by Lewis weights, which is always $\leq \min\{ \text{diam}(G), \kappa(L_{w_\infty})\}$, where $\kappa$ is the condition number of the weighted Laplacian. All our approximation algorithms run in \emph{input-sparsity} time $\tilde{O}(m)$, a major improvement over Saberi et al.'s $O(m^{3.5})$ SDP for exact ER-minimization. Finally, we demonstrate the favorable effects of $\ell_\infty$-LW reweighting on the \emph{spectral-gap} of graphs and on their \emph{spectral-thinness} (Anari and Gharan, 2015). En-route to our results, we prove a weighted analogue of Mohar's classical bound on $\lambda_2(G)$, and provide a new characterization of leverage-scores of a matrix, as the gradient (w.r.t weights) of the volume of the enclosing ellipsoid.
    Keywords Computer Science - Data Structures and Algorithms ; F.2
    Subject code 511
    Publishing date 2023-02-12
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
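
    As a concrete illustration of the objective in this abstract, the sketch below computes pairwise effective resistances and the Kirchhoff index from the Laplacian pseudoinverse of a toy weighted path graph. It is a brute-force reference computation in numpy, not the paper's input-sparsity-time algorithm, and all helper names and parameters are invented for the example.

```python
import numpy as np

def weighted_laplacian(n, edges, w):
    """Weighted graph Laplacian L = D - W for edge list [(u, v), ...] with weights w."""
    L = np.zeros((n, n))
    for (u, v), wi in zip(edges, w):
        L[u, u] += wi; L[v, v] += wi
        L[u, v] -= wi; L[v, u] -= wi
    return L

def effective_resistances(L):
    """All pairwise effective resistances: R[u, v] = (e_u - e_v)^T L^+ (e_u - e_v)."""
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# Toy instance: a 4-vertex path with uniform, normalized edge weights.
edges = [(0, 1), (1, 2), (2, 3)]
w = np.full(len(edges), 1.0 / len(edges))            # positive normalized edge weights
R = effective_resistances(weighted_laplacian(4, edges, w))

kirchhoff_index = R[np.triu_indices(4, k=1)].sum()   # sum of pairwise ERs (ER-minimization objective)
max_edge_er = max(R[u, v] for (u, v) in edges)       # max edge ER (what l_inf-Lewis weights control)
print(kirchhoff_index, max_edge_er)
```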

  2. Book ; Online: Quartic Samples Suffice for Fourier Interpolation

    Song, Zhao / Sun, Baocheng / Weinstein, Omri / Zhang, Ruizhe

    2022  

    Abstract We study the problem of interpolating a noisy Fourier-sparse signal in the time duration $[0, T]$ from noisy samples in the same range, where the ground truth signal can be any $k$-Fourier-sparse signal with band-limit $[-F, F]$. Our main result is an efficient Fourier Interpolation algorithm that improves the previous best algorithm by [Chen, Kane, Price, and Song, FOCS 2016] in the following three aspects: $\bullet$ The sample complexity is improved from $\widetilde{O}(k^{51})$ to $\widetilde{O}(k^{4})$. $\bullet$ The time complexity is improved from $ \widetilde{O}(k^{10\omega+40})$ to $\widetilde{O}(k^{4 \omega})$. $\bullet$ The output sparsity is improved from $\widetilde{O}(k^{10})$ to $\widetilde{O}(k^{4})$. Here, $\omega$ denotes the exponent of fast matrix multiplication. The state-of-the-art sample complexity of this problem is $\sim k^4$, but was only known to be achieved by an *exponential-time* algorithm. Our algorithm uses the same number of samples but has a polynomial runtime, laying the groundwork for an efficient Fourier Interpolation algorithm. The centerpiece of our algorithm is a new sufficient condition for the frequency estimation task -- a high signal-to-noise (SNR) band condition -- which allows for efficient and accurate signal reconstruction. Based on this condition together with a new structural decomposition of Fourier signals (Signal Equivalent Method), we design a cheap algorithm for estimating each "significant" frequency within a narrow range, which is then combined with a signal estimation algorithm into a new Fourier Interpolation framework to reconstruct the ground-truth signal.
    Keywords Computer Science - Data Structures and Algorithms
    Subject code 518
    Publishing date 2022-10-22
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
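
    For context on the interpolation task, the sketch below builds a toy $k$-Fourier-sparse signal, samples it noisily on $[0, T]$, and fits the coefficients by least squares once the frequencies are assumed known. The parameters are illustrative assumptions; the paper's contribution is estimating the frequencies themselves from roughly $\widetilde{O}(k^4)$ samples in polynomial time, which this snippet does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy k-Fourier-sparse signal x(t) = sum_j v_j * exp(2*pi*i*f_j*t) with
# band-limit [-F, F], observed through noisy samples on [0, T].
k, F, T, m = 3, 10.0, 1.0, 200
freqs = rng.uniform(-F, F, size=k)                    # ground-truth tones
coefs = rng.standard_normal(k) + 1j * rng.standard_normal(k)
t = rng.uniform(0.0, T, size=m)
A = np.exp(2j * np.pi * np.outer(t, freqs))           # Fourier design matrix
y = A @ coefs + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# With the k frequencies (approximately) known, recovering the coefficients is
# a small least-squares problem; frequency estimation is the hard part.
coefs_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.abs(coefs - coefs_hat).max())
```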

  3. Book ; Online: The Complexity of Dynamic Least-Squares Regression

    Jiang, Shunhua / Peng, Binghui / Weinstein, Omri

    2022  

    Abstract We settle the complexity of dynamic least-squares regression (LSR), where rows and labels $(\mathbf{A}^{(t)}, \mathbf{b}^{(t)})$ can be adaptively inserted and/or deleted, and the goal is to efficiently maintain an $\epsilon$-approximate solution to $\min_{\mathbf{x}^{(t)}} \| \mathbf{A}^{(t)} \mathbf{x}^{(t)} - \mathbf{b}^{(t)} \|_2$ for all $t\in [T]$. We prove sharp separations ($d^{2-o(1)}$ vs. $\sim d$) between the amortized update time of: (i) Fully vs. Partially dynamic $0.01$-LSR; (ii) High vs. low-accuracy LSR in the partially-dynamic (insertion-only) setting. Our lower bounds follow from a gap-amplification reduction -- reminiscent of iterative refinement -- from the exact version of the Online Matrix Vector Conjecture (OMv) [HKNS15], to constant approximate OMv over the reals, where the $i$-th online product $\mathbf{H}\mathbf{v}^{(i)}$ only needs to be computed to $0.1$-relative error. All previous fine-grained reductions from OMv to its approximate versions only show hardness for inverse polynomial approximation $\epsilon = n^{-\omega(1)}$ (additive or multiplicative). This result is of independent interest in fine-grained complexity and for the investigation of the OMv Conjecture, which is still widely open.
    Keywords Computer Science - Data Structures and Algorithms ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2022-01-01
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
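
    To ground the problem statement, the sketch below maintains an exact least-squares solution under row insertions by keeping the normal equations, i.e. the textbook partially dynamic, high-accuracy baseline with $O(d^2)$ work per update plus a solve. It is not the paper's algorithm or lower-bound construction; the class and parameter names are invented for the example.

```python
import numpy as np

class IncrementalLSR:
    """Maintain the exact solution of min_x ||A x - b||_2 under row insertions
    by keeping A^T A and A^T b (textbook baseline, not the paper's method)."""

    def __init__(self, d):
        self.AtA = np.zeros((d, d))
        self.Atb = np.zeros(d)

    def insert(self, a, beta):
        """Append row a with label beta: O(d^2) update of the normal equations."""
        self.AtA += np.outer(a, a)
        self.Atb += beta * a

    def solve(self):
        # Pseudoinverse solve also handles the early rank-deficient phase.
        return np.linalg.pinv(self.AtA) @ self.Atb

rng = np.random.default_rng(1)
d, T = 5, 100
x_true = rng.standard_normal(d)
lsr = IncrementalLSR(d)
for _ in range(T):
    a = rng.standard_normal(d)
    lsr.insert(a, a @ x_true + 0.01 * rng.standard_normal())
print(np.linalg.norm(lsr.solve() - x_true))
```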

  4. Book ; Online: Fast Distance Oracles for Any Symmetric Norm

    Deng, Yichuan / Song, Zhao / Weinstein, Omri / Zhang, Ruizhe

    2022  

    Abstract In the Distance Oracle problem, the goal is to preprocess $n$ vectors $x_1, x_2, \cdots, x_n$ in a $d$-dimensional metric space $(\mathbb{X}^d, \| \cdot \|_l)$ into a cheap data structure, so that given a query vector $q \in \mathbb{X}^d$ and a subset $S\subseteq [n]$ of the input data points, all distances $\| q - x_i \|_l$ for $x_i\in S$ can be quickly approximated (faster than the trivial $\sim d|S|$ query time). This primitive is a basic subroutine in machine learning, data mining and similarity search applications. In the case of $\ell_p$ norms, the problem is well understood, and optimal data structures are known for most values of $p$. Our main contribution is a fast $(1+\varepsilon)$ distance oracle for any symmetric norm $\|\cdot\|_l$. This class includes $\ell_p$ norms and Orlicz norms as special cases, as well as other norms used in practice, e.g. top-$k$ norms, max-mixture and sum-mixture of $\ell_p$ norms, small-support norms and the box-norm. We propose a novel data structure with $\tilde{O}(n (d + \mathrm{mmc}(l)^2 ) )$ preprocessing time and space, and $t_q = \tilde{O}(d + |S| \cdot \mathrm{mmc}(l)^2)$ query time, for computing distances to a subset $S$ of data points, where $\mathrm{mmc}(l)$ is a complexity-measure (concentration modulus) of the symmetric norm. When $l = \ell_{p}$ , this runtime matches the aforementioned state-of-art oracles.
    Keywords Computer Science - Data Structures and Algorithms
    Subject code 006
    Publishing date 2022-05-29
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
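
    The snippet below shows the trivial $\sim d|S|$ query baseline the oracle improves on, using a top-$k$ norm as an example of a symmetric norm. Data sizes and helper names are illustrative assumptions; none of the paper's preprocessing or sketching is implemented here.

```python
import numpy as np

def top_k_norm(v, k):
    """A symmetric norm: the sum of the k largest absolute coordinates of v."""
    return np.sort(np.abs(v))[-k:].sum()

rng = np.random.default_rng(2)
n, d = 1000, 32
X = rng.standard_normal((n, d))              # preprocessed data points x_1, ..., x_n
q = rng.standard_normal(d)                   # query vector
S = rng.choice(n, size=50, replace=False)    # query subset of the data points

# Trivial baseline: scan S and evaluate the norm directly, ~ d * |S| time.
dists = [top_k_norm(q - X[i], k=5) for i in S]
print(len(dists), max(dists))
```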

  5. Book ; Online: Sparse Fourier Transform over Lattices

    Song, Zhao / Sun, Baocheng / Weinstein, Omri / Zhang, Ruizhe

    A Unified Approach to Signal Reconstruction

    2022  

    Abstract We revisit the classical problem of band-limited signal reconstruction -- a variant of the *Set Query* problem -- which asks to efficiently reconstruct (a subset of) a $d$-dimensional Fourier-sparse signal ($\|\hat{x}(t)\|_0 \leq k$), from minimum noisy samples of $x(t)$ in the time domain. We present a unified framework for this problem, by developing a theory of sparse Fourier transforms over *lattices*, which can be viewed as a "semi-continuous" version of SFT, in-between discrete and continuous domains. Using this framework, we obtain the following results: $\bullet$ *High-dimensional Fourier sparse recovery* We present a sample-optimal discrete Fourier Set-Query algorithm with $O(k^{\omega+1})$ reconstruction time in one dimension, independent of the signal's length ($n$) and $\ell_\infty$-norm ($R^* \approx \|\hat{x}\|_\infty$). This complements the state-of-art algorithm of [Kap17], whose reconstruction time is $\tilde{O}(k \log^2 n \log R^*)$, and is limited to low-dimensions. By contrast, our algorithm works for arbitrary $d$ dimensions, mitigating the $\exp(d)$ blowup in decoding time to merely linear in $d$. $\bullet$ *High-accuracy Fourier interpolation* We design a polynomial-time $(1+ \sqrt{2} +\epsilon)$-approximation algorithm for continuous Fourier interpolation. This bypasses a barrier of all previous algorithms [PS15, CKPS16] which only achieve $>100$ approximation for this problem. Our algorithm relies on several new ideas of independent interest in signal estimation, including high-sensitivity frequency estimation and new error analysis with sharper noise control. $\bullet$ *Fourier-sparse interpolation with optimal output sparsity* We give a $k$-Fourier-sparse interpolation algorithm with optimal output signal sparsity, improving on the approximation ratio, sample complexity and runtime of prior works [CKPS16, CP19].
    Keywords Computer Science - Data Structures and Algorithms
    Subject code 518
    Publishing date 2022-05-02
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  6. Book ; Online: Training Overparametrized Neural Networks in Sublinear Time

    Hu, Hang / Song, Zhao / Weinstein, Omri / Zhuo, Danyang

    2022  

    Abstract The success of deep learning comes at a tremendous computational and energy cost, and the scalability of training massively overparametrized neural networks is becoming a real barrier to the progress of AI. Despite the popularity and low cost-per-iteration of traditional Backpropagation via gradient descent, SGD has a prohibitive convergence rate in non-convex settings, both in theory and practice. To mitigate this cost, recent works have proposed to employ alternative (Newton-type) training methods with a much faster convergence rate, albeit with higher cost-per-iteration. For a typical neural network with $m=\mathrm{poly}(n)$ parameters and input batch of $n$ datapoints in $\mathbb{R}^d$, the previous work of [Brand, Peng, Song, and Weinstein, ITCS'2021] requires $\sim mnd + n^3$ time per iteration. In this paper, we present a novel training method that requires only $m^{1-\alpha} n d + n^3$ amortized time in the same overparametrized regime, where $\alpha \in (0.01,1)$ is some fixed constant. This method relies on a new and alternative view of neural networks, as a set of binary search trees, where each iteration corresponds to modifying a small subset of the nodes in the tree. We believe this view would have further applications in the design and analysis of DNNs.
    Keywords Computer Science - Machine Learning ; Computer Science - Data Structures and Algorithms ; Statistics - Machine Learning
    Subject code 006
    Publishing date 2022-08-08
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  7. Book ; Online: Dynamic Kernel Sparsifiers

    Deng, Yichuan / Jin, Wenyu / Song, Zhao / Sun, Xiaorui / Weinstein, Omri

    2022  

    Abstract A geometric graph associated with a set of points $P= \{x_1, x_2, \cdots, x_n \} \subset \mathbb{R}^d$ and a fixed kernel function $\mathsf{K}:\mathbb{R}^d\times \mathbb{R}^d\to\mathbb{R}_{\geq 0}$ is a complete graph on $P$ such that the weight of edge $(x_i, x_j)$ is $\mathsf{K}(x_i, x_j)$. We present a fully-dynamic data structure that maintains a spectral sparsifier of a geometric graph under updates that change the locations of points in $P$ one at a time. The update time of our data structure is $n^{o(1)}$ with high probability, and the initialization time is $n^{1+o(1)}$. Under certain assumptions, we can provide a fully dynamic spectral sparsifier that is robust to an adaptive adversary. We further show that, for the Laplacian matrices of these geometric graphs, it is possible to maintain random sketches for the results of matrix vector multiplication and inverse-matrix vector multiplication in $n^{o(1)}$ time, under updates that change the locations of points in $P$ or change the query vector by a sparse difference.
    Keywords Computer Science - Data Structures and Algorithms
    Subject code 519
    Publishing date 2022-11-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
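
    As a point of reference, the sketch below builds the full $n \times n$ kernel (geometric) graph and its Laplacian for a Gaussian kernel, then rebuilds it from scratch after moving one point, which is the naive $\Theta(n^2)$ behaviour the dynamic sparsifier avoids. It is an illustrative baseline with invented helper names, not the paper's data structure.

```python
import numpy as np

def kernel_graph_laplacian(P, kernel):
    """Laplacian of the complete geometric graph on points P,
    with edge weight kernel(x_i, x_j) on every pair i != j."""
    n = len(P)
    W = np.array([[kernel(P[i], P[j]) if i != j else 0.0 for j in range(n)]
                  for i in range(n)])
    return np.diag(W.sum(axis=1)) - W

gaussian = lambda x, y: np.exp(-np.sum((x - y) ** 2))

rng = np.random.default_rng(3)
P = rng.standard_normal((200, 3))
L = kernel_graph_laplacian(P, gaussian)

# Naive handling of an update: move one point and rebuild the whole graph
# (Theta(n^2) kernel evaluations). The paper instead maintains a spectral
# sparsifier of this graph in n^{o(1)} time per such move.
P[17] += 0.1
L = kernel_graph_laplacian(P, gaussian)
print(L.shape)
```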

  8. Book ; Online: Settling the relationship between Wilber's bounds for dynamic optimality

    Lecomte, Victor / Weinstein, Omri

    2019  

    Abstract In FOCS 1986, Wilber proposed two combinatorial lower bounds on the operational cost of any binary search tree (BST) for a given access sequence $X \in [n]^m$. Both bounds play a central role in the ongoing pursuit of the dynamic optimality conjecture (Sleator and Tarjan, 1985), but their relationship remained unknown for more than three decades. We show that Wilber's Funnel bound dominates his Alternation bound for all $X$, and give a tight $\Theta(\lg\lg n)$ separation for some $X$, answering Wilber's conjecture and an open problem of Iacono, Demaine et al. The main ingredient of the proof is a new "symmetric" characterization of Wilber's Funnel bound, which proves that it is invariant under rotations of $X$. We use this characterization to provide an initial indication that the Funnel bound matches the Independent Rectangle bound (Demaine et al., 2009), by proving that when the Funnel bound is constant, $\mathsf{IRB}_{\diagup\hspace{-.6em}\square}$ is linear. To the best of our knowledge, our results provide the first progress on Wilber's conjecture that the Funnel bound is dynamically optimal (1986).

    Comment: ESA 2020; 25 pages, 18 figures; v3 applies reviewers' comments
    Keywords Computer Science - Data Structures and Algorithms
    Subject code 511
    Publishing date 2019-12-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  9. Book ; Online: An Adaptive Step Toward the Multiphase Conjecture

    Ko, Young Kun / Weinstein, Omri

    2019  

    Abstract In 2010, P\v{a}tra\c{s}cu proposed the following three-phase dynamic problem, as a candidate for proving polynomial lower bounds on the operational time of dynamic data structures: I: Preprocess a collection of sets $\vec{S} = S_1, \ldots , S_k \subseteq [n]$, where $k=\operatorname{poly}(n)$. II: A set $T\subseteq [n]$ is revealed, and the data structure updates its memory. III: An index $i \in [k]$ is revealed, and the data structure must determine if $S_i\cap T=^? \emptyset$. P\v{a}tra\c{s}cu conjectured that any data structure for the Multiphase problem must make $n^\epsilon$ cell-probes in either Phase II or III, and showed that this would imply similar unconditional lower bounds on many important dynamic data structure problems. Alas, there has been almost no progress on this conjecture in the past decade since its introduction. We show an $\tilde{\Omega}(\sqrt{n})$ cell-probe lower bound on the Multiphase problem for data structures with general (adaptive) updates, and queries with unbounded but "layered" adaptivity. This result captures all known set-intersection data structures and significantly strengthens previous Multiphase lower bounds, which only captured non-adaptive data structures. Our main technical result is a communication lower bound on a 4-party variant of P\v{a}tra\c{s}cu's Number-On-Forehead Multiphase game, using information complexity techniques. We also show that a lower bound on P\v{a}tra\c{s}cu's original NOF game would imply a polynomial ($n^{1+\epsilon}$) lower bound on the number of wires of any constant-depth circuit with arbitrary gates computing a random $\tilde{O}(n)\times n$ linear operator $x \mapsto Ax$, a long-standing open problem in circuit complexity. This suggests that the NOF conjecture is much stronger than its data structure counterpart.

    Comment: 26 pages, 4 figures
    Keywords Computer Science - Data Structures and Algorithms ; Computer Science - Computational Complexity
    Subject code 511
    Publishing date 2019-10-29
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
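
    The snippet below spells out the three phases of the Multiphase problem with the naive strategy that stores everything verbatim and does the intersection work only in Phase III. It merely illustrates the problem being lower-bounded, under arbitrarily chosen toy parameters, and has nothing to do with the cell-probe or communication arguments in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 64, 16

# Phase I: preprocess the sets S_1, ..., S_k subsets of [n] (stored as boolean rows).
S = rng.random((k, n)) < 0.3

# Phase II: a set T subset of [n] is revealed; the naive scheme just stores it.
T = rng.random(n) < 0.3

# Phase III: an index i is revealed; decide whether S_i and T are disjoint
# by scanning both sets (the work the conjecture says cannot always be cheap
# in Phases II and III simultaneously).
i = int(rng.integers(k))
disjoint = not np.any(S[i] & T)
print(i, disjoint)
```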

  10. Book ; Online: A Dynamic Fast Gaussian Transform

    Huang, Baihe / Song, Zhao / Weinstein, Omri / Zhang, Hengjie / Zhang, Ruizhe

    2022  

    Abstract The Fast Gaussian Transform (FGT) enables subquadratic-time multiplication of an $n\times n$ Gaussian kernel matrix $\mathsf{K}_{i,j}= \exp ( - \| x_i - x_j \|_2^2 ) $ with an arbitrary vector $h \in \mathbb{R}^n$, where $x_1,\dots, x_n \in \mathbb{R}^d$ are a set of fixed source points. This kernel plays a central role in machine learning and random feature maps. Nevertheless, in most modern ML and data analysis applications, datasets are dynamically changing, and recomputing the FGT from scratch in (kernel-based) algorithms, incurs a major computational overhead ($\gtrsim n$ time for a single source update $\in \mathbb{R}^d$). These applications motivate the development of a dynamic FGT algorithm, which maintains a dynamic set of sources under kernel-density estimation (KDE) queries in sublinear time, while retaining Mat-Vec multiplication accuracy and speed. Our main result is an efficient dynamic FGT algorithm, supporting the following operations in $\log^{O(d)}(n/\epsilon)$ time: (1) Adding or deleting a source point, and (2) Estimating the "kernel-density" of a query point with respect to sources with $\epsilon$ additive accuracy. The core of the algorithm is a dynamic data structure for maintaining the "interaction rank" between source and target boxes, which we decouple into finite truncation of Taylor series and Hermite expansions.
    Keywords Computer Science - Data Structures and Algorithms
    Subject code 006
    Publishing date 2022-02-24
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
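
    For reference, the sketch below answers a Gaussian kernel-density query by brute force over a dynamically changing source set, the $\Theta(nd)$-per-query behaviour that motivates a dynamic FGT. It is only the naive baseline under made-up parameters, not the paper's $\log^{O(d)}(n/\epsilon)$-time data structure.

```python
import numpy as np

def kde(query, sources):
    """Exact Gaussian kernel density of a query point w.r.t. the current sources:
    sum_i exp(-||q - x_i||_2^2). Costs Theta(n d) per query, and every source
    update invalidates any precomputed transform -- the overhead a dynamic FGT
    is designed to avoid."""
    return np.exp(-np.sum((sources - query) ** 2, axis=1)).sum()

rng = np.random.default_rng(5)
sources = list(rng.standard_normal((500, 3)))

# Dynamic workload: add a source, delete a source, then answer a KDE query.
sources.append(rng.standard_normal(3))
sources.pop(0)
q = rng.standard_normal(3)
print(kde(q, np.asarray(sources)))
```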
