LIVIVO - The Search Portal for Life Sciences

Search results

Hits 1 - 10 of 55 in total

  1. Article ; Online: Efficiency and tolerance of second-line triple BRAF inhibitor/MEK inhibitor/anti-PD1 combined therapy in BRAF mutated melanoma patients with central nervous system metastases occurring during first-line combined targeted therapy: a real-life survey.

    Fabre, Marie / Lamoureux, Anouck / Meunier, Laurent / Samaran, Quentin / Lesage, Candice / Girard, Céline / Du Thanh, Aurélie / Moulis, Lionel / Dereure, Olivier

    Melanoma research

    2024  Volume 34, Issue 3, Page(s) 241–247

    Abstract Although current systemic therapies significantly improved the outcome of advanced melanoma, the prognosis of patients with central nervous system (CNS) metastases remains poor, especially when clinically symptomatic. We aimed to investigate the efficiency on CNS targets and the tolerance of second-line combined anti-PD1/dual-targeted anti-BRAF/anti-MEK therapy implemented in patients with CNS progression after initially efficient first-line combined targeted therapy in patients with BRAF-mutated melanoma in a real-life setting. A monocentric retrospective analysis including all such patients treated from January 2017 to January 2022 was conducted in our tertiary referral center. The response of CNS lesions to second-line triple therapy was assessed through monthly clinical and at least quarterly morphological (according to RECIST criteria) evaluation. Tolerance data were also collected. Seventeen patients were included with a mean follow-up of 2.59 (±2.43) months. Only 1 patient displayed a significant clinical and morphological response. No statistically significant difference in response achievement was observed between patients who did and did not receive additional local therapy (mainly radiotherapy). Immunotherapy was permanently discontinued in 1 patient owing to grade 4 toxicity. Mean PFS and OS after CNS progression were 2.59 and 4.12 months, respectively. In this real-life survey, the subsequent addition of anti-PD1 to combined targeted therapy in melanoma patients with upfront CNS metastases did not result in significant response of CNS targets in most BRAF mutated melanoma patients with secondary CNS progression after initially successful first-line combined targeted therapy.
    MeSH term(s) Humans ; Melanoma/drug therapy ; Melanoma/genetics ; Melanoma/pathology ; Female ; Male ; Proto-Oncogene Proteins B-raf/genetics ; Middle Aged ; Aged ; Retrospective Studies ; Central Nervous System Neoplasms/secondary ; Central Nervous System Neoplasms/drug therapy ; Adult ; Skin Neoplasms/drug therapy ; Skin Neoplasms/genetics ; Skin Neoplasms/pathology ; Protein Kinase Inhibitors/therapeutic use ; Protein Kinase Inhibitors/pharmacology ; Antineoplastic Combined Chemotherapy Protocols/therapeutic use ; Immune Checkpoint Inhibitors/therapeutic use ; Immune Checkpoint Inhibitors/pharmacology ; Mutation ; Programmed Cell Death 1 Receptor/antagonists & inhibitors ; Aged, 80 and over
    Chemical substances Proto-Oncogene Proteins B-raf (EC 2.7.11.1) ; BRAF protein, human (EC 2.7.11.1) ; Protein Kinase Inhibitors ; Immune Checkpoint Inhibitors ; Programmed Cell Death 1 Receptor
    Language English
    Publication date 2024-03-28
    Country of publication England
    Document type Journal Article
    ZDB-ID 1095779-0
    ISSN (online) 1473-5636
    ISSN 0960-8931
    DOI 10.1097/CMR.0000000000000963
    Data source MEDLINE (MEDical Literature Analysis and Retrieval System OnLINE)

  2. Article ; Online: Infrared-induced hives.

    Aljaber, Faisal / Du-Thanh, Aurélie / Raison-Peyron, Nadia / Meunier, Laurent / Dereure, Olivier / Bourrain, Jean Luc

    The journal of allergy and clinical immunology. In practice

    2023  Volume 11, Issue 8, Page(s) 2581–2582

    MeSH term(s) Humans ; Urticaria/etiology ; Infrared Rays/adverse effects
    Language English
    Publication date 2023-04-07
    Country of publication United States
    Document type Case Reports ; Journal Article
    ZDB-ID 2843237-X
    ISSN (online) 2213-2201
    ISSN 2213-2198
    DOI 10.1016/j.jaip.2023.03.045
    Data source MEDLINE (MEDical Literature Analysis and Retrieval System OnLINE)

  3. Book ; Online: Towards Consistency in Adversarial Classification

    Meunier, Laurent / Ettedgui, Raphaël / Pinot, Rafael / Chevaleyre, Yann / Atif, Jamal

    2022  

    Abstract In this paper, we study the problem of consistency in the context of adversarial examples. Specifically, we tackle the following question: can surrogate losses still be used as a proxy for minimizing the $0/1$ loss in the presence of an adversary that alters the inputs at test-time? Different from the standard classification task, this question cannot be reduced to a point-wise minimization problem, and calibration need not be sufficient to ensure consistency. In this paper, we expose some pathological behaviors specific to the adversarial problem, and show that no convex surrogate loss can be consistent or calibrated in this context. It is therefore necessary to design another class of surrogate functions that can be used to solve the adversarial consistency issue. As a first step towards designing such a class, we identify sufficient and necessary conditions for a surrogate loss to be calibrated in both the adversarial and standard settings. Finally, we give some directions for building a class of losses that could be consistent in the adversarial framework.
    Keywords Computer Science - Machine Learning
    Subject category (code) 006
    Publication date 2022-05-20
    Country of publication US
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
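
    One standard way to state in symbols the question this abstract raises is sketched below; the notation (real-valued score f, surrogate loss \phi, perturbation budget \varepsilon) is a common convention in this literature, not quoted from the paper.

        R_\varepsilon(f) = \mathbb{E}_{(X,Y)}\Big[\sup_{\|\delta\|\le\varepsilon} \mathbf{1}\{\,Y f(X+\delta) \le 0\,\}\Big],
        \qquad
        R_\varepsilon^{\phi}(f) = \mathbb{E}_{(X,Y)}\Big[\sup_{\|\delta\|\le\varepsilon} \phi\big(Y f(X+\delta)\big)\Big]

    A surrogate \phi is adversarially consistent if, for every sequence of measurable scores f_n, R_\varepsilon^{\phi}(f_n) \to \inf_f R_\varepsilon^{\phi}(f) implies R_\varepsilon(f_n) \to \inf_f R_\varepsilon(f). The abstract's negative result says that no convex \phi can have this property (or even be calibrated) in this adversarial setting.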

  4. Book ; Online: An Asymptotic Test for Conditional Independence using Analytic Kernel Embeddings

    Scetbon, Meyer / Meunier, Laurent / Romano, Yaniv

    2021  

    Abstract We propose a new conditional dependence measure and a statistical test for conditional independence. The measure is based on the difference between analytic kernel embeddings of two well-suited distributions evaluated at a finite set of locations. We obtain its asymptotic distribution under the null hypothesis of conditional independence and design a consistent statistical test from it. We conduct a series of experiments showing that our new test outperforms state-of-the-art methods both in terms of type-I and type-II errors even in the high dimensional setting.
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Publication date 2021-10-27
    Country of publication US
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
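
    As a deliberately simplified illustration of the building block this abstract relies on, the sketch below compares the empirical kernel mean embeddings of two samples at a finite set of locations. The paper's actual statistic targets conditional independence and uses well-suited conditional distributions; the Gaussian kernel, the random locations and the toy data here are illustrative choices only.

        # Simplified sketch: empirical kernel mean embeddings of two samples,
        # evaluated at a finite set of locations T (illustrative choices only).
        import numpy as np

        def gaussian_kernel(X, T, bandwidth=1.0):
            d2 = ((X[:, None, :] - T[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * bandwidth ** 2))

        def embedding_difference(X, Y, T):
            mu_x = gaussian_kernel(X, T).mean(axis=0)   # embedding of sample X at locations T
            mu_y = gaussian_kernel(Y, T).mean(axis=0)   # embedding of sample Y at locations T
            return mu_x - mu_y

        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 2))
        Y = rng.standard_normal((500, 2)) + 0.5         # shifted sample
        T = rng.standard_normal((5, 2))                 # a finite set of test locations
        diff = embedding_difference(X, Y, T)
        print(np.sum(diff ** 2))                        # large values suggest the distributions differ

    In the paper, a location-wise difference of this kind is built from conditional embeddings and turned into a test statistic whose asymptotic distribution under the null of conditional independence is known.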

  5. Book ; Online: Asymptotic convergence rates for averaging strategies

    Meunier, Laurent / Legheraba, Iskander / Chevaleyre, Yann / Teytaud, Olivier

    2021  

    Abstract Parallel black box optimization consists in estimating the optimum of a function $f$ using $\lambda$ parallel evaluations of $f$. Averaging the $\mu$ best individuals among the $\lambda$ evaluations is known to provide better estimates of the optimum of a function than just picking the best. In continuous domains, this averaging is typically just based on (possibly weighted) arithmetic means. Previous theoretical results were based on quadratic objective functions. In this paper, we extend the results to a wide class of functions, containing three times continuously differentiable functions with a unique optimum. We prove formal rates of convergence and show they are indeed better than pure random search asymptotically in $\lambda$. We validate our theoretical findings with experiments on some standard black box functions.
    Keywords Mathematics - Optimization and Control ; Computer Science - Neural and Evolutionary Computing
    Publication date 2021-08-10
    Country of publication US
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
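
    The strategy analyzed in this abstract is simple enough to sketch directly: draw the λ evaluations, keep the μ best, and return their arithmetic mean as the estimate of the optimum. The quadratic test function and the parameter values below are my own choices for illustration; the paper's contribution is the convergence-rate analysis, not this code.

        # Average of the mu best out of lambda parallel evaluations (illustrative sketch).
        import numpy as np

        def average_of_mu_best(f, center, sigma, lam, mu, rng):
            candidates = center + sigma * rng.standard_normal((lam, center.size))
            values = np.array([f(x) for x in candidates])
            best = candidates[np.argsort(values)[:mu]]   # the mu best individuals
            return best.mean(axis=0)                     # unweighted arithmetic mean

        rng = np.random.default_rng(0)
        d, lam, mu = 10, 200, 20
        optimum = 0.5 * np.ones(d)
        f = lambda x: np.sum((x - optimum) ** 2)         # smooth function with a unique optimum

        est_avg = average_of_mu_best(f, np.zeros(d), 1.0, lam, mu, rng)
        est_best = average_of_mu_best(f, np.zeros(d), 1.0, lam, 1, rng)   # mu = 1: just pick the best
        print(np.linalg.norm(est_avg - optimum), np.linalg.norm(est_best - optimum))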

  6. Book ; Online: Scalable Lipschitz Residual Networks with Convex Potential Flows

    Meunier, Laurent / Delattre, Blaise / Araujo, Alexandre / Allauzen, Alexandre

    2021  

    Abstract The Lipschitz constant of neural networks has been established as a key property to enforce the robustness of neural networks to adversarial examples. However, recent attempts to build $1$-Lipschitz Neural Networks have all shown limitations, and robustness has to be traded for accuracy and scalability or vice versa. In this work, we first show that using convex potentials in a residual network gradient flow provides a built-in $1$-Lipschitz transformation. From this insight, we leverage the work on Input Convex Neural Networks to parametrize efficient layers with this property. A comprehensive set of experiments on CIFAR-10 demonstrates the scalability of our architecture and the benefit of our approach for $\ell_2$ provable defenses. Indeed, we train very deep and wide neural networks (up to $1000$ layers) and reach state-of-the-art results in terms of standard and certified accuracy, along with empirical robustness, in comparison with other $1$-Lipschitz architectures.
    Keywords Computer Science - Machine Learning
    Subject category (code) 006
    Publication date 2021-10-25
    Country of publication US
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
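
    A minimal numpy sketch of the kind of residual update the abstract describes: for a convex potential ψ(x) = Σ_i ρ(w_i·x + b_i) with ρ' = ReLU, one gradient step gives x − h·Wᵀ ReLU(Wx + b). The step size h = 2/‖W‖₂² is the choice I associate with this line of work (treat it as an assumption here); with ReLU it makes the layer 1-Lipschitz, which the last lines check empirically.

        # One residual layer built from a convex potential (sketch; h = 2/||W||_2^2 assumed).
        import numpy as np

        def convex_potential_layer(x, W, b):
            relu = lambda z: np.maximum(z, 0.0)
            h = 2.0 / np.linalg.norm(W, 2) ** 2          # 2 / (spectral norm of W)^2
            return x - h * W.T @ relu(W @ x + b)

        rng = np.random.default_rng(0)
        d_in, d_hidden = 8, 16
        W = rng.standard_normal((d_hidden, d_in))
        b = rng.standard_normal(d_hidden)

        # Empirical 1-Lipschitz check: the layer should never expand distances.
        x, y = rng.standard_normal(d_in), rng.standard_normal(d_in)
        lhs = np.linalg.norm(convex_potential_layer(x, W, b) - convex_potential_layer(y, W, b))
        print(lhs <= np.linalg.norm(x - y) + 1e-9)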

  7. Book ; Online: On the Role of Randomization in Adversarially Robust Classification

    Gnecco-Heredia, Lucas / Chevaleyre, Yann / Negrevergne, Benjamin / Meunier, Laurent / Pydi, Muni Sreenivas

    2023  

    Abstract Deep neural networks are known to be vulnerable to small adversarial perturbations in test data. To defend against adversarial attacks, probabilistic classifiers have been proposed as an alternative to deterministic ones. However, literature has conflicting findings on the effectiveness of probabilistic classifiers in comparison to deterministic ones. In this paper, we clarify the role of randomization in building adversarially robust classifiers. Given a base hypothesis set of deterministic classifiers, we show the conditions under which a randomized ensemble outperforms the hypothesis set in adversarial risk, extending previous results. Additionally, we show that for any probabilistic binary classifier (including randomized ensembles), there exists a deterministic classifier that outperforms it. Finally, we give an explicit description of the deterministic hypothesis set that contains such a deterministic classifier for many types of commonly used probabilistic classifiers, i.e. randomized ensembles and parametric/input noise injection.

    Comment: 10 pages main paper (27 total), 2 figures in main paper. Neurips 2023
    Keywords Computer Science - Machine Learning
    Subject category (code) 006
    Publication date 2023-02-14
    Country of publication US
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
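
    To make 'outperforms in adversarial risk' precise for a randomized classifier, the usual convention in this literature (my paraphrase, not a quote from the paper) is that the adversary knows the mixture q over hypotheses but not the realized draw:

        R_\varepsilon(q) = \mathbb{E}_{(X,Y)}\Big[\sup_{\|\delta\|\le\varepsilon} \mathbb{E}_{h\sim q}\big[\mathbf{1}\{h(X+\delta)\ne Y\}\big]\Big]

    A deterministic classifier is the special case where q is a point mass, so the abstract's second result reads: for every probabilistic binary classifier q there exists a deterministic h with R_\varepsilon(h) \le R_\varepsilon(q).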

  8. Book ; Online: Variance Reduction for Better Sampling in Continuous Domains

    Meunier, Laurent / Doerr, Carola / Rapin, Jeremy / Teytaud, Olivier

    2020  

    Abstract Design of experiments, random search, initialization of population-based methods, or sampling inside an epoch of an evolutionary algorithm use a sample drawn according to some probability distribution for approximating the location of an optimum. Recent papers have shown that the optimal search distribution, used for the sampling, might be more peaked around the center of the distribution than the prior distribution modelling our uncertainty about the location of the optimum. We confirm this statement, provide explicit values for this reshaping of the search distribution depending on the population size $\lambda$ and the dimension $d$, and validate our results experimentally.
    Keywords Computer Science - Neural and Evolutionary Computing ; Computer Science - Machine Learning ; Statistics - Machine Learning
    Publication date 2020-04-24
    Country of publication US
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
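
    The claim in this abstract is easy to probe numerically: draw the optimum from the prior N(0, I_d), sample λ candidates either from the prior itself or from a more peaked N(0, σ²I_d), and compare how close the best candidate gets to the optimum. The value σ = 0.5 below is an arbitrary illustration, not the optimal rescaling derived in the paper.

        # Monte-Carlo sketch: a search distribution more peaked than the prior (sigma < 1)
        # tends to place its best sample closer to the optimum (illustrative values only).
        import numpy as np

        def mean_best_distance(sigma, lam, d, trials, rng):
            dists = []
            for _ in range(trials):
                optimum = rng.standard_normal(d)               # optimum drawn from the prior N(0, I_d)
                pts = sigma * rng.standard_normal((lam, d))    # search distribution N(0, sigma^2 I_d)
                vals = np.sum((pts - optimum) ** 2, axis=1)    # sphere function centred at the optimum
                dists.append(np.sqrt(vals.min()))              # distance of the best sample to the optimum
            return np.mean(dists)

        rng = np.random.default_rng(0)
        d, lam, trials = 20, 100, 200
        print("prior  sigma=1.0:", mean_best_distance(1.0, lam, d, trials, rng))
        print("peaked sigma=0.5:", mean_best_distance(0.5, lam, d, trials, rng))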

  9. Book ; Online: Advocating for Multiple Defense Strategies against Adversarial Examples

    Araujo, Alexandre / Meunier, Laurent / Pinot, Rafael / Negrevergne, Benjamin

    2020  

    Abstract It has been empirically observed that defense mechanisms designed to protect neural networks against $\ell_\infty$ adversarial examples offer poor performance against $\ell_2$ adversarial examples and vice versa. In this paper we conduct a geometrical analysis that validates this observation. Then, we provide a number of empirical insights to illustrate the effect of this phenomenon in practice. We then review some of the existing defense mechanisms that attempt to defend against multiple attacks by mixing defense strategies. Thanks to our numerical experiments, we discuss the relevance of this method and state open questions for the adversarial examples community.

    Comment: Workshop on Machine Learning for CyberSecurity (MLCS@ECML-PKDD)
    Keywords Computer Science - Machine Learning
    Publication date 2020-12-04
    Country of publication US
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
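
    A back-of-the-envelope calculation illustrates the geometric point behind this abstract: in dimension d, the corners of the ℓ∞ ball of radius ε_inf sit at ℓ2 distance ε_inf·√d, so a defense calibrated for one threat model does not automatically cover the other. The numbers below (d = 784 and the ε values commonly used for MNIST-scale images) are standard benchmark choices, not figures taken from the paper.

        # How far the corner of an l_inf ball reaches in l_2, for common benchmark budgets.
        import math

        d = 784          # e.g. 28x28 grayscale images
        eps_inf = 0.3    # a typical l_inf budget at this scale
        eps_2 = 2.0      # a typical l_2 budget at this scale

        corner_l2 = eps_inf * math.sqrt(d)
        print("l_2 reach of the l_inf ball's corner:", corner_l2)        # ~8.4
        print("ratio to the usual l_2 budget:", corner_l2 / eps_2)       # ~4.2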

  10. Book ; Online: Equitable and Optimal Transport with Multiple Agents

    Scetbon, Meyer / Meunier, Laurent / Atif, Jamal / Cuturi, Marco

    2020  

    Abstract We introduce an extension of the Optimal Transport problem when multiple costs are involved. Considering each cost as an agent, we aim to share equally between agents the work of transporting one distribution to another. To do so, we minimize the transportation cost of the agent who works the most. Another point of view arises when the goal is to partition goods equitably between agents according to their heterogeneous preferences. Here we aim to maximize the utility of the least advantaged agent. This is a fair division problem. Like Optimal Transport, the problem can be cast as a linear optimization problem. When there is only one agent, we recover the Optimal Transport problem. When two agents are considered, we are able to recover Integral Probability Metrics defined by $\alpha$-Hölder functions, which include the widely-known Dudley metric. To the best of our knowledge, this is the first time a link is given between the Dudley metric and Optimal Transport. We provide an entropic regularization of that problem, which leads to an alternative algorithm faster than the standard linear program.
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning ; Mathematics - Optimization and Control
    Subject category (code) 006
    Publication date 2020-06-12
    Country of publication US
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)
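
    Because the abstract notes that the problem 'can be cast as a linear optimization problem', here is a minimal sketch of that cast with scipy: minimize the largest per-agent transport cost over couplings with fixed marginals. The toy cost matrices and marginals are my own; this illustrates the LP formulation described in the abstract, not the authors' code (which, per the abstract, is eventually replaced by a faster entropic-regularization algorithm).

        # Minimize the transport cost of the agent who works the most, as a linear program.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        n, m, n_agents = 4, 5, 3
        a = np.full(n, 1.0 / n)                                  # source marginal
        b = np.full(m, 1.0 / m)                                  # target marginal
        costs = [rng.random((n, m)) for _ in range(n_agents)]    # one toy cost matrix per agent

        num_pi = n * m                                           # variables: flattened coupling pi, then t
        c = np.zeros(num_pi + 1)
        c[-1] = 1.0                                              # objective: minimize t

        A_ub = np.zeros((n_agents, num_pi + 1))                  # <C_k, pi> - t <= 0 for every agent k
        for k, C in enumerate(costs):
            A_ub[k, :num_pi] = C.ravel()
            A_ub[k, -1] = -1.0
        b_ub = np.zeros(n_agents)

        A_eq = np.zeros((n + m, num_pi + 1))                     # marginal constraints on pi
        for i in range(n):
            A_eq[i, i * m:(i + 1) * m] = 1.0                     # row sums equal a
        for j in range(m):
            A_eq[n + j, j:num_pi:m] = 1.0                        # column sums equal b
        b_eq = np.concatenate([a, b])

        bounds = [(0, None)] * num_pi + [(None, None)]           # pi >= 0, t free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        pi = res.x[:num_pi].reshape(n, m)
        print("largest per-agent cost:", res.x[-1])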
