LIVIVO - The Search Portal for Life Sciences


Search results

Hits 1 - 10 of 40 total

  1. Article ; Online: Contrast trees and distribution boosting.

    Friedman, Jerome H

    Proceedings of the National Academy of Sciences of the United States of America

    2020  Volume 117, Issue 35, Page(s) 21175–21184

    Abstract A method for decision tree induction is presented. Given a set of predictor variables x and two outcome variables y and z associated with each x, the goal is to identify those values of x for which the respective distributions of y | x and z | x, or selected properties of those distributions such as means or quantiles, are most different. Contrast trees provide a lack-of-fit measure for statistical models of such statistics, or for the complete conditional distribution p(y | x), as a function of x. They are easily interpreted and can be used as diagnostic tools to reveal and then understand the inaccuracies of models produced by any learning method. A corresponding contrast-boosting strategy is described for remedying any uncovered errors, thereby producing potentially more accurate predictions. This leads to a distribution-boosting strategy for directly estimating the full conditional distribution of y at each x under no assumptions concerning its shape, form, or parametric representation.
    Language English
    Publication date 2020-08-19
    Country of publication United States
    Document type Journal Article
    ZDB-ID 209104-5
    ISSN (online) 1091-6490
    ISSN (print) 0027-8424
    DOI 10.1073/pnas.1921562117
    Data source MEDical Literature Analysis and Retrieval System OnLINE

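    A minimal sketch of the split criterion behind entry 1's contrast trees: given paired outcomes y and z observed at each x, a contrast split looks for the predictor threshold that maximizes the discrepancy between the two outcome distributions inside the child nodes. The discrepancy measure below (mean absolute difference of sorted samples) and all function names are illustrative assumptions, not definitions from the paper.

      # Single greedy "contrast" split: find the threshold on one predictor
      # that maximizes the size-weighted discrepancy between the empirical
      # distributions of y and z in the two child nodes.  The discrepancy
      # (mean absolute difference of the sorted samples) is a crude
      # empirical transport distance, not one of the paper's measures.
      import numpy as np

      def discrepancy(y, z):
          """Distance between the empirical distributions of paired y and z."""
          return np.abs(np.sort(y) - np.sort(z)).mean()

      def best_contrast_split(x, y, z, min_leaf=20):
          best_t, best_score = None, -np.inf
          for t in np.unique(x)[1:]:
              left = x < t
              right = ~left
              if left.sum() < min_leaf or right.sum() < min_leaf:
                  continue
              score = (left.mean() * discrepancy(y[left], z[left])
                       + right.mean() * discrepancy(y[right], z[right]))
              if score > best_score:
                  best_t, best_score = t, score
          return best_t, best_score

      rng = np.random.default_rng(0)
      x = rng.uniform(size=2000)
      y = rng.normal(loc=np.where(x > 0.5, 1.0, 0.0))  # y | x shifts at x = 0.5
      z = rng.normal(size=2000)                        # z | x does not
      print(best_contrast_split(x, y, z))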

  2. Book ; Online: Contrast Trees and Distribution Boosting

    Friedman, Jerome H.

    2019  

    Abstract Often machine learning methods are applied and results reported in cases where there is little to no information concerning accuracy of the output. Simply because a computer program returns a result does not insure its validity. If decisions are to be made based on such results it is important to have some notion of their veracity. Contrast trees represent a new approach for assessing the accuracy of many types of machine learning estimates that are not amenable to standard (cross) validation methods. In situations where inaccuracies are detected boosted contrast trees can often improve performance. A special case, distribution boosting, provides an assumption free method for estimating the full probability distribution of an outcome variable given any set of joint input predictor variable values.

    Comment: 18 pages, 20 figures
    Keywords Statistics - Machine Learning ; Computer Science - Machine Learning
    Publication date 2019-12-08
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)

  3. Article: A Pliable Lasso.

    Tibshirani, Robert / Friedman, Jerome

    Journal of computational and graphical statistics : a joint publication of American Statistical Association, Institute of Mathematical Statistics, Interface Foundation of North America

    2020  Volume 29, Issue 1, Page(s) 215–225

    Abstract We propose a generalization of the lasso that allows the model coefficients to vary as a function of a general set of some prespecified modifying variables. These modifiers might be variables such as gender, age, or time. The paradigm is quite general, with each lasso coefficient modified by a sparse linear function of the modifying variables.
    Language English
    Publication date 2020-09-05
    Country of publication United States
    Document type Journal Article
    ZDB-ID 2014382-5
    ISSN (online) 1537-2715
    ISSN (print) 1061-8600
    DOI 10.1080/10618600.2019.1648271
    Data source MEDical Literature Analysis and Retrieval System OnLINE

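    Entry 3's model lets each coefficient vary with the modifiers, roughly beta_j(z) = beta_j + theta_j'z with theta_j sparse. As a rough stand-in (assuming only scikit-learn, and deliberately not the authors' algorithm), one can run a plain lasso on a design expanded with x*z interaction columns; this captures the functional form but drops the hierarchy constraint the pliable lasso enforces between theta_j and beta_j.

      # Rough stand-in for the pliable lasso: expand the design to
      # [X | X_j * Z_k interactions] and run a plain lasso.  This captures
      # beta_j(z) = beta_j + theta_j'z but, unlike the actual method, does
      # not enforce the hierarchy between theta_j and beta_j.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)
      n, p, k = 500, 10, 2
      X = rng.normal(size=(n, p))      # predictors
      Z = rng.normal(size=(n, k))      # modifiers (e.g., age, time)
      # true model: the coefficient of X[:, 0] varies with Z[:, 0]
      y = X[:, 0] * (1.0 + 2.0 * Z[:, 0]) + 0.1 * rng.normal(size=n)

      inter = np.stack([X[:, j] * Z[:, l] for j in range(p) for l in range(k)],
                       axis=1)
      design = np.hstack([X, inter])   # [main effects | interactions]

      fit = Lasso(alpha=0.05).fit(design, y)
      print("nonzero coefficients:", np.flatnonzero(fit.coef_))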

  4. Article ; Online: Representational Gradient Boosting: Backpropagation in the Space of Functions.

    Valdes, Gilmer / Friedman, Jerome H / Jiang, Fei / Gennatas, Efstathios D

    IEEE transactions on pattern analysis and machine intelligence

    2022  Volume 44, Issue 12, Page(s) 10186–10195

    Abstract The estimation of nested functions (i.e., functions of functions) is one of the central reasons for the success and popularity of machine learning. Today, artificial neural networks are the predominant class of algorithms in this area, known as representational learning. Here, we introduce Representational Gradient Boosting (RGB), a nonparametric algorithm that estimates functions with multi-layer architectures obtained using backpropagation in the space of functions. RGB does not need to assume a functional form in the nodes or output (e.g., linear models or rectified linear units), but rather estimates these transformations. RGB can be seen as an optimized stacking procedure where a meta algorithm learns how to combine different classes of functions (e.g., Neural Networks (NN) and Gradient Boosting (GB)), while building and optimizing them jointly in an attempt to compensate each other's weaknesses. This highlights a stark difference with current approaches to meta-learning that combine models only after they have been built independently. We showed that providing optimized stacking is one of the main advantages of RGB over current approaches. Additionally, due to the nested nature of RGB we also showed how it improves over GB in problems that have several high-order interactions. Finally, we investigate both theoretically and in practice the problem of recovering nested functions and the value of prior knowledge.
    MeSH term(s) Algorithms ; Neural Networks, Computer ; Machine Learning
    Language English
    Publication date 2022-11-07
    Country of publication United States
    Document type Journal Article ; Research Support, N.I.H., Extramural
    ISSN (online) 1939-3539
    DOI 10.1109/TPAMI.2021.3137715
    Data source MEDical Literature Analysis and Retrieval System OnLINE

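    Entry 4's abstract contrasts RGB with meta-learning that combines models only after building them independently. The scikit-learn baseline below shows exactly that independent-then-combine stacking of a gradient-boosting model and a neural network; it is the point of comparison, not an implementation of RGB, and the dataset and hyperparameters are arbitrary choices.

      # The baseline RGB is contrasted against: fit a gradient-boosting
      # model and a neural network independently, then let a meta-learner
      # combine their out-of-fold predictions.  RGB instead builds and
      # optimizes the components jointly.
      from sklearn.datasets import make_friedman1
      from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
      from sklearn.linear_model import RidgeCV
      from sklearn.neural_network import MLPRegressor

      X, y = make_friedman1(n_samples=1000, random_state=0)
      stack = StackingRegressor(
          estimators=[("gb", GradientBoostingRegressor(random_state=0)),
                      ("nn", MLPRegressor(hidden_layer_sizes=(64,),
                                          max_iter=2000, random_state=0))],
          final_estimator=RidgeCV(),   # meta-learner over base predictions
      )
      stack.fit(X[:800], y[:800])
      print("stacked R^2:", stack.score(X[800:], y[800:]))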

  5. Article ; Online: Discussion of “Prediction, Estimation, and Attribution” by Bradley Efron

    Friedman, Jerome / Hastie, Trevor / Tibshirani, Robert

    Journal of the American Statistical Association. 2020 Apr. 2, v. 115, no. 530, p. 665-666

    2020  

    Abstract Professor Efron has presented us with a thought-provoking paper on the relationship between prediction, estimation, and attribution in the modern era of data science. While we appreciate many of his arguments, we see more of a continuum between the old and new methodology, and the opportunity for both to improve through their synergy.
    Keywords Americans ; journals ; methodology ; prediction
    Language English
    Publication history 2020-04-02
    Extent p. 665-666.
    Publisher Taylor & Francis
    Document type Article ; Online
    ZDB-ID 2064981-2
    ISSN 1537-274X
    DOI 10.1080/01621459.2020.1762617
    Data source NAL Catalog (AGRICOLA)

  6. Book ; Online: Lockout

    Valdes, Gilmer / Arbelo, Wilmer / Interian, Yannet / Friedman, Jerome H.

    Sparse Regularization of Neural Networks

    2021  

    Abstract Many regression and classification procedures fit a parameterized function $f(x;w)$ of predictor variables $x$ to data $\{x_{i},y_{i}\}_1^N$ based on some loss criterion $L(y,f)$. Often, regularization is applied to improve accuracy by placing a constraint $P(w)\leq t$ on the values of the parameters $w$. Although efficient methods exist for finding solutions to these constrained optimization problems for all values of $t\geq0$ in the special case when $f$ is a linear function, none are available when $f$ is non-linear (e.g. Neural Networks). Here we present a fast algorithm that provides all such solutions for any differentiable function $f$ and loss $L$, and any constraint $P$ that is an increasing monotone function of the absolute value of each parameter. Applications involving sparsity inducing regularization of arbitrary Neural Networks are discussed. Empirical results indicate that these sparse solutions are usually superior to their dense counterparts in both accuracy and interpretability. This improvement in accuracy can often make Neural Networks competitive with, and sometimes superior to, state-of-the-art methods in the analysis of tabular data.
    Keywords Computer Science - Machine Learning ; Statistics - Machine Learning
    Subject/category (code) 519 ; 006
    Publication date 2021-07-15
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (life sciences selection)

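    Entry 6 solves the constrained problem min L(y, f(x; w)) subject to P(w) <= t, for all t. The common simpler surrogate, sketched below in PyTorch on assumed toy data, minimizes L + lam * P(w) at one fixed lam with P(w) = ||w||_1; it induces the same kind of sparsity but does not trace Lockout's exact solution path.

      # Penalized surrogate for the constrained problem of entry 6:
      # minimize  MSE(y, f(x; w)) + lam * ||w||_1  at one fixed lam,
      # instead of Lockout's exact path over all constraint values t.
      # Data, architecture, and lam are arbitrary illustrations.
      import torch

      torch.manual_seed(0)
      X = torch.randn(256, 20)
      y = X[:, :3].sum(dim=1, keepdim=True) + 0.1 * torch.randn(256, 1)

      net = torch.nn.Sequential(torch.nn.Linear(20, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, 1))
      opt = torch.optim.Adam(net.parameters(), lr=1e-2)
      lam = 1e-3

      for step in range(500):
          opt.zero_grad()
          mse = torch.nn.functional.mse_loss(net(X), y)
          l1 = sum(p.abs().sum() for p in net.parameters())  # P(w) = ||w||_1
          (mse + lam * l1).backward()
          opt.step()

      # fraction of near-zero first-layer weights: sparsity from the penalty
      w = net[0].weight
      print((w.abs() < 1e-3).float().mean().item())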

  7. Article: Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.

    Simon, Noah / Friedman, Jerome / Hastie, Trevor / Tibshirani, Rob

    Journal of statistical software

    2016  Volume 39, Issue 5, Page(s) 1–13

    Abstract We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ1 and ℓ2 penalties (elastic net).
    Language English
    Publication date 2016-03-29
    Country of publication United States
    Document type Journal Article
    ZDB-ID 2010240-9
    ISSN 1548-7660
    DOI 10.18637/jss.v039.i05
    Data source MEDical Literature Analysis and Retrieval System OnLINE

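    The algorithm in entry 7 is distributed as the coxnet routine of the authors' R package glmnet. A rough Python analogue (a swap-in, not the authors' code) is lifelines' CoxPHFitter, whose penalizer and l1_ratio arguments give an elastic-net penalty; sweeping penalizer approximates a short regularization path.

      # Not the authors' implementation (that is coxnet in R's glmnet):
      # lifelines' CoxPHFitter with an elastic-net penalty, swept over a
      # few penalizer values to mimic a short regularization path.
      from lifelines import CoxPHFitter
      from lifelines.datasets import load_rossi

      df = load_rossi()                # recidivism data shipped with lifelines
      for penalizer in [1.0, 0.1, 0.01]:
          cph = CoxPHFitter(penalizer=penalizer, l1_ratio=1.0)  # pure l1
          cph.fit(df, duration_col="week", event_col="arrest")
          active = int((cph.params_.abs() > 1e-6).sum())
          print(f"penalizer={penalizer:5.2f}  nonzero coefficients: {active}")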

  8. Book: The elements of statistical learning

    Hastie, Trevor / Friedman, Jerome H / Tibshirani, Robert

    data mining, inference, and prediction

    (Springer series in statistics)

    2013  

    Statement of responsibility Trevor Hastie; Robert Tibshirani; Jerome Friedman
    Series title Springer series in statistics
    Keywords Machine learning ; Statistics
    Language English
    Extent XXII, 745 p., ill., diagrams
    Edition statement 2nd ed., corrected at 7th printing
    Document type Book
    Note Bibliography: p. [699]-727
    ISBN 9780387848570 ; 9780387848587 ; 0387848576 ; 0387848584
    Data source Bundesinstitut für Risikobewertung

  9. Article: SparseNet

    Mazumder, Rahul / Friedman, Jerome H / Hastie, Trevor

    Journal of the American Statistical Association

    2011  Volume 106, Issue 495, Page(s) 1125–1138

    Abstract We address the problem of sparse selection in linear models. A number of nonconvex penalties have been proposed in the literature for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. In this article we pursue a coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a …
    Language English
    Publication date 2011
    Country of publication United States
    Document type Journal Article
    ZDB-ID 2064981-2
    ISSN (online) 1537-274X
    ISSN (print) 0162-1459 ; 0003-1291
    DOI 10.1198/jasa.2011.tm09738
    Data source MEDical Literature Analysis and Retrieval System OnLINE

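    The coordinate-descent iteration in entry 9 repeatedly applies the scalar threshold function of the chosen penalty. For the MC+ family with standardized predictors, the operator interpolates between soft and hard thresholding; the sketch below implements that operator only, under the stated standardization assumption, not the full pathwise SparseNet algorithm.

      # Scalar threshold operator of the MC+ penalty (standardized
      # predictors, gamma > 1), the building block applied coordinate-wise
      # in SparseNet.  gamma -> inf recovers soft thresholding (lasso),
      # gamma -> 1 approaches hard thresholding (best subset).
      #   S(b; lam, gamma) = 0                                   if |b| <= lam
      #                    = sign(b)(|b| - lam) / (1 - 1/gamma)  if lam < |b| <= lam*gamma
      #                    = b                                   if |b| > lam*gamma
      import numpy as np

      def mcplus_threshold(b, lam, gamma):
          a = np.abs(b)
          shrunk = np.sign(b) * (a - lam) / (1.0 - 1.0 / gamma)
          return np.where(a <= lam, 0.0, np.where(a <= lam * gamma, shrunk, b))

      b = np.linspace(-3.0, 3.0, 7)
      print(mcplus_threshold(b, lam=1.0, gamma=2.0))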

  10. Article ; Online: Reply to Nock and Nielsen: On the work of Nock and Nielsen and its relationship to the additive tree.

    Valdes, Gilmer / Luna, José Marcio / Gennatas, Efstathios D / Ungar, Lyle H / Eaton, Eric / Diffenderfer, Eric S / Jensen, Shane T / Simone, Charles B / Friedman, Jerome H / Solberg, Timothy D

    Proceedings of the National Academy of Sciences of the United States of America

    2020  Volume 117, Issue 16, Page(s) 8694–8695

    MeSH term(s) Decision Trees
    Language English
    Publication date 2020-04-07
    Country of publication United States
    Document type Letter ; Comment
    ZDB-ID 209104-5
    ISSN (online) 1091-6490
    ISSN (print) 0027-8424
    DOI 10.1073/pnas.2002399117
    Data source MEDical Literature Analysis and Retrieval System OnLINE
