LIVIVO - The Search Portal for Life Sciences


Search results

Results 1-10 of 17

  1. Article ; Online: The Inversion Problem: Why Algorithms Should Infer Mental State and Not Just Predict Behavior.

    Kleinberg, Jon / Ludwig, Jens / Mullainathan, Sendhil / Raghavan, Manish

    Perspectives on psychological science : a journal of the Association for Psychological Science

    2023, Page(s) 17456916231212138

    Abstract More and more machine learning is applied to human behavior. Increasingly, these algorithms suffer from a hidden but serious problem. It arises because they often predict one thing while hoping for another. Take a recommender system: It predicts clicks but hopes to identify preferences. Or take an algorithm that automates a radiologist: It predicts in-the-moment diagnoses while hoping to identify their reflective judgments. Psychology shows us the gaps between the objectives of such prediction tasks and the goals we hope to achieve: People can click mindlessly; experts can get tired and make systematic errors. We argue such situations are ubiquitous and call them "inversion problems": The real goal requires understanding a mental state that is not directly measured in behavioral data but must instead be inverted from the behavior. Identifying and solving these problems require new tools that draw on both behavioral and computational science.
    Language English
    Publishing date 2023-12-12
    Publishing country United States
    Document type Journal Article
    ZDB-ID 2224911-4
    ISSN (online) 1745-6924
    ISSN (print) 1745-6916
    DOI 10.1177/17456916231212138
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
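
    The "inversion" the abstract describes can be made concrete with a small Bayes calculation: clicks are the measured behavior, preference is the latent mental state, and mindless clicking is the noise separating them. The prior and click probabilities below are illustrative assumptions, not values from the paper; a minimal sketch:

```python
# Toy inversion: infer the latent preference from an observed click.
# All probabilities here are illustrative assumptions.

def p_prefers_given_click(prior=0.3, p_click_if_prefers=0.6,
                          p_click_if_not=0.2):
    """Bayes rule: P(prefers | click).

    Mindless clicking means even non-preferred items get clicked
    sometimes (p_click_if_not > 0), so a click is only weak
    evidence of preference.
    """
    num = prior * p_click_if_prefers
    den = num + (1 - prior) * p_click_if_not
    return num / den

print(f"P(prefers | click) = {p_prefers_given_click():.2f}")  # ~0.56
```

    Under these numbers a click moves the preference probability from 0.30 to about 0.56: informative, but far from the certainty a click-trained objective implicitly assumes.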

  2. Book ; Online: Distinguishing the Indistinguishable

    Alur, Rohan / Raghavan, Manish / Shah, Devavrat

    Human Expertise in Algorithmic Prediction

    2024  

    Abstract We introduce a novel framework for incorporating human expertise into algorithmic predictions. Our approach focuses on the use of human judgment to distinguish inputs which 'look the same' to any feasible predictive algorithm. We argue that this framing clarifies the problem of human/AI collaboration in prediction tasks, as experts often have access to information -- particularly subjective information -- which is not encoded in the algorithm's training data. We use this insight to develop a set of principled algorithms for selectively incorporating human feedback only when it improves the performance of any feasible predictor. We find empirically that although algorithms often outperform their human counterparts on average, human judgment can significantly improve algorithmic predictions on specific instances (which can be identified ex ante). In an X-ray classification task, we find that this subset constitutes nearly 30% of the patient population. Our approach provides a natural way of uncovering this heterogeneity and thus enabling effective human-AI collaboration.
    Keywords Computer Science - Machine Learning ; Computer Science - Artificial Intelligence ; Computer Science - Human-Computer Interaction
    Subject code 006
    Publishing date 2024-02-01
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
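
    A loose numerical sketch of the framing above: group inputs the algorithm scores nearly identically, then check whether human judgment still separates outcomes inside those groups. The data generation, score binning, and lift statistic here are my assumptions for illustration, not the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: the model sees feature x; the human also sees a
# private signal h that is absent from the training data.
n = 4000
x = rng.normal(size=n)
h = rng.normal(size=n)
y = (x + h + rng.normal(scale=0.5, size=n) > 0).astype(int)

algo_score = 1 / (1 + np.exp(-2 * x))    # stand-in for a trained model
human_pred = (x + h > 0).astype(int)     # human uses both signals

# Within a score bin, inputs 'look the same' to the algorithm; if
# outcomes still differ by human prediction, the human adds information.
bins = np.digitize(algo_score, np.linspace(0, 1, 11))
for b in np.unique(bins):
    idx = bins == b
    pos = y[idx][human_pred[idx] == 1]
    neg = y[idx][human_pred[idx] == 0]
    if len(pos) < 20 or len(neg) < 20:
        continue
    print(f"bin {b:2d}: n={idx.sum():4d}, "
          f"outcome lift from human = {pos.mean() - neg.mean():+.2f}")
```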

  3. Article ; Online: Algorithmic monoculture and social welfare.

    Kleinberg, Jon / Raghavan, Manish

    Proceedings of the National Academy of Sciences of the United States of America

    2021, Volume 118, Issue 22

    Abstract As algorithms are increasingly applied to screen applicants for high-stakes decisions in employment, lending, and other domains, concerns have been raised about the effects of algorithmic monoculture, in which many decision-makers all rely on the same algorithm. This concern invokes analogies to agriculture, where a monocultural system runs the risk of severe harm from unexpected shocks. Here, we show that the dangers of algorithmic monoculture run much deeper, in that monocultural convergence on a single algorithm by a group of decision-making agents, even when the algorithm is more accurate for any one agent in isolation, can reduce the overall quality of the decisions being made by the full collection of agents. Unexpected shocks are therefore not needed to expose the risks of monoculture; it can hurt accuracy even under "normal" operations and even for algorithms that are more accurate when used by only a single decision-maker. Our results rely on minimal assumptions and involve the development of a probabilistic framework for analyzing systems that use multiple noisy estimates of a set of alternatives.
    MeSH term(s) Algorithms ; Culture ; Humans ; Models, Theoretical ; Social Welfare
    Language English
    Publishing date 2021-05-25
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't ; Research Support, U.S. Gov't, Non-P.H.S.
    ZDB-ID 209104-5
    ISSN (online) 1091-6490
    ISSN (print) 0027-8424
    DOI 10.1073/pnas.2018340118
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
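
    The mechanism in the abstract can be illustrated with a small Monte Carlo experiment in a two-firm version of the setting: both firms hire in sequence, ranking candidates either by one shared (more accurate) noisy estimate or by two independent (noisier) ones. The Gaussian noise model and all parameters below are illustrative choices, not the paper's analysis, and which regime yields higher total quality depends on them:

```python
import numpy as np

rng = np.random.default_rng(0)

def total_hired_quality(true_q, noise_a, noise_b):
    """Firm A hires its top-ranked candidate; firm B hires its
    top-ranked remaining candidate. Returns total true quality hired."""
    rank_a = np.argsort(-(true_q + noise_a))
    rank_b = np.argsort(-(true_q + noise_b))
    pick_a = rank_a[0]
    pick_b = next(i for i in rank_b if i != pick_a)
    return true_q[pick_a] + true_q[pick_b]

def simulate(n=10, sigma_shared=0.5, sigma_own=1.0, trials=20000):
    mono = diverse = 0.0
    for _ in range(trials):
        q = rng.normal(size=n)
        shared = rng.normal(scale=sigma_shared, size=n)
        # Monoculture: both firms rank by the SAME, more accurate estimate.
        mono += total_hired_quality(q, shared, shared)
        # Diversity: each firm uses its own, noisier independent estimate.
        diverse += total_hired_quality(
            q, rng.normal(scale=sigma_own, size=n),
               rng.normal(scale=sigma_own, size=n))
    return mono / trials, diverse / trials

print("avg total quality (monoculture, diverse):", simulate())
```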

  4. Book ; Online: The Right to be an Exception to a Data-Driven Rule

    Cen, Sarah H. / Raghavan, Manish

    2022  

    Abstract Data-driven tools are increasingly used to make consequential decisions. They have begun to advise employers on which job applicants to interview, judges on which defendants to grant bail, lenders on which homeowners to give loans, and more. In such settings, different data-driven rules result in different decisions. The problem is: to every data-driven rule, there are exceptions. While a data-driven rule may be appropriate for some, it may not be appropriate for all. As data-driven decisions become more common, there are cases in which it becomes necessary to protect the individuals who, through no fault of their own, are the data-driven exceptions. At the same time, it is impossible to scrutinize every one of the increasing number of data-driven decisions, raising the question: When and how should data-driven exceptions be protected? In this piece, we argue that individuals have the right to be an exception to a data-driven rule. That is, the presumption should not be that a data-driven rule--even one with high accuracy--is suitable for an arbitrary decision-subject of interest. Rather, a decision-maker should apply the rule only if they have exercised due care and due diligence (relative to the risk of harm) in excluding the possibility that the decision-subject is an exception to the data-driven rule. In some cases, the risk of harm may be so low that only cursory consideration is required. Although applying due care and due diligence is meaningful in human-driven decision contexts, it is unclear what it means for a data-driven rule to do so. We propose that determining whether a data-driven rule is suitable for a given decision-subject requires the consideration of three factors: individualization, uncertainty, and harm. We unpack this right in detail, providing a framework for assessing data-driven rules and describing what it would mean to invoke the right in practice.

    Comment: 22 pages, 0 figures
    Keywords Computer Science - Computers and Society
    Subject code 330
    Publishing date 2022-12-28
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
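
    To make the three factors named above concrete, here is a toy due-care check; the 0-1 scoring, the thresholds, and the combination rule are entirely my own illustrative assumptions, not the authors' framework:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    individualization: float  # 0-1: how much the rule used this person's data
    uncertainty: float        # 0-1: model uncertainty for this person
    harm: float               # 0-1: severity of a wrong decision

def requires_human_review(d: Decision, budget: float = 0.25) -> bool:
    """Escalate when risk-adjusted doubt that the subject fits the rule
    exceeds a care budget (an invented combination rule)."""
    doubt = (1 - d.individualization) * d.uncertainty
    return doubt * d.harm > budget

print(requires_human_review(Decision(0.2, 0.8, 0.9)))  # high-stakes outlier -> True
print(requires_human_review(Decision(0.9, 0.1, 0.9)))  # well-covered case -> False
```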

  5. Book ; Online: Content Moderation and the Formation of Online Communities

    Dwork, Cynthia / Hays, Chris / Kleinberg, Jon / Raghavan, Manish

    A Theoretical Framework

    2023  

    Abstract We study the impact of content moderation policies in online communities. In our theoretical model, a platform chooses a content moderation policy and individuals choose whether or not to participate in the community according to the fraction of user content that aligns with their preferences. The effects of content moderation, at first blush, might seem obvious: it restricts speech on a platform. However, when user participation decisions are taken into account, its effects can be more subtle -- and counter-intuitive. For example, our model can straightforwardly demonstrate how moderation policies may increase participation and diversify content available on the platform. In our analysis, we explore a rich set of interconnected phenomena related to content moderation in online communities. We first characterize the effectiveness of a natural class of moderation policies for creating and sustaining stable communities. Building on this, we explore how resource-limited or ideological platforms might set policies, how communities are affected by differing levels of personalization, and how platforms compete with one another. Our model provides a vocabulary and mathematically tractable framework for analyzing platform decisions about content moderation.

    Comment: 46 pages, 10 figures
    Keywords Computer Science - Data Structures and Algorithms ; Computer Science - Computers and Society ; Computer Science - Social and Information Networks
    Subject code 303
    Publishing date 2023-10-16
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
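
    A toy fixed-point simulation loosely in the spirit of the model described above: users participate when enough of the (moderated) content aligns with their preferences, and content in turn comes from participants. Uniform user positions, window-style moderation, and the distance-based alignment rule are all my illustrative assumptions:

```python
import numpy as np

def stable_community(positions, window, align_dist=0.2, tau=0.3, iters=100):
    """Iterate participation to a fixed point.

    positions: users' ideal points in [0, 1]; each participant posts
      content at their own position.
    window: (lo, hi) moderation policy; content outside it is removed.
    A user participates if at least tau of the visible content lies
    within align_dist of their position.
    """
    active = np.ones(len(positions), dtype=bool)
    for _ in range(iters):
        content = positions[active]
        content = content[(content >= window[0]) & (content <= window[1])]
        if content.size == 0:
            return np.zeros(len(positions), dtype=bool)
        aligned = np.array([np.mean(np.abs(content - p) <= align_dist)
                            for p in positions])
        new_active = aligned >= tau
        if np.array_equal(new_active, active):
            break
        active = new_active
    return active

rng = np.random.default_rng(1)
users = rng.uniform(size=500)
print("participation, no moderation:", stable_community(users, (0.0, 1.0)).mean())
print("participation, moderated:    ", stable_community(users, (0.3, 0.7)).mean())
```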

  6. Book ; Online: Algorithmic Monoculture and Social Welfare

    Kleinberg, Jon / Raghavan, Manish

    2021  

    Abstract As algorithms are increasingly applied to screen applicants for high-stakes decisions in employment, lending, and other domains, concerns have been raised about the effects of algorithmic monoculture, in which many decision-makers all rely on the same algorithm. This concern invokes analogies to agriculture, where a monocultural system runs the risk of severe harm from unexpected shocks. Here we show that the dangers of algorithmic monoculture run much deeper, in that monocultural convergence on a single algorithm by a group of decision-making agents, even when the algorithm is more accurate for any one agent in isolation, can reduce the overall quality of the decisions being made by the full collection of agents. Unexpected shocks are therefore not needed to expose the risks of monoculture; it can hurt accuracy even under "normal" operations, and even for algorithms that are more accurate when used by only a single decision-maker. Our results rely on minimal assumptions, and involve the development of a probabilistic framework for analyzing systems that use multiple noisy estimates of a set of alternatives.

    Comment: A version of this paper appears in Proceedings of the National Academy of Sciences at https://www.pnas.org/content/118/22/e2018340118
    Keywords Computer Science - Computer Science and Game Theory ; Computer Science - Computers and Society ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2021-01-14
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  7. Book ; Online: Simplistic Collection and Labeling Practices Limit the Utility of Benchmark Datasets for Twitter Bot Detection

    Hays, Chris / Schutzman, Zachary / Raghavan, Manish / Walk, Erin / Zimmer, Philipp

    2023  

    Abstract Accurate bot detection is necessary for the safety and integrity of online platforms. It is also crucial for research on the influence of bots in elections, the spread of misinformation, and financial market manipulation. Platforms deploy infrastructure to flag or remove automated accounts, but their tools and data are not publicly available. Thus, the public must rely on third-party bot detection. These tools employ machine learning and often achieve near-perfect performance for classification on existing datasets, suggesting bot detection is accurate, reliable, and fit for use in downstream applications. We provide evidence that this is not the case and show that high performance is attributable to limitations in dataset collection and labeling rather than sophistication of the tools. Specifically, we show that simple decision rules -- shallow decision trees trained on a small number of features -- achieve near-state-of-the-art performance on most available datasets and that bot detection datasets, even when combined together, do not generalize well to out-of-sample datasets. Our findings reveal that predictions are highly dependent on each dataset's collection and labeling procedures rather than fundamental differences between bots and humans. These results have important implications for both transparency in sampling and labeling procedures and potential biases in research using existing bot detection tools for pre-processing.

    Comment: 10 pages, 6 figures; updated citation, clarified language
    Keywords Computer Science - Machine Learning ; Computer Science - Social and Information Networks
    Subject code 006
    Publishing date 2023-01-17
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
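
    The headline finding above is easy to reproduce in spirit: fit a depth-two decision tree on a few account-level features. The synthetic data below is constructed so that simple thresholds separate the classes, mimicking the shortcut structure the paper reports in real benchmarks; it is an illustration, not a reproduction of their results:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a bot-detection benchmark: three account-level
# features whose distributions differ between classes by construction.
rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, size=n)                      # 1 = bot
followers = rng.lognormal(3 + 1.5 * (1 - y), 1.0)   # humans: more followers
tweets_per_day = rng.lognormal(1 + 1.0 * y, 0.8)    # bots: post more
age_days = rng.lognormal(6 - 1.0 * y, 0.7)          # bots: newer accounts
X = np.column_stack([followers, tweets_per_day, age_days])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A "simple decision rule": a shallow tree on a handful of features.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
print(f"held-out AUC of a depth-2 tree: {auc:.3f}")
```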

  8. Book ; Online: The Challenge of Understanding What Users Want

    Kleinberg, Jon / Mullainathan, Sendhil / Raghavan, Manish

    Inconsistent Preferences and Engagement Optimization

    2022  

    Abstract Online platforms have a wealth of data, run countless experiments and use industrial-scale algorithms to optimize user experience. Despite this, many users seem to regret the time they spend on these platforms. One possible explanation is misaligned incentives: platforms are not optimizing for user happiness. We suggest the problem runs deeper, transcending the specific incentives of any particular platform, and instead stems from a mistaken revealed-preference assumption: To understand what users want, platforms look at what users do. Yet research has demonstrated, and personal experience affirms, that we often make choices in the moment that are inconsistent with what we actually want. In this work, we develop a model of media consumption where users have inconsistent preferences. We consider an altruistic platform which simply wants to maximize user utility, but only observes user engagement. We show how our model of users' preference inconsistencies produces phenomena that are familiar from everyday experience, but difficult to capture in traditional user interaction models. A key ingredient in our model is a formulation for how platforms determine what to show users: they optimize over a large set of potential content (the content manifold) parametrized by underlying features of the content. Whether improving engagement improves user welfare depends on the direction of movement in the content manifold: for certain directions of change, increasing engagement makes users less happy, while in other directions, increasing engagement makes users happier. We characterize the structure of content manifolds for which increasing engagement fails to increase user utility. By linking these effects to abstractions of platform design choices, our model thus creates a theoretical framework and vocabulary in which to explore interactions between design, behavioral science, and social media.
    Keywords Computer Science - Social and Information Networks ; Computer Science - Computers and Society ; Computer Science - Computer Science and Game Theory
    Subject code 005
    Publishing date 2022-02-23
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
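
    The key geometric point in the abstract -- that some engagement-increasing directions on the content manifold reduce welfare -- can be seen in a one-dimensional toy. The specific engagement and utility curves are invented for illustration; the paper's model is far more general:

```python
import numpy as np

# 1-D "content manifold": t parametrizes content (say, mild -> sensational).
t = np.linspace(0, 1, 101)
engagement = t                 # clicks rise monotonically in t (assumption)
utility = 4 * t * (1 - t)      # user welfare peaks at t = 0.5 (assumption)

# Where does moving toward higher engagement reduce utility?
d_eng = np.gradient(engagement, t)
d_util = np.gradient(utility, t)
hurts = (d_eng > 0) & (d_util < 0)
print("engagement-increasing moves reduce utility for t in "
      f"[{t[hurts].min():.2f}, {t[hurts].max():.2f}]")
```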

  9. Article ; Online: Human bias in algorithm design.

    Morewedge, Carey K / Mullainathan, Sendhil / Naushan, Haaya F / Sunstein, Cass R / Kleinberg, Jon / Raghavan, Manish / Ludwig, Jens O

    Nature human behaviour

    2023, Volume 7, Issue 11, Page(s) 1822–1824

    MeSH term(s) Humans ; Bias ; Algorithms
    Language English
    Publishing date 2023-11-20
    Publishing country England
    Document type Journal Article
    ISSN (online) 2397-3374
    DOI 10.1038/s41562-023-01724-4
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)

  10. Book ; Online: Auditing for Human Expertise

    Alur, Rohan / Laine, Loren / Li, Darrick K. / Raghavan, Manish / Shah, Devavrat / Shung, Dennis

    2023  

    Abstract High-stakes prediction tasks (e.g., patient diagnosis) are often handled by trained human experts. A common source of concern about automation in these settings is that experts may exercise intuition that is difficult to model and/or have access to information (e.g., conversations with a patient) that is simply unavailable to a would-be algorithm. This raises the natural question of whether human experts add value which could not be captured by an algorithmic predictor. We develop a statistical framework under which we can pose this question as a natural hypothesis test. Indeed, as our framework highlights, detecting human expertise is more subtle than simply comparing the accuracy of expert predictions to those made by a particular learning algorithm. Instead, we propose a simple procedure which tests whether expert predictions are statistically independent of the outcomes of interest after conditioning on the available inputs ('features'). A rejection of our test thus suggests that human experts may add value to any algorithm trained on the available data, and has direct implications for whether human-AI 'complementarity' is achievable in a given prediction task. We highlight the utility of our procedure using admissions data collected from the emergency department of a large academic hospital system, where we show that physicians' admit/discharge decisions for patients with acute gastrointestinal bleeding (AGIB) appear to be incorporating information that is not available to a standard algorithmic screening tool. This is despite the fact that the screening tool is arguably more accurate than physicians' discretionary decisions, highlighting that -- even absent normative concerns about accountability or interpretability -- accuracy is insufficient to justify algorithmic automation.

    Comment: 30 pages, 10 figures. To appear in the proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
    Keywords Statistics - Machine Learning ; Computer Science - Computers and Society ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-06-02
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
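
    A minimal permutation-test sketch in the spirit of the procedure described above: shuffle expert predictions within groups of feature-identical cases, which preserves the expert's dependence on the features while destroying any additional signal. The discrete feature bins, agreement statistic, and synthetic data are simplifying assumptions, not the paper's exact test:

```python
import numpy as np

rng = np.random.default_rng(0)

def expertise_pvalue(x_bins, expert, outcome, n_perm=2000):
    """Test whether `expert` is independent of `outcome` given features.

    Permuting expert predictions within feature groups preserves
    P(expert | x) while breaking any signal beyond the features.
    """
    stat = lambda e: np.mean(e == outcome)   # expert/outcome agreement
    observed = stat(expert)
    perms = np.empty(n_perm)
    for i in range(n_perm):
        e = expert.copy()
        for b in np.unique(x_bins):
            idx = np.flatnonzero(x_bins == b)
            e[idx] = rng.permutation(e[idx])
        perms[i] = stat(e)
    return float(np.mean(perms >= observed))

# Synthetic demo: the expert observes a private signal the features miss.
n = 2000
x_bins = rng.integers(0, 5, size=n)
private = rng.integers(0, 2, size=n)
base = (x_bins >= 3).astype(int)
outcome = base ^ (private & (rng.random(n) < 0.7))
expert = base ^ private
print("p-value:", expertise_pvalue(x_bins, expert, outcome))
```

    A small p-value indicates the expert's predictions carry information about outcomes beyond what the binned features explain, which is the paper's criterion for expertise an algorithm trained on those features could not capture.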
