LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–6 of 6

  1. Book ; Online: Generative Expressive Robot Behaviors using Large Language Models

    Mahadevan, Karthik / Chien, Jonathan / Brown, Noah / Xu, Zhuo / Parada, Carolina / Xia, Fei / Zeng, Andy / Takayama, Leila / Sadigh, Dorsa

    2024  

    Abstract People employ expressive behaviors to effectively communicate and coordinate their actions with others, such as nodding to acknowledge a person glancing at them or saying "excuse me" to pass people in a busy corridor. We would like robots to also demonstrate expressive behaviors in human-robot interaction. Prior work proposes rule-based methods that struggle to scale to new communication modalities or social situations, while data-driven methods require specialized datasets for each social situation the robot is used in. We propose to leverage the rich social context available from large language models (LLMs) and their ability to generate motion based on instructions or user preferences, to generate expressive robot motion that is adaptable and composable, building upon each other. Our approach utilizes few-shot chain-of-thought prompting to translate human language instructions into parametrized control code using the robot's available and learned skills. Through user studies and simulation experiments, we demonstrate that our approach produces behaviors that users found to be competent and easy to understand. Supplementary material can be found at https://generative-expressive-motion.github.io/.
    Keywords Computer Science - Robotics
    Subject code 629
    Publishing date 2024-01-26
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
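    The abstract above describes translating language instructions into parametrized control code via few-shot chain-of-thought prompting. A minimal sketch of how such a prompt might be assembled is below; the example skill calls, parameter names, and reasoning text are invented for illustration and are not the authors' actual implementation.

```python
# Hypothetical few-shot chain-of-thought prompt assembly for expressive
# robot behavior. Each example pairs an instruction with intermediate
# reasoning and parametrized control code using assumed robot skills.

FEW_SHOT_EXAMPLES = [
    {
        "instruction": "Acknowledge a person walking by.",
        "reasoning": "A nod is a brief, legible acknowledgement; "
                     "tilt the head down slightly, then back up.",
        "code": "robot.tilt_head(pitch_deg=-15)\nrobot.tilt_head(pitch_deg=0)",
    },
]

def build_prompt(instruction: str) -> str:
    """Assemble a few-shot chain-of-thought prompt: worked examples first,
    then the new instruction, ending where the LLM should continue."""
    parts = []
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Instruction: {ex['instruction']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"Code:\n{ex['code']}\n"
        )
    parts.append(f"Instruction: {instruction}\nReasoning:")
    return "\n".join(parts)

prompt = build_prompt("Say 'excuse me' and move aside in a busy corridor.")
print(prompt)
```

    In the paper's framing, the model's completion (reasoning followed by code) would then be executed against the robot's available and learned skills.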

  2. Book ; Online: A Protocol for Validating Social Navigation Policies

    Pirk, Sören / Lee, Edward / Xiao, Xuesu / Takayama, Leila / Francis, Anthony / Toshev, Alexander

    2022  

    Abstract Enabling socially acceptable behavior for situated agents is a major goal of recent robotics research. Robots should not only operate safely around humans, but also abide by complex social norms. A key challenge for developing socially-compliant policies is measuring the quality of their behavior. Social behavior is enormously complex, making it difficult to create reliable metrics to gauge the performance of algorithms. In this paper, we propose a protocol for social navigation benchmarking that defines a set of canonical social navigation scenarios and an in-situ metric for evaluating performance on these scenarios using questionnaires. Our experiments show this protocol is realistic, scalable, and repeatable across runs and physical spaces. Our protocol can be replicated verbatim or it can be used to define a social navigation benchmark for novel scenarios. Our goal is to introduce a protocol for benchmarking social scenarios that is homogeneous and comparable.

    Comment: IEEE International Conference on Robotics and Automation; Workshop: Social Robot Navigation: Advances and Evaluation
    Keywords Computer Science - Robotics ; Computer Science - Human-Computer Interaction
    Subject code 629
    Publishing date 2022-04-11
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  3. Book ; Online: Remote Observation of Field Work on the Farm

    Ju, Wendy / Mandel, Ilan / Weatherwax, Kevin / Takayama, Leila / Martelaro, Nikolas / Willett, Denis

    2021  

    Abstract Travel restrictions and social distancing measures make it difficult to observe, monitor or manage physical fieldwork. We describe research in progress that applies technologies for real-time remote observation and conversation in on-road vehicles to observe field work on a farm. We collaborated on a pilot deployment of this project at Kreher Eggs in upstate New York. We instrumented a tractor with equipment to remotely observe and interview farm workers performing vehicle-related work. This work was initially undertaken to allow sustained observation of field work over longer periods of time from geographically distant locales; given our current situation, this work provides a case study in how to perform observational research when geographic and bodily distance have become the norm. We discuss our experiences and provide some preliminary insights for others looking to conduct remote observational research in the field.

    Comment: Presented at Microsoft Future of Work Symposium, August 3-5, 2020
    Keywords Computer Science - Computers and Society ; Computer Science - Human-Computer Interaction
    Publishing date 2021-03-04
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  4. Book ; Online: Robots That Ask For Help: Uncertainty Alignment for Large Language Model Planners

    Ren, Allen Z. / Dixit, Anushri / Bodrova, Alexandra / Singh, Sumeet / Tu, Stephen / Brown, Noah / Xu, Peng / Takayama, Leila / Xia, Fei / Varley, Jake / Xu, Zhenjia / Sadigh, Dorsa / Zeng, Andy / Majumdar, Anirudha

    2023  

    Abstract Large language models (LLMs) exhibit a wide range of promising capabilities -- from step-by-step planning to commonsense reasoning -- that may provide utility for robots, but remain prone to confidently hallucinated predictions. In this work, we present KnowNo, which is a framework for measuring and aligning the uncertainty of LLM-based planners such that they know when they don't know and ask for help when needed. KnowNo builds on the theory of conformal prediction to provide statistical guarantees on task completion while minimizing human help in complex multi-step planning settings. Experiments across a variety of simulated and real robot setups that involve tasks with different modes of ambiguity (e.g., from spatial to numeric uncertainties, from human preferences to Winograd schemas) show that KnowNo performs favorably over modern baselines (which may involve ensembles or extensive prompt tuning) in terms of improving efficiency and autonomy, while providing formal assurances. KnowNo can be used with LLMs out of the box without model-finetuning, and suggests a promising lightweight approach to modeling uncertainty that can complement and scale with the growing capabilities of foundation models. Website: https://robot-help.github.io

    Comment: Conference on Robot Learning (CoRL) 2023, Oral Presentation
    Keywords Computer Science - Robotics ; Computer Science - Artificial Intelligence ; Statistics - Applications
    Subject code 629
    Publishing date 2023-07-04
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
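    The abstract above builds on conformal prediction: from calibration data, compute a nonconformity threshold with a finite-sample coverage guarantee, then form a prediction set over candidate plans and ask for help whenever the set is not a singleton. A simplified sketch of that idea follows; the numbers, option names, and confidence values are invented for illustration and do not reproduce KnowNo's actual pipeline.

```python
# Illustrative conformal-prediction sketch behind "asking for help":
# keep every plan the model cannot confidently rule out; if more than
# one survives, the planner defers to the human.
import math

def conformal_quantile(calib_scores, alpha):
    """Threshold from calibration nonconformity scores so the prediction
    set contains the true option with probability >= 1 - alpha."""
    n = len(calib_scores)
    k = math.ceil((n + 1) * (1 - alpha))   # finite-sample correction
    return sorted(calib_scores)[min(k, n) - 1]

def prediction_set(option_confidences, qhat):
    """Keep each option whose nonconformity (1 - confidence) is <= qhat."""
    return [opt for opt, c in option_confidences.items() if 1 - c <= qhat]

# Calibration: nonconformity of the correct option on held-out tasks
calib = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.45, 0.50, 0.60]
qhat = conformal_quantile(calib, alpha=0.15)

# Test time: LLM confidences over candidate plans for an ambiguous task
options = {"place on table": 0.55, "place on shelf": 0.52, "hand to user": 0.10}
plans = prediction_set(options, qhat)
if len(plans) > 1:
    print("Ambiguous -- ask the human to choose among:", plans)
```

    The appeal, as the abstract notes, is that this wraps around an off-the-shelf LLM: only the calibration scores are needed, with no model fine-tuning.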

  5. Article: Assessing the effectiveness of interactive media in improving drowsy driver safety.

    Takayama, Leila / Nass, Clifford

    Human factors

    2008  Volume 50, Issue 5, Page(s) 772–781

    Abstract Objective: This study investigated the possibility of using interactive media to help drowsy drivers wake up, thereby enabling them to drive more safely.
    Background: Many studies have investigated the negative impacts of driver drowsiness and distraction in cars, separately. However, none has studied the potentially positive effects of slightly interactive media for rousing drowsy drivers to help them drive more safely.
    Method: In a 2 (drowsy vs. nondrowsy drivers) x 2 (passive vs. slightly interactive voice-based media) x 2 (monotonous vs. varied driving courses) study, participants (N = 79) used a driving simulator while interacting with a language-learning system that was either passive (i.e., drivers merely listen to phrases in another language) or slightly interactive (i.e., drivers verbally repeat those phrases).
    Results: (a) Drowsy drivers preferred and drove more safely with slightly interactive media, as compared with passive media. (b) Interactive media did not harm nondrowsy driver safety. (c) Drivers drove more safely on varied driving courses than on monotonous ones.
    Conclusion: Slightly interactive media hold the potential to improve the performance of drowsy drivers on the primary task of driving safely.
    Application: Applications include the design of interactive systems that increase user alertness, safety, and engagement on primary tasks, as opposed to systems that take attentional resources away from the primary task of driving.
    MeSH term(s) Accidents, Traffic/prevention & control ; Adolescent ; Adult ; Automobile Driving ; Female ; Humans ; Male ; Multimedia ; Sleep Stages ; User-Computer Interface ; Young Adult
    Language English
    Publishing date 2008-10
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 212725-8
    ISSN (online) 1547-8181
    ISSN (print) 0018-7208
    DOI 10.1518/001872008X312341
    Database MEDical Literature Analysis and Retrieval System OnLINE

  6. Article ; Online: Communication and knowledge sharing in human-robot interaction and learning from demonstration.

    Koenig, Nathan / Takayama, Leila / Matarić, Maja

    Neural networks : the official journal of the International Neural Network Society

    2010  Volume 23, Issue 8-9, Page(s) 1104–1112

    Abstract Inexpensive personal robots will soon become available to a large portion of the population. Currently, most consumer robots are relatively simple single-purpose machines or toys. In order to be cost effective and thus widely accepted, robots will need to be able to accomplish a wide range of tasks in diverse conditions. Learning these tasks from demonstrations offers a convenient mechanism to customize and train a robot by transferring task related knowledge from a user to a robot. This avoids the time-consuming and complex process of manual programming. The way in which the user interacts with a robot during a demonstration plays a vital role in terms of how effectively and accurately the user is able to provide a demonstration. Teaching through demonstrations is a social activity, one that requires bidirectional communication between a teacher and a student. The work described in this paper studies how the user's visual observation of the robot and the robot's auditory cues affect the user's ability to teach the robot in a social setting. Results show that auditory cues provide important knowledge about the robot's internal state, while visual observation of a robot can hinder an instructor due to incorrect mental models of the robot and distractions from the robot's movements.
    MeSH term(s) Acoustic Stimulation ; Adult ; Artificial Intelligence ; Communication ; Computer Graphics ; Cues ; Feedback, Psychological ; Female ; Humans ; Knowledge ; Learning/physiology ; Male ; Middle Aged ; Neuropsychological Tests ; Photic Stimulation ; Robotics ; Social Environment ; Teaching ; User-Computer Interface ; Young Adult
    Language English
    Publishing date 2010-10
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't ; Research Support, U.S. Gov't, Non-P.H.S.
    ZDB-ID 740542-x
    ISSN (online) 1879-2782
    ISSN (print) 0893-6080
    DOI 10.1016/j.neunet.2010.06.005
    Database MEDical Literature Analysis and Retrieval System OnLINE
