LIVIVO - The Search Portal for Life Sciences

Search results

Results 1-5 of 5

  1. Article ; Online: The minimal complexity of adapting agents increases with fitness.

    Joshi, Nikhil J / Tononi, Giulio / Koch, Christof

    PLoS computational biology

    2013  Volume 9, Issue 7, Page(s) e1003111

    Abstract What is the relationship between the complexity and the fitness of evolved organisms, whether natural or artificial? It has been asserted, primarily based on empirical data, that the complexity of plants and animals increases as their fitness within a particular environment increases via evolution by natural selection. We simulate the evolution of the brains of simple organisms living in a planar maze that they have to traverse as rapidly as possible. Their connectome evolves over 10,000s of generations. We evaluate their circuit complexity, using four information-theoretical measures, including one that emphasizes the extent to which any network is an irreducible entity. We find that their minimal complexity increases with their fitness.
    MeSH term(s) Adaptation, Physiological ; Animals ; Biological Evolution ; Selection, Genetic
    Language English
    Publishing date 2013-07-11
    Publishing country United States
    Document type Journal Article ; Research Support, Non-U.S. Gov't
    ZDB-ID 2193340-6
    ISSN (online) 1553-7358
    ISSN (print) 1553-734X
    DOI 10.1371/journal.pcbi.1003111
    Database MEDLINE (Medical Literature Analysis and Retrieval System Online)
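
The study in result 1 evolves simple agents that must traverse a planar maze as quickly as possible and then relates their circuit complexity to fitness. As a rough illustration of that setup only, here is a minimal selection loop in Python; the maze, the lookup-table controller, and all parameters are assumptions made for this sketch, not the authors' model or their complexity measures.

```python
# Illustrative sketch only: a toy selection loop in the spirit of the paper's
# setup (agents evolved to traverse a maze quickly). None of this is the
# authors' code; the maze, controller, and GA parameters are invented here.
import random

MAZE = ["#########",
        "#S......#",
        "#.#####.#",
        "#.....#G#",
        "#########"]
START, GOAL = (1, 1), (3, 7)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def fitness(genome, max_steps=60):
    """Fewer steps to the goal -> higher fitness; genome is a move-preference table."""
    r, c = START
    for step in range(max_steps):
        if (r, c) == GOAL:
            return max_steps - step          # reward fast traversal
        prefs = genome[(r * len(MAZE[0]) + c) % len(genome)]
        for m in sorted(range(4), key=lambda i: -prefs[i]):
            nr, nc = r + MOVES[m][0], c + MOVES[m][1]
            if MAZE[nr][nc] != "#":
                r, c = nr, nc
                break
    return 0

def evolve(pop_size=50, genome_len=45, generations=200):
    """Keep the fitter half each generation and refill with mutated copies."""
    pop = [[[random.random() for _ in range(4)] for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [[[w + random.gauss(0, 0.1) for w in cell] for cell in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best))   # toy run; may or may not reach the goal
```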

  2. Book ; Online: RoboVQA

    Sermanet, Pierre / Ding, Tianli / Zhao, Jeffrey / Xia, Fei / Dwibedi, Debidatta / Gopalakrishnan, Keerthana / Chan, Christine / Dulac-Arnold, Gabriel / Maddineni, Sharath / Joshi, Nikhil J / Florence, Pete / Han, Wei / Baruch, Robert / Lu, Yao / Mirchandani, Suvir / Xu, Peng / Sanketi, Pannag / Hausman, Karol / Shafran, Izhak /
    Ichter, Brian / Cao, Yuan

    Multimodal Long-Horizon Reasoning for Robotics

    2023  

    Abstract We present a scalable, bottom-up and intrinsically diverse data collection scheme that can be used for high-level reasoning with long and medium horizons and that has 2.2x higher throughput compared to traditional narrow top-down step-by-step collection. We collect realistic data by performing any user requests within the entirety of 3 office buildings and using multiple robot and human embodiments. With this data, we show that models trained on all embodiments perform better than ones trained on the robot data only, even when evaluated solely on robot episodes. We find that for a fixed collection budget it is beneficial to take advantage of cheaper human collection along with robot collection. We release a large and highly diverse (29,520 unique instructions) dataset dubbed RoboVQA containing 829,502 (video, text) pairs for robotics-focused visual question answering. We also demonstrate how evaluating real robot experiments with an intervention mechanism enables performing tasks to completion, making it deployable with human oversight even if imperfect while also providing a single performance metric. We demonstrate a single video-conditioned model named RoboVQA-VideoCoCa trained on our dataset that is capable of performing a variety of grounded high-level reasoning tasks in broad realistic settings with a cognitive intervention rate 46% lower than the zero-shot state of the art visual language model (VLM) baseline and is able to guide real robots through long-horizon tasks. The performance gap with zero-shot state-of-the-art models indicates that a lot of grounded data remains to be collected for real-world deployment, emphasizing the critical need for scalable data collection approaches. Finally, we show that video VLMs significantly outperform single-image VLMs with an average error rate reduction of 19% across all VQA tasks. Data and videos available at https://robovqa.github.io
    Keywords Computer Science - Robotics
    Subject code 004
    Publishing date 2023-11-01
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
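
Result 2 evaluates models with a human-in-the-loop intervention mechanism and reports a single "cognitive intervention rate". The sketch below shows one plausible way such a rate could be computed over evaluation episodes; the episode structure and field names are assumptions for illustration, not the RoboVQA release format.

```python
# Hedged sketch: one plausible intervention-rate style metric over evaluation
# episodes, in the spirit of the abstract's "cognitive intervention rate".
# The Step fields below are assumptions, not the RoboVQA data schema.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    model_answer: str        # answer proposed by the VQA model
    human_answer: str        # answer a human overseer would give
    intervened: bool         # True if the overseer had to correct the model

def intervention_rate(episodes: List[List[Step]]) -> float:
    """Fraction of steps, across all episodes, where a human correction was needed."""
    steps = [s for ep in episodes for s in ep]
    if not steps:
        return 0.0
    return sum(s.intervened for s in steps) / len(steps)

# Toy usage: two short episodes, three interventions out of five steps.
episodes = [
    [Step("pick up the cup", "pick up the cup", False),
     Step("open the drawer", "close the drawer", True)],
    [Step("go to the kitchen", "go to the kitchen", False),
     Step("place the cup", "place the cup on the shelf", True),
     Step("done", "wipe the table first", True)],
]
print(f"intervention rate: {intervention_rate(episodes):.2f}")  # 0.60
```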

  3. Book ; Online: RT-1

    Brohan, Anthony / Brown, Noah / Carbajal, Justice / Chebotar, Yevgen / Dabis, Joseph / Finn, Chelsea / Gopalakrishnan, Keerthana / Hausman, Karol / Herzog, Alex / Hsu, Jasmine / Ibarz, Julian / Ichter, Brian / Irpan, Alex / Jackson, Tomas / Jesmonth, Sally / Joshi, Nikhil J / Julian, Ryan / Kalashnikov, Dmitry / Kuang, Yuheng /
    Leal, Isabel / Lee, Kuang-Huei / Levine, Sergey / Lu, Yao / Malla, Utsav / Manjunath, Deeksha / Mordatch, Igor / Nachum, Ofir / Parada, Carolina / Peralta, Jodilyn / Perez, Emily / Pertsch, Karl / Quiambao, Jornell / Rao, Kanishka / Ryoo, Michael / Salazar, Grecia / Sanketi, Pannag / Sayed, Kevin / Singh, Jaspiar / Sontakke, Sumedh / Stone, Austin / Tan, Clayton / Tran, Huong / Vanhoucke, Vincent / Vega, Steve / Vuong, Quan / Xia, Fei / Xiao, Ted / Xu, Peng / Xu, Sichun / Yu, Tianhe

    Robotics Transformer for Real-World Control at Scale

    2022  

    Abstract By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance. While this capability has been demonstrated in other fields such as computer vision, natural language processing or speech recognition, it remains to be shown in robotics, where the generalization capabilities of the models are particularly critical due to the difficulty of collecting real-world robotic data. We argue that one of the keys to the success of such general robotic models lies with open-ended task-agnostic training, combined with high-capacity architectures that can absorb all of the diverse, robotic data. In this paper, we present a model class, dubbed Robotics Transformer, that exhibits promising scalable model properties. We verify our conclusions in a study of different model classes and their ability to generalize as a function of the data size, model size, and data diversity based on a large-scale data collection on real robots performing real-world tasks. The project's website and videos can be found at robotics-transformer.github.io

    Comment: See website at robotics-transformer.github.io
    Keywords Computer Science - Robotics ; Computer Science - Artificial Intelligence ; Computer Science - Computation and Language ; Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Machine Learning
    Subject code 004
    Publishing date 2022-12-13
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
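
Result 3 describes a high-capacity transformer policy trained on large-scale real-robot data. One ingredient such policies commonly rely on is mapping continuous actions to discrete tokens; the sketch below illustrates uniform action binning in general terms, with the bin count and value ranges invented for the example rather than taken from RT-1.

```python
# Illustrative only: uniform discretization of continuous actions into bins,
# the kind of tokenization a transformer policy can emit one token per action
# dimension. Bin count and ranges are assumptions for this sketch.
import numpy as np

N_BINS = 256
LOW, HIGH = -1.0, 1.0   # assumed normalized range per action dimension

def discretize(action: np.ndarray) -> np.ndarray:
    """Map continuous actions in [LOW, HIGH] to integer bin indices in [0, N_BINS-1]."""
    clipped = np.clip(action, LOW, HIGH)
    bins = np.floor((clipped - LOW) / (HIGH - LOW) * N_BINS).astype(int)
    return np.minimum(bins, N_BINS - 1)      # keep action == HIGH in the last bin

def undiscretize(bins: np.ndarray) -> np.ndarray:
    """Map bin indices back to bin-center continuous values."""
    return LOW + (bins + 0.5) * (HIGH - LOW) / N_BINS

a = np.array([0.13, -0.97, 1.0])             # e.g. arm dx, dy, gripper
tokens = discretize(a)
print(tokens, undiscretize(tokens))           # round-trips to within one bin width
```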

  4. Book ; Online: Do As I Can, Not As I Say

    Ahn, Michael / Brohan, Anthony / Brown, Noah / Chebotar, Yevgen / Cortes, Omar / David, Byron / Finn, Chelsea / Fu, Chuyuan / Gopalakrishnan, Keerthana / Hausman, Karol / Herzog, Alex / Ho, Daniel / Hsu, Jasmine / Ibarz, Julian / Ichter, Brian / Irpan, Alex / Jang, Eric / Ruano, Rosario Jauregui / Jeffrey, Kyle /
    Jesmonth, Sally / Joshi, Nikhil J / Julian, Ryan / Kalashnikov, Dmitry / Kuang, Yuheng / Lee, Kuang-Huei / Levine, Sergey / Lu, Yao / Luu, Linda / Parada, Carolina / Pastor, Peter / Quiambao, Jornell / Rao, Kanishka / Rettinghouse, Jarek / Reyes, Diego / Sermanet, Pierre / Sievers, Nicolas / Tan, Clayton / Toshev, Alexander / Vanhoucke, Vincent / Xia, Fei / Xiao, Ted / Xu, Peng / Xu, Sichun / Yan, Mengyuan / Zeng, Andy

    Grounding Language in Robotic Affordances

    2022  

    Abstract Large language models can encode a wealth of semantic knowledge about the world. Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language. However, a significant weakness of language models is that they lack real-world experience, which makes it difficult to leverage them for decision making within a given embodiment. For example, asking a language model to describe how to clean a spill might result in a reasonable narrative, but it may not be applicable to a particular agent, such as a robot, that needs to perform this task in a particular environment. We propose to provide real-world grounding by means of pretrained skills, which are used to constrain the model to propose natural language actions that are both feasible and contextually appropriate. The robot can act as the language model's "hands and eyes," while the language model supplies high-level semantic knowledge about the task. We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment. We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator. The project's website and the video can be found at https://say-can.github.io/.

    Comment: See website at https://say-can.github.io/ V1. Initial Upload. V2. Added PaLM results. Added study about new capabilities (drawer manipulation, chain of thought prompting, multilingual instructions). Added an ablation study of language model size. Added an open-source version of SayCan on a simulated tabletop environment. Improved readability
    Keywords Computer Science - Robotics ; Computer Science - Computation and Language ; Computer Science - Machine Learning
    Subject code 121
    Publishing date 2022-04-04
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
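
The abstract of result 4 explains the grounding mechanism: the language model scores candidate skills while value functions score whether each skill is currently feasible, and the two are combined. The Python sketch below illustrates that combination with hard-coded stand-in numbers; the skill names, scores, and values are invented for the example, and a real system would query an LLM and learned value functions instead.

```python
# Hedged sketch of the scoring idea described in the abstract: combine a
# language model's preference for each skill with a value function that says
# whether the skill is currently feasible. All numbers are made-up stand-ins.
def pick_skill(instruction, skills, llm_score, affordance_value, state):
    """Choose the skill maximizing language relevance x feasibility."""
    scored = {
        s: llm_score(instruction, s) * affordance_value(s, state)
        for s in skills
    }
    return max(scored, key=scored.get), scored

SKILLS = ["find a sponge", "pick up the sponge", "go to the spill", "wipe the spill"]

def llm_score(instruction, skill):
    # How much the language model "likes" the skill as the next step (assumed values).
    return {"find a sponge": 0.40, "pick up the sponge": 0.35,
            "go to the spill": 0.15, "wipe the spill": 0.10}[skill]

def affordance_value(skill, state):
    # How likely the skill is to succeed from the current state (assumed values).
    return {"find a sponge": 0.9, "pick up the sponge": 0.2,   # no sponge in hand yet
            "go to the spill": 0.8, "wipe the spill": 0.1}[skill]

best, scores = pick_skill("clean up the spilled drink", SKILLS,
                          llm_score, affordance_value, state={"holding": None})
print(best, scores[best])   # "find a sponge" wins: relevant and currently feasible
```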

  5. Book ; Online: Open X-Embodiment

    Collaboration, Open X-Embodiment / Padalkar, Abhishek / Pooley, Acorn / Mandlekar, Ajay / Jain, Ajinkya / Tung, Albert / Bewley, Alex / Herzog, Alex / Irpan, Alex / Khazatsky, Alexander / Rai, Anant / Singh, Anikait / Garg, Animesh / Brohan, Anthony / Raffin, Antonin / Wahid, Ayzaan / Burgess-Limerick, Ben / Kim, Beomjoon / Schölkopf, Bernhard /
    Ichter, Brian / Lu, Cewu / Xu, Charles / Finn, Chelsea / Xu, Chenfeng / Chi, Cheng / Huang, Chenguang / Chan, Christine / Pan, Chuer / Fu, Chuyuan / Devin, Coline / Driess, Danny / Pathak, Deepak / Shah, Dhruv / Büchler, Dieter / Kalashnikov, Dmitry / Sadigh, Dorsa / Johns, Edward / Ceola, Federico / Xia, Fei / Stulp, Freek / Zhou, Gaoyue / Sukhatme, Gaurav S. / Salhotra, Gautam / Yan, Ge / Schiavi, Giulio / Kahn, Gregory / Su, Hao / Fang, Hao-Shu / Shi, Haochen / Amor, Heni Ben / Christensen, Henrik I / Furuta, Hiroki / Walke, Homer / Fang, Hongjie / Mordatch, Igor / Radosavovic, Ilija / Leal, Isabel / Liang, Jacky / Abou-Chakra, Jad / Kim, Jaehyung / Peters, Jan / Schneider, Jan / Hsu, Jasmine / Bohg, Jeannette / Bingham, Jeffrey / Wu, Jiajun / Wu, Jialin / Luo, Jianlan / Gu, Jiayuan / Tan, Jie / Oh, Jihoon / Malik, Jitendra / Booher, Jonathan / Tompson, Jonathan / Yang, Jonathan / Lim, Joseph J. / Silvério, João / Han, Junhyek / Rao, Kanishka / Pertsch, Karl / Hausman, Karol / Go, Keegan / Gopalakrishnan, Keerthana / Goldberg, Ken / Byrne, Kendra / Oslund, Kenneth / Kawaharazuka, Kento / Zhang, Kevin / Rana, Krishan / Srinivasan, Krishnan / Chen, Lawrence Yunliang / Pinto, Lerrel / Fei-Fei, Li / Tan, Liam / Ott, Lionel / Lee, Lisa / Tomizuka, Masayoshi / Spero, Max / Du, Maximilian / Ahn, Michael / Zhang, Mingtong / Ding, Mingyu / Srirama, Mohan Kumar / Sharma, Mohit / Kim, Moo Jin / Kanazawa, Naoaki / Hansen, Nicklas / Heess, Nicolas / Joshi, Nikhil J / Suenderhauf, Niko / Di Palo, Norman / Shafiullah, Nur Muhammad Mahi / Mees, Oier / Kroemer, Oliver / Sanketi, Pannag R / Wohlhart, Paul / Xu, Peng / Sermanet, Pierre / Sundaresan, Priya / Vuong, Quan / Rafailov, Rafael / Tian, Ran / Doshi, Ria / Martín-Martín, Roberto / Mendonca, Russell / Shah, Rutav / Hoque, Ryan / Julian, Ryan / Bustamante, Samuel / Kirmani, Sean / Levine, Sergey / Moore, Sherry / Bahl, Shikhar / Dass, Shivin / Sonawani, Shubham / Song, Shuran / Xu, Sichun / Haldar, Siddhant / Adebola, Simeon / Guist, Simon / Nasiriany, Soroush / Schaal, Stefan / Welker, Stefan / Tian, Stephen / Dasari, Sudeep / Belkhale, Suneel / Osa, Takayuki / Harada, Tatsuya / Matsushima, Tatsuya / Xiao, Ted / Yu, Tianhe / Ding, Tianli / Davchev, Todor / Zhao, Tony Z. / Armstrong, Travis / Darrell, Trevor / Jain, Vidhi / Vanhoucke, Vincent / Zhan, Wei / Zhou, Wenxuan / Burgard, Wolfram / Chen, Xi / Wang, Xiaolong / Zhu, Xinghao / Li, Xuanlin / Lu, Yao / Chebotar, Yevgen / Zhou, Yifan / Zhu, Yifeng / Xu, Ying / Wang, Yixuan / Bisk, Yonatan / Cho, Yoonyoung / Lee, Youngwoon / Cui, Yuchen / Wu, Yueh-Hua / Tang, Yujin / Zhu, Yuke / Li, Yunzhu / Iwasawa, Yusuke / Matsuo, Yutaka / Xu, Zhuo / Cui, Zichen Jeff

    Robotic Learning Datasets and RT-X Models

    2023  

    Abstract Large, high-capacity models trained on diverse datasets have shown remarkable successes on efficiently tackling downstream applications. In domains from NLP to Computer Vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a generalist X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots collected through a collaboration between 21 institutions, demonstrating 527 skills (160266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. More details can be found on the project website robotics-transformer-x.github.io.
    Keywords Computer Science - Robotics
    Subject code 629
    Publishing date 2023-10-13
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
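
Result 5 pools data from many robots by putting it into standardized formats so a single cross-embodiment model can be trained. The sketch below shows the general idea of normalizing two differently logged episode formats into one shared step schema; the field names and both source formats are assumptions made up for the example, not the actual Open X-Embodiment data schema.

```python
# Hedged sketch: convert heterogeneous per-robot logs into one shared schema
# before pooling them for cross-embodiment training. Both source formats and
# all field names below are invented for this example.
from typing import Dict, List

def from_robot_a(raw: Dict) -> List[Dict]:
    # Robot A logs flat lists of observations and 7-DoF actions.
    return [{"image": o, "instruction": raw["task"], "action": a, "embodiment": "robot_a"}
            for o, a in zip(raw["obs"], raw["acts"])]

def from_robot_b(raw: Dict) -> List[Dict]:
    # Robot B logs per-step dicts with differently named keys.
    return [{"image": s["rgb"], "instruction": s["lang"], "action": s["cmd"],
             "embodiment": "robot_b"} for s in raw["steps"]]

def pool(*episode_lists: List[Dict]) -> List[Dict]:
    """Concatenate already-standardized steps from any number of robots."""
    return [step for eps in episode_lists for step in eps]

# Toy usage with fabricated mini-episodes.
a = from_robot_a({"task": "open the drawer", "obs": ["imgA0", "imgA1"],
                  "acts": [[0.1] * 7, [0.0] * 7]})
b = from_robot_b({"steps": [{"rgb": "imgB0", "lang": "pick the apple", "cmd": [0.2] * 7}]})
dataset = pool(a, b)
print(len(dataset), {s["embodiment"] for s in dataset})   # 3 {'robot_a', 'robot_b'}
```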
