LIVIVO - The Search Portal for Life Sciences


Search results

Results 1-4 of 4


  1. Book ; Online: ParticleNeRF

    Abou-Chakra, Jad / Dayoub, Feras / Sünderhauf, Niko

    A Particle-Based Encoding for Online Neural Radiance Fields

    2022  


    Abstract While existing Neural Radiance Fields (NeRFs) for dynamic scenes are offline methods with an emphasis on visual fidelity, our paper addresses the online use case that prioritises real-time adaptability. We present ParticleNeRF, a new approach that dynamically adapts to changes in the scene geometry by learning an up-to-date representation online, every 200ms. ParticleNeRF achieves this using a novel particle-based parametric encoding. We couple features to particles in space and backpropagate the photometric reconstruction loss into the particles' position gradients, which are then interpreted as velocity vectors. Governed by a lightweight physics system to handle collisions, this lets the features move freely with the changing scene geometry. We demonstrate ParticleNeRF on various dynamic scenes containing translating, rotating, articulated, and deformable objects. ParticleNeRF is the first online dynamic NeRF and achieves fast adaptability with better visual fidelity than brute-force online InstantNGP and other baseline approaches on dynamic scenes with online constraints. Videos of our system can be found at our project website https://sites.google.com/view/particlenerf.
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Robotics
    Subject code 004
    Publishing date 2022-11-08
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
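The abstract above describes coupling features to particles and interpreting the photometric loss's position gradients as velocities, with a lightweight physics system handling collisions. The following is a minimal illustrative sketch of that update rule, not the paper's implementation; the function name, the explicit Euler step, and the pairwise push-apart collision rule are all assumptions for illustration.

```python
import numpy as np

def particle_step(positions, grads, dt=0.1, min_dist=0.05):
    """Move particles along the negative position gradient, treated as a velocity.

    positions: (N, 3) particle positions carrying features.
    grads:     (N, 3) gradient of the photometric loss w.r.t. each position.
    """
    velocities = -grads                      # gradient interpreted as velocity
    positions = positions + dt * velocities  # explicit Euler update
    # Lightweight collision handling (illustrative): push apart any pair of
    # particles closer than min_dist, splitting the correction between them.
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = positions[j] - positions[i]
            dist = np.linalg.norm(d)
            if 0 < dist < min_dist:
                push = (min_dist - dist) / 2 * d / dist
                positions[i] -= push
                positions[j] += push
    return positions

# Two particles that start too close are separated to min_dist.
p = np.array([[0.0, 0.0, 0.0], [0.04, 0.0, 0.0]])
out = particle_step(p, np.zeros_like(p))
print(np.linalg.norm(out[1] - out[0]))  # separation after the collision push
```

In this reading, the "physics system" only needs to resolve overlaps between feature-carrying particles; the gradient-as-velocity step is what lets the representation track moving geometry between 200 ms updates.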


  2. Book ; Online: Implicit Object Mapping With Noisy Data

    Abou-Chakra, Jad / Dayoub, Feras / Sünderhauf, Niko

    2022  


    Abstract Modelling individual objects as Neural Radiance Fields (NeRFs) within a robotic context can benefit many downstream tasks such as scene understanding and object manipulation. However, real-world training data collected by a robot deviate from the ideal in several key aspects. (i) The trajectories are constrained and full visual coverage is not guaranteed - especially when obstructions are present. (ii) The poses associated with the images are noisy. (iii) The objects are not easily isolated from the background. This paper addresses the above three points and uses the outputs of an object-based SLAM system to bound objects in the scene with coarse primitives and - in concert with instance masks - identify obstructions in the training images. Objects are therefore automatically bounded, and non-relevant geometry is excluded from the NeRF representation. The method's performance is benchmarked under ideal conditions and tested against errors in the poses and instance masks. Our results show that object-based NeRFs are robust to pose variations but sensitive to the quality of the instance masks.
    Keywords Computer Science - Robotics
    Subject code 004
    Publishing date 2022-04-22
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
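The abstract describes bounding objects with coarse primitives from an object-based SLAM system and using instance masks to exclude obstructions and background from the NeRF. A minimal sketch of that selection logic might look as follows; the function names, the axis-aligned box as the "coarse primitive", and the boolean-mask combination are assumptions, not the paper's code.

```python
import numpy as np

def select_training_pixels(instance_mask, obstruction_mask):
    """Keep pixels that show the object and are not flagged as obstructed."""
    return instance_mask & ~obstruction_mask

def inside_box(points, box_min, box_max):
    """Boolean mask of 3D points inside an axis-aligned bounding primitive."""
    return np.all((points >= box_min) & (points <= box_max), axis=-1)

# Toy 2x2 image: the object covers three pixels, one of which is obstructed.
mask = np.array([[1, 1], [0, 1]], dtype=bool)
obstr = np.array([[0, 1], [0, 0]], dtype=bool)
print(select_training_pixels(mask, obstr).sum())  # usable training pixels
```

Under this reading, geometry outside the bounding primitive and pixels behind obstructions simply never contribute to the reconstruction loss, which is why mask quality matters more than pose noise in the reported results.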


  3. Book ; Online: SayPlan

    Rana, Krishan / Haviland, Jesse / Garg, Sourav / Abou-Chakra, Jad / Reid, Ian / Suenderhauf, Niko

    Grounding Large Language Models using 3D Scene Graphs for Scalable Robot Task Planning

    2023  


    Abstract Large language models (LLMs) have demonstrated impressive results in developing generalist planning agents for diverse tasks. However, grounding these plans in expansive, multi-floor, and multi-room environments presents a significant challenge for robotics. We introduce SayPlan, a scalable approach to LLM-based, large-scale task planning for robotics using 3D scene graph (3DSG) representations. To ensure the scalability of our approach, we: (1) exploit the hierarchical nature of 3DSGs to allow LLMs to conduct a 'semantic search' for task-relevant subgraphs from a smaller, collapsed representation of the full graph; (2) reduce the planning horizon for the LLM by integrating a classical path planner and (3) introduce an 'iterative replanning' pipeline that refines the initial plan using feedback from a scene graph simulator, correcting infeasible actions and avoiding planning failures. We evaluate our approach on two large-scale environments spanning up to 3 floors and 36 rooms with 140 assets and objects and show that our approach is capable of grounding large-scale, long-horizon task plans from abstract, and natural language instruction for a mobile manipulator robot to execute. We provide real robot video demonstrations on our project page https://sayplan.github.io.

    Comment: Accepted for oral presentation at the Conference on Robot Learning (CoRL), 2023. Project page can be found here: https://sayplan.github.io
    Keywords Computer Science - Robotics ; Computer Science - Artificial Intelligence
    Subject code 004 ; 629
    Publishing date 2023-07-12
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
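The 'semantic search' idea in the abstract can be sketched concretely: the LLM sees only a collapsed 3D scene graph (floors and rooms, no assets) and expands just the rooms relevant to the task. The sketch below is illustrative only; the graph layout is invented, and the keyword matcher stands in for what would actually be an LLM relevance query.

```python
# Hypothetical collapsed/expandable 3D scene graph: floors -> rooms -> assets.
scene_graph = {
    "floor1": {"kitchen": ["mug", "sink"], "office": ["desk", "laptop"]},
    "floor2": {"lab": ["robot_arm", "toolbox"]},
}

def collapsed_view(graph):
    """Rooms only, no assets: the small representation searched first."""
    return {floor: list(rooms) for floor, rooms in graph.items()}

def expand_relevant(graph, task_keywords):
    """Expand only rooms whose assets match the task (LLM stand-in)."""
    subgraph = {}
    for floor, rooms in graph.items():
        for room, assets in rooms.items():
            if any(k in assets for k in task_keywords):
                subgraph.setdefault(floor, {})[room] = assets
    return subgraph

# For "bring me the mug", only the kitchen subgraph is handed to the planner.
print(expand_relevant(scene_graph, ["mug"]))
```

This keeps the LLM's context proportional to the task-relevant subgraph rather than the full environment, which is the scalability argument the abstract makes for multi-floor, multi-room settings.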


  4. Book ; Online: Open X-Embodiment

    Collaboration, Open X-Embodiment / Padalkar, Abhishek / Pooley, Acorn / Mandlekar, Ajay / Jain, Ajinkya / Tung, Albert / Bewley, Alex / Herzog, Alex / Irpan, Alex / Khazatsky, Alexander / Rai, Anant / Singh, Anikait / Garg, Animesh / Brohan, Anthony / Raffin, Antonin / Wahid, Ayzaan / Burgess-Limerick, Ben / Kim, Beomjoon / Schölkopf, Bernhard /
    Ichter, Brian / Lu, Cewu / Xu, Charles / Finn, Chelsea / Xu, Chenfeng / Chi, Cheng / Huang, Chenguang / Chan, Christine / Pan, Chuer / Fu, Chuyuan / Devin, Coline / Driess, Danny / Pathak, Deepak / Shah, Dhruv / Büchler, Dieter / Kalashnikov, Dmitry / Sadigh, Dorsa / Johns, Edward / Ceola, Federico / Xia, Fei / Stulp, Freek / Zhou, Gaoyue / Sukhatme, Gaurav S. / Salhotra, Gautam / Yan, Ge / Schiavi, Giulio / Kahn, Gregory / Su, Hao / Fang, Hao-Shu / Shi, Haochen / Amor, Heni Ben / Christensen, Henrik I / Furuta, Hiroki / Walke, Homer / Fang, Hongjie / Mordatch, Igor / Radosavovic, Ilija / Leal, Isabel / Liang, Jacky / Abou-Chakra, Jad / Kim, Jaehyung / Peters, Jan / Schneider, Jan / Hsu, Jasmine / Bohg, Jeannette / Bingham, Jeffrey / Wu, Jiajun / Wu, Jialin / Luo, Jianlan / Gu, Jiayuan / Tan, Jie / Oh, Jihoon / Malik, Jitendra / Booher, Jonathan / Tompson, Jonathan / Yang, Jonathan / Lim, Joseph J. / Silvério, João / Han, Junhyek / Rao, Kanishka / Pertsch, Karl / Hausman, Karol / Go, Keegan / Gopalakrishnan, Keerthana / Goldberg, Ken / Byrne, Kendra / Oslund, Kenneth / Kawaharazuka, Kento / Zhang, Kevin / Rana, Krishan / Srinivasan, Krishnan / Chen, Lawrence Yunliang / Pinto, Lerrel / Fei-Fei, Li / Tan, Liam / Ott, Lionel / Lee, Lisa / Tomizuka, Masayoshi / Spero, Max / Du, Maximilian / Ahn, Michael / Zhang, Mingtong / Ding, Mingyu / Srirama, Mohan Kumar / Sharma, Mohit / Kim, Moo Jin / Kanazawa, Naoaki / Hansen, Nicklas / Heess, Nicolas / Joshi, Nikhil J / Suenderhauf, Niko / Di Palo, Norman / Shafiullah, Nur Muhammad Mahi / Mees, Oier / Kroemer, Oliver / Sanketi, Pannag R / Wohlhart, Paul / Xu, Peng / Sermanet, Pierre / Sundaresan, Priya / Vuong, Quan / Rafailov, Rafael / Tian, Ran / Doshi, Ria / Martín-Martín, Roberto / Mendonca, Russell / Shah, Rutav / Hoque, Ryan / Julian, Ryan / Bustamante, Samuel / Kirmani, Sean / Levine, Sergey / Moore, Sherry / Bahl, Shikhar / Dass, Shivin / Sonawani, Shubham / Song, Shuran / Xu, Sichun / Haldar, Siddhant / Adebola, 
Simeon / Guist, Simon / Nasiriany, Soroush / Schaal, Stefan / Welker, Stefan / Tian, Stephen / Dasari, Sudeep / Belkhale, Suneel / Osa, Takayuki / Harada, Tatsuya / Matsushima, Tatsuya / Xiao, Ted / Yu, Tianhe / Ding, Tianli / Davchev, Todor / Zhao, Tony Z. / Armstrong, Travis / Darrell, Trevor / Jain, Vidhi / Vanhoucke, Vincent / Zhan, Wei / Zhou, Wenxuan / Burgard, Wolfram / Chen, Xi / Wang, Xiaolong / Zhu, Xinghao / Li, Xuanlin / Lu, Yao / Chebotar, Yevgen / Zhou, Yifan / Zhu, Yifeng / Xu, Ying / Wang, Yixuan / Bisk, Yonatan / Cho, Yoonyoung / Lee, Youngwoon / Cui, Yuchen / Wu, Yueh-Hua / Tang, Yujin / Zhu, Yuke / Li, Yunzhu / Iwasawa, Yusuke / Matsuo, Yutaka / Xu, Zhuo / Cui, Zichen Jeff

    Robotic Learning Datasets and RT-X Models

    2023  


    Abstract Large, high-capacity models trained on diverse datasets have shown remarkable successes on efficiently tackling downstream applications. In domains from NLP to Computer Vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train generalist X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots collected through a collaboration between 21 institutions, demonstrating 527 skills (160266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. More details can be found on the project website $\href{https://robotics-transformer-x.github.io}{\text{robotics-transformer-x.github.io}}$.
    Keywords Computer Science - Robotics
    Subject code 629
    Publishing date 2023-10-13
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

