LIVIVO - The Search Portal for Life Sciences


Your recent searches

  1. AU="Han, Junhyek"
  2. AU="Muneoka, Yusuke"
  3. AU="Griggs, Lisa"
  4. AU="Klauck, Sabine M"
  5. AU="Turton, James A"
  6. AU="Patel, Abhijit A"
  7. AU="Shankowsky, Heather A"
  8. AU="Płóciennik, Przemysław"
  9. AU="Marchesi, Pietro"
  10. AU="Kim Je Hyoung"
  11. AU="Huber, Ingrid"
  12. AU="Hasuko, K."
  13. AU="Yao, Weigen"
  14. AU="Huang, Xiao-Fan"
  15. AU=Zuo Chuantian
  16. AU="Varchetta, Veronica"
  17. AU="Zhang, Lingye"
  18. AU="Venko, Katja"
  19. AU="Kasthuri, Thirupathi"
  20. AU="Pirtskhalava, Tamar"
  21. AU="Saridakis, E N"
  22. AU="Vithana, Eranga N"
  23. AU="Suárez-Lledó, M"
  24. AU="Olivo-Marston, Susan"
  25. AU="Denise P Momesso"
  26. AU="Obrecht-Sturm, Denise"

Search results

Hits 1 - 2 of 2 total

Search options

  1. Book ; Online: Pre- and post-contact policy decomposition for non-prehensile manipulation with zero-shot sim-to-real transfer

    Kim, Minchan / Han, Junhyek / Kim, Jaehyung / Kim, Beomjoon

    2023  

    Abstract: We present a system for non-prehensile manipulation that requires a significant number of contact mode transitions and the use of environmental contacts to successfully manipulate an object to a target location. Our method is based on deep reinforcement learning which, unlike state-of-the-art planning algorithms, does not require a priori knowledge of the physical parameters of the object or environment, such as friction coefficients or centers of mass. The planning time is reduced to the simple feed-forward prediction time of a neural network. We propose a computational structure, action space design, and curriculum learning scheme that facilitate efficient exploration and sim-to-real transfer. In challenging real-world non-prehensile manipulation tasks, we show that our method can generalize over different objects and succeed even for novel objects not seen during training. Project website: https://sites.google.com/view/nonprenehsile-decomposition

    Comment: Accepted to the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
    Keywords Computer Science - Robotics
    Subject/Category (Code) 629
    Publication date 2023-09-06
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences Selection)

    Supplementary materials

    Categories

  2. Book ; Online: Open X-Embodiment

    Collaboration, Open X-Embodiment / Padalkar, Abhishek / Pooley, Acorn / Mandlekar, Ajay / Jain, Ajinkya / Tung, Albert / Bewley, Alex / Herzog, Alex / Irpan, Alex / Khazatsky, Alexander / Rai, Anant / Singh, Anikait / Garg, Animesh / Brohan, Anthony / Raffin, Antonin / Wahid, Ayzaan / Burgess-Limerick, Ben / Kim, Beomjoon / Schölkopf, Bernhard /
    Ichter, Brian / Lu, Cewu / Xu, Charles / Finn, Chelsea / Xu, Chenfeng / Chi, Cheng / Huang, Chenguang / Chan, Christine / Pan, Chuer / Fu, Chuyuan / Devin, Coline / Driess, Danny / Pathak, Deepak / Shah, Dhruv / Büchler, Dieter / Kalashnikov, Dmitry / Sadigh, Dorsa / Johns, Edward / Ceola, Federico / Xia, Fei / Stulp, Freek / Zhou, Gaoyue / Sukhatme, Gaurav S. / Salhotra, Gautam / Yan, Ge / Schiavi, Giulio / Kahn, Gregory / Su, Hao / Fang, Hao-Shu / Shi, Haochen / Amor, Heni Ben / Christensen, Henrik I / Furuta, Hiroki / Walke, Homer / Fang, Hongjie / Mordatch, Igor / Radosavovic, Ilija / Leal, Isabel / Liang, Jacky / Abou-Chakra, Jad / Kim, Jaehyung / Peters, Jan / Schneider, Jan / Hsu, Jasmine / Bohg, Jeannette / Bingham, Jeffrey / Wu, Jiajun / Wu, Jialin / Luo, Jianlan / Gu, Jiayuan / Tan, Jie / Oh, Jihoon / Malik, Jitendra / Booher, Jonathan / Tompson, Jonathan / Yang, Jonathan / Lim, Joseph J. / Silvério, João / Han, Junhyek / Rao, Kanishka / Pertsch, Karl / Hausman, Karol / Go, Keegan / Gopalakrishnan, Keerthana / Goldberg, Ken / Byrne, Kendra / Oslund, Kenneth / Kawaharazuka, Kento / Zhang, Kevin / Rana, Krishan / Srinivasan, Krishnan / Chen, Lawrence Yunliang / Pinto, Lerrel / Fei-Fei, Li / Tan, Liam / Ott, Lionel / Lee, Lisa / Tomizuka, Masayoshi / Spero, Max / Du, Maximilian / Ahn, Michael / Zhang, Mingtong / Ding, Mingyu / Srirama, Mohan Kumar / Sharma, Mohit / Kim, Moo Jin / Kanazawa, Naoaki / Hansen, Nicklas / Heess, Nicolas / Joshi, Nikhil J / Suenderhauf, Niko / Di Palo, Norman / Shafiullah, Nur Muhammad Mahi / Mees, Oier / Kroemer, Oliver / Sanketi, Pannag R / Wohlhart, Paul / Xu, Peng / Sermanet, Pierre / Sundaresan, Priya / Vuong, Quan / Rafailov, Rafael / Tian, Ran / Doshi, Ria / Martín-Martín, Roberto / Mendonca, Russell / Shah, Rutav / Hoque, Ryan / Julian, Ryan / Bustamante, Samuel / Kirmani, Sean / Levine, Sergey / Moore, Sherry / Bahl, Shikhar / Dass, Shivin / Sonawani, Shubham / Song, Shuran / Xu, Sichun / Haldar, Siddhant / Adebola, Simeon / Guist, Simon / Nasiriany, Soroush / Schaal, Stefan / Welker, Stefan / Tian, Stephen / Dasari, Sudeep / Belkhale, Suneel / Osa, Takayuki / Harada, Tatsuya / Matsushima, Tatsuya / Xiao, Ted / Yu, Tianhe / Ding, Tianli / Davchev, Todor / Zhao, Tony Z. / Armstrong, Travis / Darrell, Trevor / Jain, Vidhi / Vanhoucke, Vincent / Zhan, Wei / Zhou, Wenxuan / Burgard, Wolfram / Chen, Xi / Wang, Xiaolong / Zhu, Xinghao / Li, Xuanlin / Lu, Yao / Chebotar, Yevgen / Zhou, Yifan / Zhu, Yifeng / Xu, Ying / Wang, Yixuan / Bisk, Yonatan / Cho, Yoonyoung / Lee, Youngwoon / Cui, Yuchen / Wu, Yueh-Hua / Tang, Yujin / Zhu, Yuke / Li, Yunzhu / Iwasawa, Yusuke / Matsuo, Yutaka / Xu, Zhuo / Cui, Zichen Jeff

    Robotic Learning Datasets and RT-X Models

    2023  

    Abstract: Large, high-capacity models trained on diverse datasets have shown remarkable success in efficiently tackling downstream applications. In domains from NLP to Computer Vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a generalist X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots collected through a collaboration between 21 institutions, demonstrating 527 skills (160266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. More details can be found on the project website robotics-transformer-x.github.io.
    Keywords Computer Science - Robotics
    Subject/Category (Code) 629
    Publication date 2023-10-13
    Country of publication us
    Document type Book ; Online
    Data source BASE - Bielefeld Academic Search Engine (Life Sciences Selection)

    Supplementary materials

    Categories
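
A minimal illustrative sketch for result 1 (pre- and post-contact policy decomposition): the abstract describes replacing physics-based planning with a single feed-forward pass through learned policies, split into a pre-contact and a post-contact phase. The PyTorch sketch below only mirrors that high-level idea; the network sizes, observation/action dimensions, and the contact-based switching rule are assumptions, not the authors' implementation.

    # Illustrative only: a pre-/post-contact policy pair where "planning" is a
    # single forward pass through a small network. All dimensions are assumed.
    import torch
    import torch.nn as nn

    class MLPPolicy(nn.Module):
        """Feed-forward policy: observation -> action, no physics parameters required."""
        def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, act_dim), nn.Tanh(),  # actions normalized to [-1, 1]
            )

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.net(obs)

    OBS_DIM, ACT_DIM = 32, 7  # assumed observation/action sizes
    pre_contact_policy = MLPPolicy(OBS_DIM, ACT_DIM)   # drives the robot toward contact
    post_contact_policy = MLPPolicy(OBS_DIM, ACT_DIM)  # manipulates once contact is made

    def select_action(obs: torch.Tensor, in_contact: bool) -> torch.Tensor:
        """Per control step, 'planning' is one feed-forward prediction."""
        policy = post_contact_policy if in_contact else pre_contact_policy
        with torch.no_grad():
            return policy(obs.unsqueeze(0)).squeeze(0)

    # Example control step with a dummy observation.
    action = select_action(torch.randn(OBS_DIM), in_contact=False)
    print(action.shape)  # torch.Size([7])

In the abstract's framing, planning time then reduces to this single forward pass per control step, with the sub-policies trained via deep reinforcement learning in simulation.
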
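A hedged sketch for result 2 (Open X-Embodiment): the abstract states that the datasets are provided in standardized data formats. Assuming the constituent sub-datasets are published as RLDS/TFDS episode datasets, the snippet below shows how one of them might be inspected; the bucket path, dataset name and version, and feature keys are assumptions and should be checked against robotics-transformer-x.github.io.

    # Illustrative only: load and peek at one constituent dataset, assuming an
    # RLDS/TFDS release. The path, version, and keys are assumptions, not confirmed.
    import tensorflow_datasets as tfds

    BUILDER_DIR = "gs://gresearch/robotics/bridge/0.1.0"  # assumed location/version

    builder = tfds.builder_from_directory(BUILDER_DIR)
    ds = builder.as_dataset(split="train[:1%]")  # small slice, cheap to inspect

    for episode in ds.take(1):
        # In RLDS, each episode carries a nested dataset of timesteps under "steps".
        for step in episode["steps"].take(3):
            obs = step["observation"]  # per-embodiment observation dict (assumed key)
            act = step["action"]       # per-embodiment action (assumed key)
            print(sorted(obs.keys()), act)

A real cross-embodiment training run would presumably stream full episodes from many such sub-datasets and map their differing observation and action spaces into a common representation; the released data formats and RT-X model details are documented on the project website.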
