LIVIVO - The Search Portal for Life Sciences


Search results

Results 1 - 5 of 5


  1. Book ; Online: High Definition, Inexpensive, Underwater Mapping

    Joshi, Bharat / Xanthidis, Marios / Rahman, Sharmin / Rekleitis, Ioannis

    2022  

    Abstract In this paper we present a complete framework for Underwater SLAM utilizing a single inexpensive sensor. In recent years, the imaging technology of action cameras has been producing stunning results even under the challenging conditions of the underwater domain. The GoPro 9 camera provides high-definition video in synchronization with an Inertial Measurement Unit (IMU) data stream, encoded in a single mp4 file. The visual-inertial SLAM framework is augmented to adjust the map after each loop closure. Data collected at an artificial wreck off the coast of South Carolina and in caverns and caves in Florida demonstrate the robustness of the proposed approach in a variety of conditions.

    Comment: IEEE International Conference on Robotics and Automation, 2022
    Keywords Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Robotics
    Publishing date 2022-03-10
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

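The record above describes a visual-inertial SLAM pipeline that adjusts the map after each loop closure. As a rough, assumption-laden illustration of that step (not the authors' implementation), the Python sketch below distributes the position drift detected at a loop closure linearly over the affected segment of the trajectory; the function and variable names are hypothetical.

# Minimal sketch (not the paper's code): spreading the drift detected at a
# loop closure over the trajectory segment between the two matched poses.
import numpy as np

def correct_loop_closure(trajectory, i, j, measured_offset):
    """Linearly distribute the position drift observed when pose j revisits pose i.

    trajectory      : (N, 3) array of estimated positions
    i, j            : indices of the matched poses (i < j)
    measured_offset : (3,) relative position of pose j w.r.t. pose i
                      reported by the loop-closure detector
    """
    corrected = trajectory.copy()
    # Drift = where the estimate puts pose j vs. where the loop closure says it should be.
    drift = (trajectory[i] + measured_offset) - trajectory[j]
    n = j - i
    # Spread the correction linearly over poses i..j (none at i, full drift at j).
    for k in range(i, j + 1):
        corrected[k] += drift * (k - i) / n
    # Poses after the loop receive the full correction.
    corrected[j + 1:] += drift
    return corrected

if __name__ == "__main__":
    # Straight-line trajectory with simulated drift accumulated on the last pose.
    traj = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [2.2, 0.3, 0.0]], dtype=float)
    # The loop closure says pose 3 is actually back at pose 0 (offset = 0).
    print(correct_loop_closure(traj, 0, 3, np.zeros(3)))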

  2. Book ; Online: AquaVis: A Perception-Aware Autonomous Navigation Framework for Underwater Vehicles

    Xanthidis, Marios / Kalaitzakis, Michail / Karapetyan, Nare / Johnson, James / Vitzilaios, Nikolaos / O'Kane, Jason M. / Rekleitis, Ioannis

    2021  

    Abstract Visual monitoring operations underwater require both observing the objects of interest in close proximity and tracking the few feature-rich areas necessary for state estimation. This paper introduces the first navigation framework, called AquaVis, that produces on-line visibility-aware motion plans enabling Autonomous Underwater Vehicles (AUVs) to track multiple visual objectives with an arbitrary camera configuration in real-time. Using the proposed pipeline, AUVs can efficiently move in 3D, reach their goals while safely avoiding obstacles, and maximize the visibility of multiple objectives along the path within a specified proximity. The method is sufficiently fast to be executed in real-time and is suitable for single or multiple camera configurations. Experimental results show a significant improvement in tracking multiple automatically-extracted points of interest, with low computational overhead and fast re-planning times.

    Comment: Presented at IROS2021
    Keywords Computer Science - Robotics
    Subject code 629
    Publishing date 2021-10-04
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

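AquaVis, as summarized above, plans motions that keep visual objectives within view of the camera. The sketch below shows one plausible visibility cost a planner could attach to candidate paths; it is not the AquaVis implementation, and the field-of-view angle, sensing range, and all names are illustrative assumptions.

# Hedged sketch of a visibility-aware path cost in the spirit of the record above.
import numpy as np

def visibility_cost(positions, headings, objectives,
                    fov_half_angle=np.deg2rad(45), max_range=5.0):
    """Fraction of (waypoint, objective) pairs where the objective is NOT visible.

    positions  : (N, 2) array of planar waypoints of a candidate path
    headings   : (N,)  camera yaw at each waypoint (radians)
    objectives : (M, 2) array of visual objectives the AUV should keep in view
    """
    misses = 0
    for p, yaw in zip(positions, headings):
        forward = np.array([np.cos(yaw), np.sin(yaw)])
        for obj in objectives:
            to_obj = obj - p
            dist = np.linalg.norm(to_obj)
            in_range = dist <= max_range
            # Angle between the camera axis and the direction to the objective.
            in_fov = dist > 1e-9 and \
                np.arccos(np.clip(forward @ (to_obj / dist), -1.0, 1.0)) <= fov_half_angle
            if not (in_range and in_fov):
                misses += 1
    return misses / (len(positions) * len(objectives))

# A planner could add this cost to the path length and pick the candidate
# trajectory with the lowest combined score.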

  3. Book ; Online: DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization

    Joshi, Bharat / Modasshir, Md / Manderson, Travis / Damron, Hunter / Xanthidis, Marios / Li, Alberto Quattrini / Rekleitis, Ioannis / Dudek, Gregory

    2020  

    Abstract In this paper, we propose a real-time deep learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUVs) from a single image. A team of autonomous robots localizing themselves in a communication-constrained underwater environment is essential for many applications such as underwater exploration, mapping, multi-robot convoying, and other multi-robot tasks. Due to the profound difficulty of collecting ground truth images with accurate 6D poses underwater, this work utilizes rendered images from the Unreal Game Engine simulation for training. An image-to-image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training. The proposed method predicts the 6D pose of an AUV from a single image as 2D image keypoints representing the 8 corners of the 3D model of the AUV; the 6D pose in camera coordinates is then determined using RANSAC-based PnP. Experimental results in real-world underwater environments (swimming pool and ocean) with different cameras demonstrate the robustness and accuracy of the proposed technique in terms of translation and orientation error compared to state-of-the-art methods. The code is publicly available.
    Keywords Computer Science - Robotics ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 629
    Publishing date 2020-03-11
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

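The DeepURL record above mentions recovering the 6D pose from the 8 predicted corner keypoints via RANSAC-based PnP. The following self-contained sketch illustrates that step with OpenCV's solvePnPRansac; the bounding-box dimensions, camera intrinsics, and the synthesized keypoints are placeholder assumptions, not values from the paper.

# Sketch of the "corner keypoints -> RANSAC PnP -> 6D pose" step.
import numpy as np
import cv2

# 8 corners of a hypothetical 1.0 x 0.5 x 0.3 m bounding box around the AUV model,
# in the object (model) frame; a real pipeline would use the CAD model corners.
object_corners = np.array([[x, y, z]
                           for x in (-0.5, 0.5)
                           for y in (-0.25, 0.25)
                           for z in (-0.15, 0.15)], dtype=np.float64)

# Placeholder pinhole intrinsics and zero distortion (assumed, not from the paper).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# In the real pipeline the 2D keypoints come from the network; here we synthesize
# them by projecting the corners with a known pose so the demo is self-contained.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.0, 0.0, 3.0])
keypoints, _ = cv2.projectPoints(object_corners, rvec_true, tvec_true, K, dist_coeffs)
keypoints = keypoints.reshape(-1, 2)

# RANSAC-based PnP: recover rotation and translation in camera coordinates.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_corners, keypoints, K, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation of the AUV model in the camera frame
    print("recovered translation:", tvec.ravel())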

  4. Book ; Online: Navigation in the Presence of Obstacles for an Agile Autonomous Underwater Vehicle

    Xanthidis, Marios / Karapetyan, Nare / Damron, Hunter / Rahman, Sharmin / Johnson, James / O'Connell, Allison / O'Kane, Jason M. / Rekleitis, Ioannis

    2019  

    Abstract Underwater navigation is traditionally done by keeping a safe distance from obstacles, resulting in "fly-overs" of the area of interest. Movement of an autonomous underwater vehicle (AUV) through a cluttered space, such as a shipwreck or a decorated cave, is an extremely challenging problem that has not been addressed in the past. This paper proposes a novel navigation framework utilizing an enhanced version of Trajopt for fast 3D path-optimization planning for AUVs. A sampling-based correction procedure ensures that the planning is not constrained by local minima, enabling navigation through narrow spaces. Two different modalities are proposed: planning with a known map results in efficient trajectories through cluttered spaces; operating in an unknown environment utilizes the point cloud from the detected visual features to navigate efficiently while avoiding the detected obstacles. The proposed approach is rigorously tested, both in simulation and in pool experiments, and is shown to be fast enough to enable safe real-time 3D autonomous navigation for an AUV.

    Comment: ICRA 2020
    Keywords Computer Science - Robotics
    Subject code 629
    Publishing date 2019-03-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

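The framework above builds on Trajopt-style trajectory optimization that trades path smoothness against clearance from obstacles. The sketch below is a minimal, hedged illustration of that trade-off for spherical obstacles; it is not the paper's planner, and all parameters and names are assumptions.

# Rough sketch: gradient-descent smoothing of a 3D waypoint path with obstacle penalties.
import numpy as np

def optimize_path(path, obstacles, radius=0.5, clearance=0.3,
                  iters=200, step=0.05, w_obs=2.0):
    """Iteratively smooth a path while pushing it away from spherical obstacles.

    path      : (N, 3) initial waypoints; start and goal are kept fixed
    obstacles : (M, 3) centers of spherical obstacles of the given radius
    """
    path = path.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(path)
        # Smoothness term: pull each interior waypoint toward the midpoint of its neighbours.
        grad[1:-1] += 2 * path[1:-1] - path[:-2] - path[2:]
        # Obstacle term: push waypoints out of the inflated obstacle spheres.
        for c in obstacles:
            diff = path - c
            dist = np.linalg.norm(diff, axis=1, keepdims=True)
            inside = (dist < radius + clearance).flatten()
            grad[inside] -= w_obs * diff[inside] / np.maximum(dist[inside], 1e-6)
        grad[0] = grad[-1] = 0.0       # keep start and goal fixed
        path -= step * grad
    return path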

  5. Book ; Online: Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain

    Joshi, Bharat / Rahman, Sharmin / Kalaitzakis, Michail / Cain, Brennan / Johnson, James / Xanthidis, Marios / Karapetyan, Nare / Hernandez, Alan / Li, Alberto Quattrini / Vitzilaios, Nikolaos / Rekleitis, Ioannis

    2019  

    Abstract A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data together. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online.
    Keywords Computer Science - Robotics
    Subject code 004
    Publishing date 2019-04-03
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

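Comparisons like the one above are commonly reported as absolute trajectory error (ATE) after rigidly aligning each estimate to the ground truth. The sketch below computes the ATE RMSE using a Umeyama/Kabsch alignment; this reflects standard practice, not necessarily the exact evaluation protocol of the paper.

# Hedged sketch: absolute trajectory error after optimal rigid alignment.
import numpy as np

def ate_rmse(estimate, ground_truth):
    """RMSE between two trajectories after rotation + translation alignment.

    estimate, ground_truth : (N, 3) arrays of time-synchronized positions
    """
    mu_e, mu_g = estimate.mean(axis=0), ground_truth.mean(axis=0)
    E, G = estimate - mu_e, ground_truth - mu_g
    # Umeyama / Kabsch: rotation that best maps the centered estimate onto the ground truth.
    U, _, Vt = np.linalg.svd(G.T @ E)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    aligned = (R @ E.T).T + mu_g
    return float(np.sqrt(np.mean(np.sum((aligned - ground_truth) ** 2, axis=1))))

# Each package's estimated trajectory would be passed through ate_rmse against the
# reference trajectory of a dataset, and the resulting errors tabulated per package.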
