LIVIVO - The Search Portal for Life Sciences


Search results

Results 1 - 10 of 22


  1. Article ; Online: Real-Time Omnidirectional Stereo Rendering: Generating 360° Surround-View Panoramic Images for Comfortable Immersive Viewing.

    Marrinan, Thomas / Papka, Michael E

    IEEE transactions on visualization and computer graphics

    2021  Volume 27, Issue 5, Page(s) 2587–2596

    Abstract Surround-view panoramic images and videos have become a popular form of media for interactive viewing on mobile devices and virtual reality headsets. Viewing such media provides a sense of immersion by allowing users to control their view direction and experience an entire environment. When using a virtual reality headset, the level of immersion can be improved by leveraging stereoscopic capabilities. Stereoscopic images are generated in pairs, one for the left eye and one for the right eye, and result in providing an important depth cue for the human visual system. For computer generated imagery, rendering proper stereo pairs is well known for a fixed view. However, it is much more difficult to create omnidirectional stereo pairs for a surround-view projection that work well when looking in any direction. One major drawback of traditional omnidirectional stereo images is that they suffer from binocular misalignment in the peripheral vision as a user's view direction approaches the zenith / nadir (north / south pole) of the projection sphere. This paper presents a real-time geometry-based approach for omnidirectional stereo rendering that fits into the standard rendering pipeline. Our approach includes tunable parameters that enable pole merging - a reduction in the stereo effect near the poles that can minimize binocular misalignment. Results from a user study indicate that pole merging reduces visual fatigue and discomfort associated with binocular misalignment without inhibiting depth perception.
    MeSH term(s) Algorithms ; Computer Graphics ; Imaging, Three-Dimensional/methods ; Photogrammetry/methods ; Virtual Reality
    Language English
    Publishing date 2021-04-15
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0506
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2021.3067780
    Database MEDical Literature Analysis and Retrieval System OnLINE
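The pole-merging technique described in this record's abstract (attenuating the stereo effect as the view direction approaches the zenith or nadir) can be illustrated with a minimal sketch. The thresholds, the smoothstep falloff, and the function names below are hypothetical reconstructions of the general idea, not the paper's actual parameterization:

```python
import math

def pole_merge_scale(latitude_rad,
                     merge_start=math.radians(60),
                     merge_end=math.radians(80)):
    """Scale factor for the interocular offset: 1.0 in the equatorial band,
    smoothly falling to 0.0 as the view direction nears a pole."""
    a = abs(latitude_rad)
    if a <= merge_start:
        return 1.0
    if a >= merge_end:
        return 0.0
    t = (a - merge_start) / (merge_end - merge_start)
    return 1.0 - (3 * t * t - 2 * t * t * t)  # smoothstep falloff

def eye_offset(longitude_rad, latitude_rad, ipd=0.064):
    """Half-IPD eye offset tangent to the viewing circle in the horizontal
    plane, attenuated near the poles to reduce binocular misalignment."""
    s = 0.5 * ipd * pole_merge_scale(latitude_rad)
    return (-math.sin(longitude_rad) * s, math.cos(longitude_rad) * s)
```

At the equator the full half-IPD offset applies; past the merge band the left and right eye rays coincide, collapsing to a monoscopic projection at the poles.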

  2. Book ; Online: Color Maker

    Salvi, Amey / Lu, Kecheng / Papka, Michael E. / Wang, Yunhai / Reda, Khairi

    a Mixed-Initiative Approach to Creating Accessible Color Maps

    2024  

    Abstract Quantitative data is frequently represented using color, yet designing effective color mappings is a challenging task, requiring one to balance perceptual standards with personal color preference. Current design tools either overwhelm novices with complexity or offer limited customization options. We present ColorMaker, a mixed-initiative approach for creating colormaps. ColorMaker combines fluid user interaction with real-time optimization to generate smooth, continuous color ramps. Users specify their loose color preferences while leaving the algorithm to generate precise color sequences, meeting both designer needs and established guidelines. ColorMaker can create new colormaps, including designs accessible for people with color-vision deficiencies, starting from scratch or with only partial input, thus supporting ideation and iterative refinement. We show that our approach can generate designs with similar or superior perceptual characteristics to standard colormaps. A user study demonstrates how designers of varying skill levels can use this tool to create custom, high-quality colormaps. ColorMaker is available at https://colormaker.org

    Comment: To appear at the ACM CHI '24 Conference on Human Factors in Computing Systems
    Keywords Computer Science - Human-Computer Interaction
    Subject code 004
    Publishing date 2024-01-26
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
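The abstract above describes generating smooth color ramps that meet established guidelines. As a rough illustration only: ColorMaker itself optimizes in a perceptual color space, whereas this sketch interpolates anchor colors linearly in RGB and checks one common guideline (monotonic lightness) with a crude luma proxy. All names and thresholds here are invented for illustration:

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB triples in [0, 1]."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def ramp(anchors, n=256):
    """Piecewise-linear color ramp through user-chosen anchor colors."""
    out = []
    segs = len(anchors) - 1
    for k in range(n):
        x = k / (n - 1) * segs
        i = min(int(x), segs - 1)
        out.append(lerp(anchors[i], anchors[i + 1], x - i))
    return out

def luma(rgb):
    """BT.709 luma as a cheap stand-in for perceptual lightness."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def lightness_monotonic(colors):
    """A common colormap guideline: lightness should vary monotonically."""
    l = [luma(c) for c in colors]
    return (all(l[i] <= l[i + 1] for i in range(len(l) - 1)) or
            all(l[i] >= l[i + 1] for i in range(len(l) - 1)))
```

A black-to-white ramp passes the check; a diverging black-white-black ramp fails it, which is the kind of constraint a mixed-initiative optimizer would enforce while honoring the user's loose color preferences.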

  3. Article: The U.S. High-Performance Computing Consortium in the Fight Against COVID-19

    Hack, James J. / Papka, Michael E.

    Computing in Science & Engineering

    Abstract U.S. computing leaders, including Department of Energy National Laboratories, have partnered with universities, government agencies, and the private sector to research responses to COVID-19, providing an unprecedented collection of resources that include some of the fastest computers in the world. For HPC users, these leadership machines will drive the AI to accelerate the discovery of promising treatments, enable at-scale simulations to understand the virus's protein structure and attack mechanisms, and help inform policymakers to deploy resources effectively.
    Keywords covid19
    Publisher WHO
    Document type Article
    Note WHO #Covidence: #867976
    Database COVID19

  4. Book ; Online: Distributed Neural Representation for Reactive in situ Visualization

    Wu, Qi / Insley, Joseph A. / Mateevitsi, Victor A. / Rizzi, Silvio / Papka, Michael E. / Ma, Kwan-Liu

    2023  

    Abstract In situ visualization and steering of computational modeling can be effectively achieved using reactive programming, which leverages temporal abstraction and data caching mechanisms to create dynamic workflows. However, implementing a temporal cache for large-scale simulations can be challenging. Implicit neural networks have proven effective in compressing large volume data. However, their application to distributed data has yet to be fully explored. In this work, we develop an implicit neural representation for distributed volume data and incorporate it into the DIVA reactive programming system. This implementation enables us to build an in situ temporal caching system with a capacity 100 times larger than previously achieved. We integrate our implementation into the Ascent infrastructure and evaluate its performance using real-world simulations.
    Keywords Computer Science - Distributed ; Parallel ; and Cluster Computing ; Computer Science - Artificial Intelligence
    Publishing date 2023-03-27
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  5. Book ; Online: A Multi-Level, Multi-Scale Visual Analytics Approach to Assessment of Multifidelity HPC Systems

    Shilpika / Lusch, Bethany / Emani, Murali / Simini, Filippo / Vishwanath, Venkatram / Papka, Michael E. / Ma, Kwan-Liu

    2023  

    Abstract The ability to monitor and interpret hardware system events and behaviors is crucial to improving the robustness and reliability of these systems, especially in a supercomputing facility. The growing complexity and scale of these systems demand an increase in monitoring data collected at multiple fidelity levels and varying temporal resolutions. In this work, we aim to build a holistic analytical system that helps make sense of such massive data, mainly the hardware logs, job logs, and environment logs collected from disparate subsystems and components of a supercomputer system. This end-to-end log analysis system, coupled with visual analytics support, allows users to glean and promptly extract supercomputer usage and error patterns at varying temporal and spatial resolutions. We use multiresolution dynamic mode decomposition (mrDMD), a technique that depicts high-dimensional data as correlated spatial-temporal variation patterns, or modes, to extract variation patterns isolated at specified frequencies. Our improvements to the mrDMD algorithm help promptly reveal useful information in the massive environment log dataset, which is then associated with the processed hardware and job log datasets using our visual analytics system. Furthermore, our system can identify the usage and error patterns filtered at user, project, and subcomponent levels. We exemplify the effectiveness of our approach with two use scenarios on the Cray XC40 supercomputer.
    Keywords Computer Science - Human-Computer Interaction ; Computer Science - Computer Vision and Pattern Recognition
    Subject code 006
    Publishing date 2023-06-15
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  6. Book ; Online: A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators

    Emani, Murali / Foreman, Sam / Sastry, Varuni / Xie, Zhen / Raskar, Siddhisanket / Arnold, William / Thakur, Rajeev / Vishwanath, Venkatram / Papka, Michael E.

    2023  

    Abstract Artificial intelligence (AI) methods have become critical in scientific applications to help accelerate scientific discovery. Large language models (LLMs) are being considered as a promising approach to address some of the challenging problems because of their superior generalization capabilities across domains. The effectiveness of the models and the accuracy of the applications are contingent upon their efficient execution on the underlying hardware infrastructure. Specialized AI accelerator hardware systems have recently become available for accelerating AI applications. However, the comparative performance of these AI accelerators on large language models has not been previously studied. In this paper, we systematically study LLMs on multiple AI accelerators and GPUs and evaluate their performance characteristics for these models. We evaluate these systems with (i) a micro-benchmark using a core transformer block, (ii) a GPT-2 model, and (iii) an LLM-driven science use case, GenSLM. We present our findings and analyses of the models' performance to better understand the intrinsic capabilities of AI accelerators. Furthermore, our analysis takes into account key factors such as sequence lengths, scaling behavior, sparsity, and sensitivity to gradient accumulation steps.
    Keywords Computer Science - Performance ; Computer Science - Artificial Intelligence ; Computer Science - Hardware Architecture ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-10-06
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
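The micro-benchmark methodology this record describes (timing a core kernel, then reporting throughput for given sequence lengths and batch sizes) follows a standard shape. The harness below is a generic sketch of that pattern, not the paper's actual benchmark code; the warm-up count, iteration count, and function names are illustrative choices:

```python
import time

def benchmark(fn, *, warmup=3, iters=10):
    """Median wall-clock time of fn() after warm-up runs -- the usual shape
    of a micro-benchmark for a transformer block or any other kernel."""
    for _ in range(warmup):
        fn()                      # warm caches / JIT before measuring
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    times.sort()
    return times[len(times) // 2]

def throughput(seq_len, batch, seconds):
    """Tokens processed per second for one step over a (batch, seq_len) input."""
    return batch * seq_len / seconds
```

Reporting the median rather than the mean makes the measurement robust to scheduler jitter, which matters when comparing accelerators whose step times differ by small margins.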

  7. Article ; Online: Measuring Cities with Software-Defined Sensors

    Catlett, Charlie / Beckman, Pete / Ferrier, Nicola / Nusbaum, Howard / Papka, Michael E. / Berman, Marc G. / Sankaran, Rajesh

    Journal of Social Computing

    2020  Volume 1, Issue 1, Page(s) 14-27

    Abstract The Chicago Array of Things (AoT) project, funded by the US National Science Foundation, created an experimental, urban-scale measurement capability to support diverse scientific studies. Initially conceived as a traditional sensor network, collaborations with many science communities guided the project to design a system that is remotely programmable to implement Artificial Intelligence (AI) within the devices—at the "edge" of the network—as a means for measuring urban factors that heretofore had only been possible with human observers, such as human behavior including social interaction. The concept of "software-defined sensors" emerged from these design discussions, opening new possibilities, such as stronger privacy protections and autonomous, adaptive measurements triggered by events or conditions. We provide examples of current and planned social and behavioral science investigations uniquely enabled by software-defined sensors as part of the SAGE project, an expanded follow-on effort that includes AoT.
    Keywords sensors ; edge computing ; computer vision ; urban science ; Electronic computers. Computer science ; QA75.5-76.95 ; Social sciences (General) ; H1-99
    Subject code 501
    Language English
    Publishing date 2020-09-01
    Publisher Tsinghua University Press
    Document type Article ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
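The "software-defined sensor" concept in this abstract, where raw samples are processed at the edge and only derived, privacy-preserving events leave the device, can be sketched in a few lines. The class, the detector callback, and the pedestrian-counting example are all hypothetical illustrations of the pattern, not AoT/SAGE code:

```python
class SoftwareDefinedSensor:
    """A sensor whose measurement is a program: raw samples are analyzed
    on-device and only derived events are retained or transmitted."""

    def __init__(self, detector):
        self.detector = detector  # callable: raw sample -> event or None
        self.events = []          # only aggregates/events, never raw data

    def ingest(self, raw_sample):
        event = self.detector(raw_sample)
        if event is not None:
            self.events.append(event)
        # raw_sample goes out of scope here: nothing raw is stored

# example: report a pedestrian count per frame instead of storing imagery
sensor = SoftwareDefinedSensor(
    lambda frame: {"pedestrians": frame["count"]} if frame["count"] > 0 else None
)
for frame in [{"count": 0}, {"count": 3}, {"count": 1}]:
    sensor.ingest(frame)
```

Because the "measurement" is software, the same hardware can be remotely reprogrammed to detect different events or to trigger adaptive sampling when a condition fires, which is the flexibility the abstract highlights.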

  8. Article ; Online: Linking scientific instruments and computation: Patterns, technologies, and experiences.

    Vescovi, Rafael / Chard, Ryan / Saint, Nickolaus D / Blaiszik, Ben / Pruyne, Jim / Bicer, Tekin / Lavens, Alex / Liu, Zhengchun / Papka, Michael E / Narayanan, Suresh / Schwarz, Nicholas / Chard, Kyle / Foster, Ian T

    Patterns (New York, N.Y.)

    2022  Volume 3, Issue 10, Page(s) 100606

    Abstract Powerful detectors at modern experimental facilities routinely collect data at multiple GB/s. Online analysis methods are needed to enable the collection of only interesting subsets of such massive data streams, such as by explicitly discarding some data elements or by directing instruments to relevant areas of experimental space. Thus, methods are required for configuring and running distributed computing pipelines - what we call flows - that link instruments, computers (e.g., for analysis, simulation, artificial intelligence [AI] model training), edge computing (e.g., for analysis), data stores, metadata catalogs, and high-speed networks. We review common patterns associated with such flows and describe methods for instantiating these patterns. We present experiences with the application of these methods to the processing of data from five different scientific instruments, each of which engages powerful computers for data inversion, model training, or other purposes. We also discuss implications of such methods for operators and users of scientific facilities.
    Language English
    Publishing date 2022-10-14
    Publishing country United States
    Document type Journal Article
    ISSN 2666-3899
    ISSN (online) 2666-3899
    DOI 10.1016/j.patter.2022.100606
    Database MEDical Literature Analysis and Retrieval System OnLINE
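The "flow" abstraction above (a pipeline linking instrument, analysis, and catalog steps) reduces, at its core, to function composition over a payload. This is only a toy sketch of that composition pattern; production systems of the kind the paper describes add data transfer, remote execution, retries, and provenance, and every step name below is invented:

```python
def run_flow(steps, payload):
    """Run a linear flow: each named step receives the previous step's output."""
    for name, step in steps:           # name is kept for logging/provenance
        payload = step(payload)
    return payload

# hypothetical instrument-to-catalog flow
flow = [
    ("acquire", lambda d: {**d, "frames": [1, 2, 3]}),
    ("analyze", lambda d: {**d, "mean": sum(d["frames"]) / len(d["frames"])}),
    ("catalog", lambda d: {**d, "cataloged": True}),
]
result = run_flow(flow, {"run_id": 42})
```

Expressing the pipeline as data (a list of named steps) rather than hard-wired calls is what makes the same pattern reusable across different instruments, which is the point the abstract makes about recurring flow patterns.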

  9. Book ; Online: Scaling Computational Fluid Dynamics

    Mateevitsi, Victor A. / Bode, Mathis / Ferrier, Nicola / Fischer, Paul / Göbbert, Jens Henrik / Insley, Joseph A. / Lan, Yu-Hsiang / Min, Misun / Papka, Michael E. / Patel, Saumil / Rizzi, Silvio / Windgassen, Jonathan

    In Situ Visualization of NekRS using SENSEI

    2023  

    Abstract In the realm of Computational Fluid Dynamics (CFD), the demand for memory and computation resources is extreme, necessitating the use of leadership-scale computing platforms for practical domain sizes. This intensive requirement renders traditional checkpointing methods ineffective due to the significant slowdown in simulations while saving state data to disk. As we progress towards exascale and GPU-driven High-Performance Computing (HPC) and confront larger problem sizes, the choice becomes increasingly stark: to compromise data fidelity or to reduce resolution. To navigate this challenge, this study advocates for the use of in situ analysis and visualization techniques. These allow more frequent data "snapshots" to be taken directly from memory, thus avoiding the need for disruptive checkpointing. We detail our approach of instrumenting NekRS, a GPU-focused thermal-fluid simulation code employing the spectral element method (SEM), and describe varied in situ and in transit strategies for data rendering. Additionally, we provide concrete scientific use-cases and report on runs performed on Polaris, Argonne Leadership Computing Facility's (ALCF) 44 Petaflop supercomputer, and Jülich Wizard for European Leadership Science (JUWELS) Booster, Jülich Supercomputing Centre's (JSC) 71 Petaflop High Performance Computing (HPC) system, offering practical insight into the implications of our methodology.
    Keywords Computer Science - Distributed ; Parallel ; and Cluster Computing ; Computer Science - Performance
    Subject code 000
    Publishing date 2023-12-15
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

  10. Book ; Online: Balsam

    Salim, Michael A. / Uram, Thomas D. / Childers, J. Taylor / Balaprakash, Prasanna / Vishwanath, Venkatram / Papka, Michael E.

    Automated Scheduling and Execution of Dynamic, Data-Intensive HPC Workflows

    2019  

    Abstract We introduce the Balsam service to manage high-throughput task scheduling and execution on supercomputing systems. Balsam allows users to populate a task database with a variety of tasks ranging from simple independent tasks to dynamic multi-task workflows. With abstractions for the local resource scheduler and MPI environment, Balsam dynamically packages tasks into ensemble jobs and manages their scheduling lifecycle. The ensembles execute in a pilot "launcher" which (i) ensures concurrent, load-balanced execution of arbitrary serial and parallel programs with heterogeneous processor requirements, (ii) requires no modification of user applications, (iii) is tolerant of task-level faults and provides several options for error recovery, (iv) stores provenance data (e.g., task history, error logs) in the database, and (v) supports dynamic workflows, in which tasks are created or killed at runtime. Here, we present the design and Python implementation of the Balsam service and launcher. The efficacy of this system is illustrated using two case studies: hyperparameter optimization of deep neural networks, and high-throughput single-point quantum chemistry calculations. We find that the unique combination of flexible job-packing and automated scheduling with dynamic (pilot-managed) execution facilitates excellent resource utilization. The scripting overheads typically needed to manage resources and launch workflows on supercomputers are substantially reduced, accelerating workflow development and execution.

    Comment: SC '18: 8th Workshop on Python for High-Performance and Scientific Computing (PyHPC 2018)
    Keywords Computer Science - Distributed ; Parallel ; and Cluster Computing
    Subject code 004
    Publishing date 2019-09-18
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
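The "job-packing" step highlighted in this abstract, bundling many small tasks into a few large ensemble jobs so the batch scheduler sees well-utilized allocations, is essentially bin packing. The greedy first-fit-decreasing sketch below captures that flavor only; Balsam's real packing logic, field names, and policies are not reproduced here:

```python
def pack_tasks(tasks, nodes_per_job):
    """Greedily pack tasks (each with a node requirement) into ensemble jobs
    of at most nodes_per_job nodes, largest tasks first (first-fit decreasing)."""
    jobs = []
    for task in sorted(tasks, key=lambda t: -t["nodes"]):
        for job in jobs:
            if job["used"] + task["nodes"] <= nodes_per_job:
                job["tasks"].append(task["name"])
                job["used"] += task["nodes"]
                break
        else:  # no existing ensemble has room: open a new one
            jobs.append({"tasks": [task["name"]], "used": task["nodes"]})
    return jobs
```

Packing three tasks needing 3, 2, and 1 nodes into 4-node jobs yields two ensembles instead of three separate scheduler submissions, which is where the reduction in scripting and queueing overhead comes from.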
