LIVIVO - The Search Portal for Life Sciences

Search results

Results 1–10 of 167

  1. Article ; Online: Pushing Visualization Research Frontiers: Essential Topics Not Addressed by Machine Learning.

    Ma, Kwan-Liu / Rhyne, Theresa-Marie

    IEEE computer graphics and applications

    2023  Volume 43, Issue 1, Page(s) 97–102

    Abstract Unsurprisingly, we have observed tremendous interests and efforts in the application of machine learning (ML) to many data visualization problems, which are having success and leading to new capabilities. However, there is a space in visualization research that is either completely or partly agnostic to ML that should not be lost in this current VIS+ML movement. The research that this space can offer is imperative to the growth of our field and it is important that we remind ourselves to invest in this research as well as show what it could bear. This Viewpoints article provides my personal take on a few research challenges and opportunities that lie ahead that may not be directly addressable by ML.
    Language English
    Publishing date 2023-04-06
    Publishing country United States
    Document type Journal Article
    ISSN 1558-1756
    ISSN (online) 1558-1756
    DOI 10.1109/MCG.2022.3225692
    Database MEDical Literature Analysis and Retrieval System OnLINE

  2. Article ; Online: Communicating Uncertainty and Risk in Air Quality Maps.

    Preston, Annie / Ma, Kwan-Liu

    IEEE transactions on visualization and computer graphics

    2023  Volume 29, Issue 9, Page(s) 3746–3757

    Abstract Environmental sensors provide crucial data for understanding our surroundings. For example, air quality maps based on sensor readings help users make decisions to mitigate the effects of pollution on their health. Standard maps show readings from individual sensors or colored contours indicating estimated pollution levels. However, showing a single estimate may conceal uncertainty and lead to underestimation of risk, while showing sensor data yields varied interpretations. We present several visualizations of uncertainty in air quality maps, including a frequency-framing "dotmap" and small multiples, and we compare them with standard contour and sensor-based maps. In a user study, we find that including uncertainty in maps has a significant effect on how much users would choose to reduce physical activity, and that people make more cautious decisions when using uncertainty-aware maps. Additionally, we analyze think-aloud transcriptions from the experiment to understand more about how the representation of uncertainty influences people's decision-making. Our results suggest ways to design maps of sensor data that can encourage certain types of reasoning, yield more consistent responses, and convey risk better than standard maps.
    Language English
    Publishing date 2023-08-01
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0506
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2022.3171443
    Database MEDical Literature Analysis and Retrieval System OnLINE
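
    The frequency-framing "dotmap" mentioned in the abstract above can be illustrated with a minimal sketch: each grid cell is drawn as a cluster of dots, and the fraction of dots colored "unhealthy" encodes the probability that the cell's estimated pollution exceeds a health threshold. The grid values, threshold, normal error model, and dot count below are assumptions for illustration only, not the authors' design.

    # Illustrative frequency-framing dotmap for an uncertainty-aware air quality map.
    # All values here are placeholders, not the paper's data or implementation.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    mean = rng.uniform(5, 60, size=(10, 10))   # hypothetical PM2.5 estimates per cell
    std = rng.uniform(2, 15, size=(10, 10))    # hypothetical estimate uncertainty
    threshold = 35.0                           # assumed "unhealthy" cutoff
    dots_per_cell = 20

    # Probability of exceeding the threshold under a normal error model.
    p_exceed = 1.0 - norm.cdf(threshold, loc=mean, scale=std)

    fig, ax = plt.subplots()
    for i in range(mean.shape[0]):
        for j in range(mean.shape[1]):
            xs = j + rng.uniform(0.1, 0.9, dots_per_cell)
            ys = i + rng.uniform(0.1, 0.9, dots_per_cell)
            unhealthy = rng.random(dots_per_cell) < p_exceed[i, j]
            ax.scatter(xs[unhealthy], ys[unhealthy], s=6, color="firebrick")
            ax.scatter(xs[~unhealthy], ys[~unhealthy], s=6, color="lightgray")
    ax.set_aspect("equal")
    plt.show()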

  3. Article ; Online: Photon Field Networks for Dynamic Real-Time Volumetric Global Illumination.

    Bauer, David / Wu, Qi / Ma, Kwan-Liu

    IEEE transactions on visualization and computer graphics

    2023  Volume 30, Issue 1, Page(s) 975–985

    Abstract Volume data is commonly found in many scientific disciplines, like medicine, physics, and biology. Experts rely on robust scientific visualization techniques to extract valuable insights from the data. Recent years have shown path tracing to be the preferred approach for volumetric rendering, given its high levels of realism. However, real-time volumetric path tracing often suffers from stochastic noise and long convergence times, limiting interactive exploration. In this paper, we present a novel method to enable real-time global illumination for volume data visualization. We develop Photon Field Networks-a phase-function-aware, multi-light neural representation of indirect volumetric global illumination. The fields are trained on multi-phase photon caches that we compute a priori. Training can be done within seconds, after which the fields can be used in various rendering tasks. To showcase their potential, we develop a custom neural path tracer, with which our photon fields achieve interactive framerates even on large datasets. We conduct in-depth evaluations of the method's performance, including visual quality, stochastic noise, inference and rendering speeds, and accuracy regarding illumination and phase function awareness. Results are compared to ray marching, path tracing and photon mapping. Our findings show that Photon Field Networks can faithfully represent indirect global illumination within the boundaries of the trained phase spectrum while exhibiting less stochastic noise and rendering at a significantly faster rate than traditional methods.
    Language English
    Publishing date 2023-12-27
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0506
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2023.3327107
    Database MEDical Literature Analysis and Retrieval System OnLINE
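
    A minimal sketch of the core idea above, under toy assumptions: a small MLP is fit to precomputed photon-cache samples so that it maps a sample position and direction to indirect radiance. The network size, input layout, and the random placeholder cache are assumptions; the paper's phase-function-aware, multi-light architecture is more elaborate than this.

    # Toy "photon field": an MLP from (position, direction) to cached indirect radiance.
    # Illustrative stand-in only, not the authors' Photon Field Network architecture.
    import torch
    import torch.nn as nn

    class PhotonField(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(6, hidden), nn.ReLU(),   # input: (x, y, z, dx, dy, dz)
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),              # output: RGB indirect radiance
            )

        def forward(self, pos_dir):
            return self.net(pos_dir)

    # Fit against a hypothetical photon cache of (position+direction, radiance) pairs.
    cache_inputs = torch.rand(10000, 6)            # placeholder photon samples
    cache_radiance = torch.rand(10000, 3)
    model = PhotonField()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        pred = model(cache_inputs)
        loss = torch.mean((pred - cache_radiance) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()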

  4. Article ; Online: Character-Oriented Design for Visual Data Storytelling.

    Dasu, Keshav / Kuo, Yun-Hsin / Ma, Kwan-Liu

    IEEE transactions on visualization and computer graphics

    2023  Volume 30, Issue 1, Page(s) 98–108

    Abstract When telling a data story, an author has an intention they seek to convey to an audience. This intention can be of many forms such as to persuade, to educate, to inform, or even to entertain. In addition to expressing their intention, the story plot must balance being consumable and enjoyable while preserving scientific integrity. In data stories, numerous methods have been identified for constructing and presenting a plot. However, there is an opportunity to expand how we think and create the visual elements that present the story. Stories are brought to life by characters; often they are what make a story captivating, enjoyable, memorable, and facilitate following the plot until the end. Through the analysis of 160 existing data stories, we systematically investigate and identify distinguishable features of characters in data stories, and we illustrate how they feed into the broader concept of "character-oriented design". We identify the roles and visual representations data characters assume as well as the types of relationships these roles have with one another. We identify characteristics of antagonists as well as define conflict in data stories. We find the need for an identifiable central character that the audience latches on to in order to follow the narrative and identify their visual representations. We then illustrate "character-oriented design" by showing how to develop data characters with common data story plots. With this work, we present a framework for data characters derived from our analysis; we then offer our extension to the data storytelling process using character-oriented design. To access our supplemental materials please visit https://chaorientdesignds.github.io/.
    Language English
    Publishing date 2023-12-25
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0506
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2023.3326578
    Database MEDical Literature Analysis and Retrieval System OnLINE

  5. Book ; Online: A Visual Analytics Design for Connecting Healthcare Team Communication to Patient Outcomes

    Lu, Hsiao-Ying / Li, Yiran / Ma, Kwan-Liu

    2024  

    Abstract Communication among healthcare professionals (HCPs) is crucial for the quality of patient treatment. Surrounding each patient's treatment, communication among HCPs can be examined as temporal networks, constructed from Electronic Health Record (EHR) access logs. This paper introduces a visual analytics system designed to study the effectiveness and efficiency of temporal communication networks mediated by the EHR system. We present a method that associates network measures with patient survival outcomes and devises effectiveness metrics based on these associations. To analyze communication efficiency, we extract the latencies and frequencies of EHR accesses. Our visual analytics system is designed to assist in inspecting and understanding the composed communication effectiveness metrics and to enable the exploration of communication efficiency by encoding latencies and frequencies in an information flow diagram. We demonstrate and evaluate our system through multiple case studies and an expert review.
    Keywords Computer Science - Social and Information Networks ; Computer Science - Human-Computer Interaction ; Computer Science - Machine Learning
    Publishing date 2024-01-08
Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)
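
    The network-construction step described above can be sketched as follows, using assumed field names and a hypothetical one-hour co-access window: clinicians who access the same patient's chart within the window are linked, and a simple network measure is computed as one candidate effectiveness metric. The rows, window, and measure are illustrative assumptions, not the paper's data or metrics.

    # Build a per-patient HCP communication network from EHR access-log tuples
    # (patient, clinician, timestamp in seconds). Placeholder data only.
    from itertools import combinations
    import networkx as nx

    WINDOW = 3600  # assumed co-access window in seconds
    log = [
        ("patient_1", "nurse_a", 0),
        ("patient_1", "doctor_b", 1200),
        ("patient_1", "pharmacist_c", 5000),
        ("patient_1", "doctor_b", 5300),
    ]

    G = nx.Graph()
    accesses = sorted(log, key=lambda row: row[2])
    for (p1, hcp1, t1), (p2, hcp2, t2) in combinations(accesses, 2):
        if p1 == p2 and hcp1 != hcp2 and abs(t2 - t1) <= WINDOW:
            G.add_edge(hcp1, hcp2)

    # One candidate network measure to associate with patient outcomes.
    print(nx.density(G))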

  6. Article ; Online: P6: A Declarative Language for Integrating Machine Learning in Visual Analytics.

    Li, Jianping Kelvin / Ma, Kwan-Liu

    IEEE transactions on visualization and computer graphics

    2021  Volume 27, Issue 2, Page(s) 380–389

    Abstract We present P6, a declarative language for building high performance visual analytics systems through its support for specifying and integrating machine learning and interactive visualization methods. As data analysis methods based on machine learning and artificial intelligence continue to advance, a visual analytics solution can leverage these methods for better exploiting large and complex data. However, integrating machine learning methods with interactive visual analysis is challenging. Existing declarative programming libraries and toolkits for visualization lack support for coupling machine learning methods. By providing a declarative language for visual analytics, P6 can empower more developers to create visual analytics applications that combine machine learning and visualization methods for data analysis and problem solving. Through a variety of example applications, we demonstrate P6's capabilities and show the benefits of using declarative specifications to build visual analytics systems. We also identify and discuss the research opportunities and challenges for declarative visual analytics.
    Keywords covid19
    Language English
    Publishing date 2021-01-28
    Publishing country United States
    Document type Journal Article ; Research Support, U.S. Gov't, Non-P.H.S.
    ISSN 1941-0506
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2020.3030453
    Database MEDical Literature Analysis and Retrieval System OnLINE
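
    To make the idea of a declarative ML-plus-visualization specification concrete, here is a small Python analogue with a toy interpreter. This is explicitly not P6's syntax or API; the spec keys, methods, and interpreter below are assumptions made only to illustrate coupling an ML step with a visualization step in one declarative description.

    # Toy declarative spec driving an ML + visualization pipeline (not P6).
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    spec = {
        "data": np.random.rand(300, 5),
        "analysis": [{"method": "PCA", "components": 2},
                     {"method": "KMeans", "clusters": 4}],
        "plot": {"mark": "point", "x": 0, "y": 1, "color": "cluster"},
    }

    def run(spec):
        data = spec["data"]
        labels = None
        for step in spec["analysis"]:
            if step["method"] == "PCA":
                data = PCA(n_components=step["components"]).fit_transform(data)
            elif step["method"] == "KMeans":
                labels = KMeans(n_clusters=step["clusters"], n_init=10).fit_predict(data)
        p = spec["plot"]
        plt.scatter(data[:, p["x"]], data[:, p["y"]], c=labels, s=10)
        plt.show()

    run(spec)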

  7. Article ; Online: Network Comparison with Interpretable Contrastive Network Representation Learning.

    Fujiwara, Takanori / Zhao, Jian / Chen, Francine / Yu, Yaoliang / Ma, Kwan-Liu

    Journal of data science, statistics, and visualisation

    2024  Volume 2, Issue 5

    Abstract Identifying unique characteristics in a network through comparison with another network is an essential network analysis task. For example, with networks of protein interactions obtained from normal and cancer tissues, we can discover unique types of interactions in cancer tissues. This analysis task could be greatly assisted by contrastive learning, which is an emerging analysis approach to discover salient patterns in one dataset relative to another. However, existing contrastive learning methods cannot be directly applied to networks as they are designed only for high-dimensional data analysis. To address this problem, we introduce a new analysis approach called contrastive network representation learning.
    Language English
    Publishing date 2024-01-22
    Publishing country Netherlands
    Document type Journal Article
    ISSN 2773-0689
    ISSN (online) 2773-0689
    DOI 10.52933/jdssv.v2i5.56
    Database MEDical Literature Analysis and Retrieval System OnLINE
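
    The contrast step underlying the approach above can be illustrated with a simplified stand-in, contrastive PCA: find directions with high variance in a target dataset but low variance in a background dataset. The paper operates on learned network representations and adds interpretability; the random feature matrices and the alpha value below are placeholders.

    # Contrastive PCA as a simplified illustration of the contrast step.
    import numpy as np

    def contrastive_directions(target, background, alpha=1.0, k=2):
        # Covariances of target (e.g., cancer-tissue network features) and
        # background (normal-tissue network features).
        c_t = np.cov(target, rowvar=False)
        c_b = np.cov(background, rowvar=False)
        # Top eigenvectors of C_target - alpha * C_background give directions
        # salient in the target relative to the background.
        vals, vecs = np.linalg.eigh(c_t - alpha * c_b)
        order = np.argsort(vals)[::-1]
        return vecs[:, order[:k]]

    rng = np.random.default_rng(1)
    target = rng.normal(size=(200, 10))      # hypothetical per-node features, network A
    background = rng.normal(size=(200, 10))  # hypothetical per-node features, network B
    dirs = contrastive_directions(target, background)
    projected = target @ dirs                # 2D embedding highlighting unique structure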

  8. Article ; Online: Interactive Volume Visualization Via Multi-Resolution Hash Encoding Based Neural Representation.

    Wu, Qi / Bauer, David / Doyle, Michael J / Ma, Kwan-Liu

    IEEE transactions on visualization and computer graphics

    2023  Volume PP

    Abstract Implicit neural networks have demonstrated immense potential in compressing volume data for visualization. However, despite their advantages, the high costs of training and inference have thus far limited their application to offline data processing and non-interactive rendering. In this paper, we present a novel solution that leverages modern GPU tensor cores, a well-implemented CUDA machine learning framework, an optimized global-illumination-capable volume rendering algorithm, and a suitable acceleration data structure to enable real-time direct ray tracing of volumetric neural representations. Our approach produces high-fidelity neural representations with a peak signal-to-noise ratio (PSNR) exceeding 30 dB, while reducing their size by up to three orders of magnitude. Remarkably, we show that the entire training step can fit within a rendering loop, bypassing the need for pre-training. Additionally, we introduce an efficient out-of-core training strategy to support extreme-scale volume data, making it possible for our volumetric neural representation training to scale up to terascale on a workstation with an NVIDIA RTX 3090 GPU. Our method significantly outperforms state-of-the-art techniques in terms of training time, reconstruction quality, and rendering performance, making it an ideal choice for applications where fast and accurate visualization of large-scale volume data is paramount.
    Language English
    Publishing date 2023-07-07
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0506
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2023.3293121
    Database MEDical Literature Analysis and Retrieval System OnLINE
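
    The quality figure quoted above (PSNR exceeding 30 dB) is conventionally computed as follows; the random volume and noise level here are placeholders, not the paper's data.

    # Peak signal-to-noise ratio between a reference volume and a reconstruction.
    import numpy as np

    def psnr(reference, reconstruction, data_range=1.0):
        mse = np.mean((reference - reconstruction) ** 2)
        return 20 * np.log10(data_range) - 10 * np.log10(mse)

    rng = np.random.default_rng(0)
    volume = rng.random((64, 64, 64)).astype(np.float32)            # placeholder volume
    reconstruction = volume + rng.normal(0, 0.01, volume.shape).astype(np.float32)
    print(f"PSNR: {psnr(volume, reconstruction):.1f} dB")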

  9. Article ; Online: A Deep Generative Model for Reordering Adjacency Matrices.

    Kwon, Oh-Hyun / Kao, Chiun-How / Chen, Chun-Houh / Ma, Kwan-Liu

    IEEE transactions on visualization and computer graphics

    2023  Volume 29, Issue 7, Page(s) 3195–3208

    Abstract Depending on the node ordering, an adjacency matrix can highlight distinct characteristics of a graph. Deriving a "proper" node ordering is thus a critical step in visualizing a graph as an adjacency matrix. Users often try multiple matrix reorderings using different methods until they find one that meets the analysis goal. However, this trial-and-error approach is laborious and disorganized, which is especially challenging for novices. This paper presents a technique that enables users to effortlessly find a matrix reordering they want. Specifically, we design a generative model that learns a latent space of diverse matrix reorderings of the given graph. We also construct an intuitive user interface from the learned latent space by creating a map of various matrix reorderings. We demonstrate our approach through quantitative and qualitative evaluations of the generated reorderings and learned latent spaces. The results show that our model is capable of learning a latent space of diverse matrix reorderings. Most existing research in this area generally focused on developing algorithms that can compute "better" matrix reorderings for particular circumstances. This paper introduces a fundamentally new approach to matrix visualization of a graph, where a machine learning model learns to generate diverse matrix reorderings of a graph.
    Language English
    Publishing date 2023-05-26
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0506
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2022.3153838
    Database MEDical Literature Analysis and Retrieval System OnLINE
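
    The basic operation the paper's generative model explores, reordering an adjacency matrix by a node permutation, looks like this in a minimal sketch. The random graph and random permutation are placeholders; the paper learns a latent space of reorderings rather than sampling permutations at random.

    # Reorder an adjacency matrix with a node permutation.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    A = rng.integers(0, 2, size=(n, n))
    A = np.triu(A, 1)
    A = A + A.T                           # symmetric adjacency matrix, undirected graph

    perm = rng.permutation(n)             # one candidate node ordering
    A_reordered = A[np.ix_(perm, perm)]   # same graph, different matrix appearance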

  10. Article ; Online: FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks.

    Bauer, David / Wu, Qi / Ma, Kwan-Liu

    IEEE transactions on visualization and computer graphics

    2022  Volume 29, Issue 1, Page(s) 515–525

    Abstract Volume data is found in many important scientific and engineering applications. Rendering this data for visualization at high quality and interactive rates for demanding applications such as virtual reality is still not easily achievable even using professional-grade hardware. We introduce FoVolNet-a method to significantly increase the performance of volume data visualization. We develop a cost-effective foveated rendering pipeline that sparsely samples a volume around a focal point and reconstructs the full-frame using a deep neural network. Foveated rendering is a technique that prioritizes rendering computations around the user's focal point. This approach leverages properties of the human visual system, thereby saving computational resources when rendering data in the periphery of the user's field of vision. Our reconstruction network combines direct and kernel prediction methods to produce fast, stable, and perceptually convincing output. With a slim design and the use of quantization, our method outperforms state-of-the-art neural reconstruction techniques in both end-to-end frame times and visual quality. We conduct extensive evaluations of the system's rendering performance, inference speed, and perceptual properties, and we provide comparisons to competing neural image reconstruction techniques. Our test results show that FoVolNet consistently achieves significant time saving over conventional rendering while preserving perceptual quality.
    Language English
    Publishing date 2022-12-19
    Publishing country United States
    Document type Journal Article
    ISSN 1941-0506
    ISSN (online) 1941-0506
    DOI 10.1109/TVCG.2022.3209498
    Database MEDical Literature Analysis and Retrieval System OnLINE
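
    The sparse, gaze-centered sampling stage described above can be sketched as a probability mask that decays with distance from the focal point. The Gaussian falloff and its width are assumptions; FoVolNet's neural reconstruction of the full frame is not shown here.

    # Foveated sampling mask: dense near an assumed gaze position, sparse in the periphery.
    import numpy as np

    height, width = 256, 256
    focus = np.array([128.0, 96.0])            # hypothetical gaze position (x, y)

    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - focus[0], ys - focus[1])
    sample_prob = np.exp(-(dist / 80.0) ** 2)  # assumed Gaussian falloff

    rng = np.random.default_rng(0)
    sample_mask = rng.random((height, width)) < sample_prob
    print(f"sampled {sample_mask.mean():.1%} of pixels")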
