LIVIVO - The Search Portal for Life Sciences


Search results

Results 1-10 of 32


  1. Article ; Online: On the forces of driver distraction: Explainable predictions for the visual demand of in-vehicle touchscreen interactions.

    Ebel, Patrick / Lingenfelder, Christoph / Vogelsang, Andreas

    Accident Analysis and Prevention

    2023  Volume 183, Page(s) 106956


    Abstract With modern infotainment systems, drivers are increasingly tempted to engage in secondary tasks while driving. Since distracted driving is already one of the main causes of fatal accidents, in-vehicle touchscreens must be as undistracting as possible. To ensure that these systems are safe to use, they undergo elaborate and expensive empirical testing that requires fully functional prototypes. Early-stage methods that inform designers about the implications their designs may have for driver distraction are therefore of great value. This paper presents a machine learning method that, based on anticipated usage scenarios, predicts the visual demand of in-vehicle touchscreen interactions and provides local and global explanations of the factors influencing drivers' visual attention allocation. The approach is based on large-scale natural driving data continuously collected from production-line vehicles and employs the SHapley Additive exPlanations (SHAP) method to provide explanations that support informed design decisions. Our approach is more accurate than related work: it identifies interactions during which long glances occur with 68% accuracy and predicts the total glance duration with a mean error of 2.4 s. Our explanations replicate the results of several recent studies and provide fast, easily accessible insights into the effects of UI elements, driving automation, and vehicle speed on driver distraction. The system can not only help designers evaluate current designs but also help them anticipate and understand the implications their design decisions might have for future designs.
    MeSH term(s) Humans ; Accidents, Traffic/prevention & control ; Automobile Driving ; Distracted Driving ; Automation
    Language English
    Publishing date 2023-01-19
    Publishing country England
    Document type Journal Article
    ZDB-ID 210223-7
    ISSN (online) 1879-2057
    ISSN (print) 0001-4575
    DOI 10.1016/j.aap.2023.106956
    Database MEDLINE (MEDical Literature Analysis and Retrieval System OnLINE)

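The SHAP method named in the abstract above attributes a model's prediction to its individual input features via Shapley values. As a rough, self-contained sketch of the underlying computation — a toy linear glance-duration model with hypothetical features, not the authors' model or the `shap` library:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for `predict` at point `x`, using
    `baseline` values for features absent from a coalition."""
    n = len(x)
    phi = [0.0] * n
    players = list(range(n))
    for i in players:
        others = [j for j in players if j != i]
        for size in range(n):
            for coalition in combinations(others, size):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in coalition or j == i) else baseline[j]
                          for j in players]
                without_i = [x[j] if j in coalition else baseline[j]
                             for j in players]
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy model: total glance duration (s) from hypothetical features
# (UI-element complexity, vehicle speed, automation level).
def toy_model(f):
    complexity, speed, automation = f
    return 1.0 + 0.8 * complexity + 0.01 * speed + 0.5 * automation

x = [3.0, 50.0, 1.0]        # interaction under analysis
baseline = [0.0, 0.0, 0.0]  # reference interaction
phi = shapley_values(toy_model, x, baseline)
# Efficiency property: contributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (toy_model(x) - toy_model(baseline))) < 1e-9
```

For a linear model the Shapley value of each feature reduces to its coefficient times its deviation from the baseline; real SHAP implementations approximate this sum efficiently for nonlinear models.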

  2. Book ; Online: On the Forces of Driver Distraction

    Ebel, Patrick / Lingenfelder, Christoph / Vogelsang, Andreas

    Explainable Predictions for the Visual Demand of In-Vehicle Touchscreen Interactions

    2023  


    Abstract With modern infotainment systems, drivers are increasingly tempted to engage in secondary tasks while driving. Since distracted driving is already one of the main causes of fatal accidents, in-vehicle touchscreen Human-Machine Interfaces (HMIs) must be as undistracting as possible. To ensure that these systems are safe to use, they undergo elaborate and expensive empirical testing that requires fully functional prototypes. Early-stage methods that inform designers about the implications their designs may have for driver distraction are therefore of great value. This paper presents a machine learning method that, based on anticipated usage scenarios, predicts the visual demand of in-vehicle touchscreen interactions and provides local and global explanations of the factors influencing drivers' visual attention allocation. The approach is based on large-scale natural driving data continuously collected from production-line vehicles and employs the SHapley Additive exPlanations (SHAP) method to provide explanations that support informed design decisions. Our approach is more accurate than related work: it identifies interactions during which long glances occur with 68% accuracy and predicts the total glance duration with a mean error of 2.4 s. Our explanations replicate the results of several recent studies and provide fast, easily accessible insights into the effects of UI elements, driving automation, and vehicle speed on driver distraction. The system can not only help designers evaluate current designs but also help them anticipate and understand the implications their design decisions might have for future designs.

    Comment: Accepted for publication in Accident Analysis and Prevention
    Keywords Computer Science - Human-Computer Interaction ; Computer Science - Artificial Intelligence
    Subject code 629
    Publishing date 2023-01-05
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  3. Book ; Online: Automatically Classifying Kano Model Factors in App Reviews

    Binder, Michelle / Vogt, Annika / Bajraktari, Adrian / Vogelsang, Andreas

    2023  


    Abstract [Context and motivation] Requirements assessment by means of the Kano model is common practice. As suggested by the original authors, these assessments are done by interviewing stakeholders and asking them how satisfied they would be if a certain feature were well implemented and how dissatisfied they would be if it were missing or poorly implemented. [Question/problem] Assessments via interviews are time-consuming, expensive, and can only capture the opinions of a limited set of stakeholders. [Principal ideas/results] We investigate the possibility of extracting Kano model factors (basic needs, performance factors, and delighters) from a large set of user feedback (i.e., app reviews). We implemented, trained, and tested several classifiers on a set of 2,592 reviews. In a 10-fold cross-validation, a BERT-based classifier performed best with an accuracy of 0.928. To assess the classifiers' generalization, we additionally tested them on an independent set of 1,622 app reviews. The accuracy of the best classifier dropped to 0.725. We also show that misclassifications correlate with human disagreement on the labels. [Contribution] Our approach is a lightweight, automated alternative for identifying Kano model factors in a large set of user feedback. The limited accuracy of the approach is an inherent consequence of the missing context in app reviews compared to comprehensive interviews, which also makes it hard for humans to extract the factors correctly.
    Keywords Computer Science - Software Engineering
    Subject code 004
    Publishing date 2023-03-07
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

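The evaluation protocol described above — accuracy under 10-fold cross-validation — can be sketched independently of the BERT model. The keyword baseline below is a hypothetical stand-in classifier, not the paper's approach, and the toy reviews are invented:

```python
import random

def k_fold_accuracy(samples, labels, train_fn, k=10, seed=0):
    """Mean accuracy of `train_fn` over k cross-validation folds.
    `train_fn(train_x, train_y)` must return a predict(sample) callable."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accuracies = []
    for fold in folds:
        held_out = set(fold)
        train_x = [samples[i] for i in idx if i not in held_out]
        train_y = [labels[i] for i in idx if i not in held_out]
        predict = train_fn(train_x, train_y)
        correct = sum(predict(samples[i]) == labels[i] for i in fold)
        accuracies.append(correct / len(fold))
    return sum(accuracies) / k

# Hypothetical keyword baseline for Kano factors (ignores its
# training data); the paper's best classifier is BERT-based.
def keyword_baseline(train_x, train_y):
    def predict(review):
        if "crash" in review or "broken" in review:
            return "basic need"
        if "love" in review or "wow" in review:
            return "delighter"
        return "performance factor"
    return predict

reviews = ["app crash on start", "wow, amazing surprise", "load time is ok",
           "broken login flow", "love this feature", "speed could improve"]
labels  = ["basic need", "delighter", "performance factor",
           "basic need", "delighter", "performance factor"]
acc = k_fold_accuracy(reviews, labels, keyword_baseline, k=3)
```

Holding out each fold in turn and averaging the fold accuracies is exactly what yields the 0.928 figure in the abstract, just with a far stronger classifier and 2,592 real reviews.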

  4. Book ; Online: Multitasking while Driving

    Ebel, Patrick / Lingenfelder, Christoph / Vogelsang, Andreas

    How Drivers Self-Regulate their Interaction with In-Vehicle Touchscreens in Automated Driving

    2023  


    Abstract Driver assistance systems are designed to increase comfort and safety by automating parts of the driving task. At the same time, modern in-vehicle information systems with large touchscreens provide the driver with numerous options for entertainment, information, or communication, and are a potential source of distraction. However, little is known about how driving automation affects how drivers interact with the center stack touchscreen, i.e., how drivers self-regulate their behavior in response to different levels of driving automation. To investigate this, we apply multilevel models to a real-world driving dataset consisting of 31,378 sequences. Our results show significant differences in drivers' interaction and glance behavior in response to different levels of driving automation, vehicle speed, and road curvature. During automated driving, drivers perform more interactions per touchscreen sequence and increase the time spent looking at the center stack touchscreen. Specifically, at higher levels of driving automation (level 2), the mean glance duration toward the center stack touchscreen increases by 36% and the mean number of interactions per sequence increases by 17% compared to manual driving. Furthermore, partially automated driving has a strong impact on the use of more complex UI elements (e.g., maps) and touch gestures (e.g., multitouch). We also show that the effect of driving automation on drivers' self-regulation is greater than that of vehicle speed and road curvature. The derived knowledge can inform the design and evaluation of touch-based infotainment systems and the development of context-aware driver monitoring systems.

    Comment: Accepted for publication in the "International Journal of Human-Computer Interaction". arXiv admin note: substantial text overlap with arXiv:2207.04284
    Keywords Computer Science - Human-Computer Interaction
    Subject code 380
    Publishing date 2023-05-25
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

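The multilevel models applied above estimate group-level effects while pooling information across groups. A minimal sketch of that core idea — partial pooling of per-automation-level glance durations toward the grand mean, with invented numbers, not the paper's 31,378-sequence dataset:

```python
def partial_pooling(groups, shrinkage=5.0):
    """Shrink each group's mean toward the grand mean; groups with
    fewer observations are shrunk more. This mirrors the behavior of
    the random intercepts in a multilevel (mixed-effects) model."""
    all_obs = [v for obs in groups.values() for v in obs]
    grand_mean = sum(all_obs) / len(all_obs)
    pooled = {}
    for name, obs in groups.items():
        n = len(obs)
        group_mean = sum(obs) / n
        weight = n / (n + shrinkage)  # more data -> trust the group more
        pooled[name] = weight * group_mean + (1 - weight) * grand_mean
    return pooled

# Hypothetical mean glance durations (s) per driving-automation level.
glances = {
    "manual":  [0.8, 1.0, 0.9, 1.1, 0.7, 1.0, 0.9, 1.0, 1.2, 0.8],
    "level_1": [1.0, 1.2, 1.1],
    "level_2": [1.3, 1.4],
}
estimates = partial_pooling(glances)
```

A full multilevel analysis (as in the paper) would estimate the shrinkage from the data and add covariates such as vehicle speed and road curvature; this sketch only shows why small groups borrow strength from the whole sample.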

  5. Book ; Online: Exploring Millions of User Interactions with ICEBOAT

    Ebel, Patrick / Gülle, Kim Julian / Lingenfelder, Christoph / Vogelsang, Andreas

    Big Data Analytics for Automotive User Interfaces

    2023  


    Abstract User Experience (UX) professionals need to be able to analyze large amounts of usage data on their own to make evidence-based design decisions. However, the design process for In-Vehicle Information Systems (IVIS) lacks data-driven support and effective tools for visualizing and analyzing user interaction data. Therefore, we propose ICEBOAT, an interactive visualization tool tailored to the needs of automotive UX experts to effectively and efficiently evaluate driver interactions with IVISs. ICEBOAT visualizes telematics data collected from production-line vehicles, allowing UX experts to perform task-specific analyses. Following a mixed-methods user-centered design (UCD) approach, we conducted an interview study (N=4) to elicit the domain-specific information and interaction needs of automotive UX experts and used a co-design approach (N=4) to develop an interactive analysis tool. Our evaluation (N=12) shows that ICEBOAT enables UX experts to efficiently generate knowledge that facilitates data-driven design decisions.

    Comment: to be published at the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '23), September 18--22, 2023, Ingolstadt, Germany
    Keywords Computer Science - Human-Computer Interaction
    Publishing date 2023-07-12
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

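Task-specific analyses like those ICEBOAT supports ultimately aggregate event-level telematics into per-task summaries. A minimal sketch, assuming a hypothetical log schema (not the tool's actual data model):

```python
from collections import defaultdict

def summarize_by_task(events):
    """Aggregate interaction events into per-task interaction counts
    and mean durations -- the kind of summary an interactive
    visualization tool would plot. The event schema is hypothetical."""
    stats = defaultdict(lambda: {"count": 0, "total_s": 0.0})
    for e in events:
        s = stats[e["task"]]
        s["count"] += 1
        s["total_s"] += e["duration_s"]
    return {task: {"interactions": s["count"],
                   "mean_duration_s": s["total_s"] / s["count"]}
            for task, s in stats.items()}

log = [
    {"task": "navigation", "duration_s": 2.0},
    {"task": "navigation", "duration_s": 4.0},
    {"task": "media",      "duration_s": 1.5},
]
summary = summarize_by_task(log)
# summary["navigation"]["mean_duration_s"] == 3.0
```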

  6. Book ; Online: Explanation Needs in App Reviews

    Unterbusch, Max / Sadeghi, Mersedeh / Fischbach, Jannik / Obaidi, Martin / Vogelsang, Andreas

    Taxonomy and Automated Detection

    2023  


    Abstract Explainability, i.e., the ability of a system to explain its behavior to users, has become an important quality of software-intensive systems. Recent work has focused on methods for generating explanations for various algorithmic paradigms (e.g., machine learning, self-adaptive systems). There is relatively little work on which situations and types of behavior should be explained, and there is also a lack of support for eliciting explainability requirements. In this work, we explore the needs for explanation that users express in app reviews. We manually coded a set of 1,730 app reviews from 8 apps and derived a taxonomy of Explanation Needs. We also explore several approaches to automatically identify Explanation Needs in app reviews. Our best classifier identifies Explanation Needs in 486 unseen reviews of 4 different apps with a weighted F-score of 86%. Our work contributes to a better understanding of users' Explanation Needs. Automated tools can help engineers focus on these needs and ultimately elicit valid Explanation Needs.
    Keywords Computer Science - Software Engineering
    Subject code 006
    Publishing date 2023-07-10
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

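The weighted F-score reported above averages per-class F1 scores with weights proportional to each class's support. A self-contained sketch of the metric; the toy labels are invented:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """F1 per class, averaged with weights proportional to each
    class's support (its frequency in y_true)."""
    support = Counter(y_true)
    total = 0.0
    for c in set(y_true):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        total += support[c] / len(y_true) * f1
    return total

score = weighted_f1(["need", "need", "need", "none"],
                    ["need", "need", "none", "none"])
```

Weighting by support means a frequent class dominates the score, which matters when, as in review mining, most reviews express no Explanation Need at all.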

  7. Book: FoMoStA - Formalisierung, Modellierung und Strukturierung von Anforderungen im Automobil der Zukunft

    Broy, Manfred / Vogelsang, Andreas

    Final report : [project duration: 01.07.2012 - 30.09.2014 (extended cost-neutrally; originally 30.06.2014)]

    2015  

    Institution Technische Universität München / Lehrstuhl für Software und Systems Engineering
    Author's details Technische Universität München, [Institut für Informatik, Lehrstuhl Prof. Dr. Dr. h.c. Manfred Broy, Software & Systems Engineering. Contact: Andreas Vogelsang]
    Language German
    Size III, 6 pp.
    Publishing place Munich
    Document type Book
    Note Funding code BMBF 01IS12028 ; differences between the printed document and the electronic resource cannot be ruled out
    Database Library catalogue of the German National Library of Science and Technology (TIB), Hannover


  8. Book ; Online: FoMoStA - Formalisierung, Modellierung und Strukturierung von Anforderungen im Automobil der Zukunft

    Broy, Manfred / Vogelsang, Andreas

    Final report : project duration: 01.07.2012 - 30.09.2014 (extended cost-neutrally; originally 30.06.2014)

    2015  

    Institution Technische Universität München / Lehrstuhl für Software und Systems Engineering
    Author's details Technische Universität München, Institut für Informatik, Lehrstuhl Prof. Dr. Dr. h.c. Manfred Broy, Software & Systems Engineering. Contact: Andreas Vogelsang
    Language German
    Size Online resource (9 pp., 659.95 KB)
    Publisher Technische Informationsbibliothek u. Universitätsbibliothek
    Publishing place Hannover ; Munich
    Document type Book ; Online
    Note Funding code BMBF 01IS12028 ; differences between the printed document and the electronic resource cannot be ruled out
    Database Library catalogue of the German National Library of Science and Technology (TIB), Hannover


  9. Book ; Online: How Do Drivers Self-Regulate their Secondary Task Engagements? The Effect of Driving Automation on Touchscreen Interactions and Glance Behavior

    Ebel, Patrick / Berger, Moritz / Lingenfelder, Christoph / Vogelsang, Andreas

    2022  


    Abstract With ever-improving driver assistance systems and large touchscreens becoming the main in-vehicle interface, drivers are more tempted than ever to engage in distracting non-driving-related tasks. However, little research exists on how driving automation affects drivers' self-regulation when interacting with center stack touchscreens. To investigate this, we employ multilevel models on a real-world driving dataset consisting of 10,139 sequences. Our results show significant differences in drivers' interaction and glance behavior in response to varying levels of driving automation, vehicle speed, and road curvature. During partially automated driving, drivers are not only more likely to engage in secondary touchscreen tasks, but their mean glance duration toward the touchscreen also increases by 12% (Level 1) and 20% (Level 2) compared to manual driving. We further show that the effect of driving automation on drivers' self-regulation is larger than that of vehicle speed and road curvature. The derived knowledge can facilitate the safety evaluation of infotainment systems and the development of context-aware driver monitoring systems.

    Comment: 14th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications
    Keywords Computer Science - Human-Computer Interaction ; H.5.2
    Subject code 380
    Publishing date 2022-07-09
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)


  10. Book ; Online: CiRA

    Fischbach, Jannik / Frattini, Julian / Vogelsang, Andreas

    A Tool for the Automatic Detection of Causal Relationships in Requirements Artifacts

    2021  


    Abstract Requirements often specify the expected system behavior by using causal relations (e.g., if A, then B). Automatically extracting these relations supports, among other things, two prominent RE use cases: automatic test case derivation and dependency detection between requirements. However, existing tools fail to extract causality from natural language with reasonable performance. In this paper, we present our tool CiRA (Causality detection in Requirements Artifacts), which represents a first step towards automatic causality extraction from requirements. We evaluate CiRA on a publicly available data set of 61 acceptance criteria (causal: 32; non-causal: 29) describing the functionality of the German Corona-Warn-App. We achieve a macro F_1 score of 83%, which corroborates the feasibility of our approach.
    Keywords Computer Science - Software Engineering
    Publishing date 2021-03-11
    Publishing country us
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

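The macro F_1 score reported for CiRA is the unweighted mean of the per-class F1 scores over the causal and non-causal classes. A minimal sketch with invented labels, not the paper's data set of 61 acceptance criteria:

```python
def macro_f1(y_true, y_pred, classes=("causal", "non-causal")):
    """Unweighted mean of per-class F1 scores (macro F1), so both
    classes count equally regardless of how often they occur."""
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Toy evaluation with invented predictions.
truth = ["causal", "causal", "non-causal", "non-causal"]
pred  = ["causal", "non-causal", "non-causal", "non-causal"]
score = macro_f1(truth, pred)
```

Unlike a support-weighted average, macro F1 does not let the slightly larger non-causal class (29 of 61 criteria here are non-causal) dominate the reported number.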
