Knowledge Spaces in Virtual Reality: Intuitive Interfacing with a Multiperspective Hypermedia Environment
Project description
Interfaces between our real and a digital world have the potential to facilitate knowledge work in educational and scientific contexts. Multiperspective Hypermedia Environments (MHEs) can be accessed by means of such interfaces. MHEs are essentially datasets that enable the generation of multiple perspectives on the data, such as medical data, data about the contents of a museum, or even data about a large group of animals or plants. Each perspective corresponds to a data display in a panel within which the data, or parts of it, are arranged along particular feature axes, in clusters, or in a graph structure. Indeed, previous work has shown that MHEs equipped with elaborate 2D interfaces can support multiperspective reasoning skills (MPRSs), that is, the ability to reason about the presented content in an elaborate, relational manner by taking on and combining different perspectives on the knowledge content. However, only learners with sufficient cognitive capacity at their disposal benefited from MHEs.
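The panel notion above can be illustrated with a minimal sketch: one dataset, several "perspectives" that each arrange the same items into a different panel layout (here, along a feature axis or grouped into clusters). All names in this sketch (`Item`, `axis_panel`, `cluster_panel`, the sample features) are hypothetical illustrations, not taken from the project's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """One data item with a set of named features (illustrative only)."""
    name: str
    features: dict  # e.g. {"epoch": 1890} for an artwork

def axis_panel(items, feature):
    """One perspective: arrange items along a single feature axis."""
    return sorted(items, key=lambda it: it.features[feature])

def cluster_panel(items, feature):
    """Another perspective: group items sharing a feature value."""
    clusters = {}
    for it in items:
        clusters.setdefault(it.features[feature], []).append(it)
    return clusters

# The same three items, viewed through two different panels:
artworks = [
    Item("A", {"epoch": 1890}),
    Item("B", {"epoch": 1650}),
    Item("C", {"epoch": 1890}),
]
timeline = axis_panel(artworks, "epoch")      # B, then A and C
by_epoch = cluster_panel(artworks, "epoch")   # {1650: [B], 1890: [A, C]}
```

The point of the sketch is only that perspectives are derived views, so adding a perspective never changes the underlying dataset.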
Our analysis suggests that the available 2D interfaces to MHEs are highly demanding in terms of the cognitive resources needed to (i) interact with the different knowledge displays (panels) representing perspectives on a topic, (ii) orient oneself within each panel, and (iii) inter-relate contents across panels. Based on design principles derived from theories of event segmentation, predictive processing, and embodied cognition, we propose to create and study interactive, highly immersive 3D interfaces to MHEs using virtual reality (VR) technology. The goal is to further facilitate the development of MPRSs, especially for learners with lower cognitive capacities, by (i) implementing intuitive embodied interactions with panels arranged in the virtual knowledge space by means of a sensorimotor user interface, (ii) supporting fast orientation within panels through optimal semantic-spatial mappings and the provision of gist-like information about content arrangements, and (iii) fostering mental inter-relations of content on different panels through spatial arrangement and visual cues that make these inter-relations directly accessible. To evaluate the developed 3D cognitive interface, we will contrast learners' MPRSs when working with the available 2D interface and the targeted 3D interface in two diverse knowledge domains, namely marine biodiversity and art history.
Project team
- Prof. Dr. Martin V. Butz, Fachbereich Informatik und Psychologie, Universität Tübingen
- Prof. Dr. Peter Gerjets, Leibniz-Institut für Wissensmedien (IWM)
- Dr. Martin Lachmair, Leibniz-Institut für Wissensmedien (IWM)
- Dr. Johannes Lohmann, Fachbereich Informatik, Universität Tübingen
- Dania Humaidan, Fachbereich Informatik, Universität Tübingen
- Mahdi Sadeghi, Fachbereich Informatik, Universität Tübingen