Multiple Degrees-Of-Freedom Input Devices for Interactive Command and Control within Virtual Reality in Industrial Visualisations

Student thesis: PhD


Virtual Reality (VR) applications have often proven practical and effective in offering users an increased sense of presence and engagement when exploring a virtual environment (VE). However, numerous user experience problems still occur at every stage of application design, particularly for industrial visualisations (VR applications created exclusively for industry). Students or employees need to be familiar with the essential features of these VR applications to complete their tasks, which can be difficult when they come from different backgrounds. Industrial visualisations may contain multiple features that make them complex to map onto a set of input mechanisms for user interaction, requiring training and the learning of new skills. Training can take from hours to months or even years, depending on users' skills and their experience of interfacing with such environments. Furthermore, according to many research studies, input devices that provide six degrees of freedom (6DOF) are a functional minimum for interfacing with these 3D environments; it is also asserted that, depending on the task, there are cases in which users may need more DOF.

Therefore, this thesis aims to design, build and implement a layered computing framework with a built-in input-device ontology and a strictly defined set of sub-APIs between the layers, which intelligently connects multiple input devices to multiple application command calls, enabling multiple DOF simultaneously. By leveraging a large number of DOF, users can interact with different input devices, giving them more intuitive and natural control and manipulation of 3D objects in industrial visualisations, and potentially allowing them to master these VR applications in a short time. Empirical evaluations and case studies in industrial fields are presented that combine linear and non-linear function transformations with a comparison system.
This study set out to demonstrate that, by combining human spatial reasoning with computer graphics technologies, a framework like the one presented here can improve users' ability to understand, test, evaluate, reengineer and then better communicate virtual behaviour.
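To make the abstract's central idea concrete, the following is a minimal, illustrative sketch of a routing layer that connects (device, axis) DOF channels to named application commands via linear and non-linear transfer functions. All names here (`Channel`, `Router`, `bind`, `dispatch`, the example device and command names) are hypothetical; the thesis's actual framework, ontology and sub-APIs are not reproduced here.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass(frozen=True)
class Channel:
    device: str  # e.g. a 6DOF controller name (hypothetical: "spacemouse")
    axis: str    # one DOF channel, e.g. "tx" (translate x) or "rz" (rotate z)

# Transfer functions: a linear gain and a simple non-linear (cubic) curve.
def linear(gain: float) -> Callable[[float], float]:
    return lambda v: gain * v

def cubic(gain: float) -> Callable[[float], float]:
    # Non-linear mapping: fine control near zero, faster motion at extremes.
    return lambda v: gain * v ** 3

class Router:
    """Maps (device, axis) channels onto named application command calls."""
    def __init__(self) -> None:
        self._bindings: Dict[Tuple[str, str], Tuple[str, Callable[[float], float]]] = {}

    def bind(self, ch: Channel, command: str,
             fn: Callable[[float], float]) -> None:
        self._bindings[(ch.device, ch.axis)] = (command, fn)

    def dispatch(self, device: str, axis: str,
                 value: float) -> Tuple[str, float]:
        # Look up the bound command and apply its transfer function.
        command, fn = self._bindings[(device, axis)]
        return command, fn(value)

router = Router()
router.bind(Channel("spacemouse", "tx"), "pan_x", linear(2.0))
router.bind(Channel("spacemouse", "rz"), "rotate_z", cubic(0.5))

print(router.dispatch("spacemouse", "tx", 0.25))  # ('pan_x', 0.5)
print(router.dispatch("spacemouse", "rz", 2.0))   # ('rotate_z', 4.0)
```

Because each binding pairs a channel with its own transfer function, several devices can drive many DOF simultaneously, and a comparison system could swap the linear and non-linear curves per task, as the evaluations described above do.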
Date of Award: 1 Aug 2023
Original language: English
Awarding Institution:
  • The University of Manchester
Supervisors: David Morris (Supervisor) & Martin Turner (Supervisor)

Keywords:
  • 3D Interaction
  • User Interface Management Systems
  • User Experience
  • 3D User Interfaces (UI)
  • X-ray Computed Tomography (CT)
  • Input Devices
  • Data Visualization
  • Input Device Taxonomies
  • Virtual Reality (VR)
  • Scientific Visualization
