Video games have a significant influence on our time. However, a lack of accessibility makes most of them hard for disabled gamers to play. Virtual reality offers new possibilities to include people with disabilities and enable them to play games. Additionally, serious VR games provide educational benefits, such as improved memory and engagement.
In this work, the accessibility problems in video games and VR applications are explored, with an emphasis on serious games and the general lack of accessibility guidelines. An overview of existing guidelines is given, and from it a set of guidelines is derived that summarizes the rules relevant to accessible VR games.
New ways to interact with VR environments come with both opportunities and challenges. This work investigates the applicability of different hands-free input methods for playing a VR game. Using a serious game, five focus methods and three activation methods were implemented as examples on the Oculus Go. The suitability of these methods was analyzed in a pre-study, which excluded head movements for controlling the game. The remaining input methods were evaluated in an explorative user study in terms of operability and ease of use. In summary, all tested methods can be used to control the game. The evaluation shows head-tracking as the preferred input method, while scanning, eye-tracking, and voice control were rated mediocre.
In addition, the correlation between input methods and different menu types was examined, but the influence turned out to be negligible.
The increasing availability of online video content, partially fueled by the Covid-19 pandemic and the growing presence of social media, adds to the importance of providing audio descriptions as a media alternative to video content for blind and visually impaired people. To address the questions of what can be sufficiently described and how such descriptions can be delivered to users, a concept has been developed that provides audio descriptions in multiple levels of detail. The relevant information is incorporated into an XML-based data structure. The concept also includes a process to provide optional explanations of terms and abbreviations, helping users without specific knowledge or people with cognitive impairments to comprehend complex videos. These features are implemented in a prototype based on the Able Player software. The benefits of multi-layered audio descriptions and optional explanatory content are evaluated in a user test. Findings suggest that the choice between several levels of detail is received positively. Users acknowledged the concept of explanations played in parallel with the video and described further use cases for such a practice. Participants preferred a higher level of detail for a high-paced action video and a lower level for informative content. Possibilities to extend the data structure and features include multi-language use cases and distributed systems.
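The XML-based data structure described above could be sketched as follows. The element and attribute names (`cue`, `text`, `level`, `explanation`) are illustrative assumptions, not the thesis's actual schema; the point is only to show how per-cue descriptions at several levels of detail, plus optional term explanations, might be stored and filtered.

```python
# Hypothetical sketch of an XML structure for multi-level audio descriptions.
# All tag and attribute names are assumptions for illustration.
import xml.etree.ElementTree as ET

doc = """<descriptions>
  <cue start="12.0" end="15.5">
    <text level="1">A car drives off.</text>
    <text level="2">A red convertible speeds away down a coastal road.</text>
    <explanation term="convertible">A car with a roof that folds away.</explanation>
  </cue>
</descriptions>"""

def texts_for_level(xml_string, level):
    """Collect, per cue, the description matching the requested level of detail."""
    root = ET.fromstring(xml_string)
    out = []
    for cue in root.iter("cue"):
        for text in cue.iter("text"):
            if text.get("level") == str(level):
                out.append(text.text)
    return out

print(texts_for_level(doc, 2))  # -> ['A red convertible speeds away down a coastal road.']
```

A player UI would let the user switch the `level` argument at runtime, while the `explanation` elements could feed the optional explanatory audio track.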
Virtual reality (VR) is an immersive technology with a growing market and many applications for gesture recognition. This thesis presents a VR gesture recognition method based on signal processing techniques. The core concept is the comparison of motion features, in the form of signals, between a runtime recording of the user and a set of possible gestures. This comparison yields a similarity score through which a continuous recognition system can identify the most similar gesture. Selected comparison methods are presented, evaluated, and discussed, and an example implementation is demonstrated. Thanks to an introduced layer model, parts of the method and its implementation are interchangeable.
Similar or even better performance is achieved compared to related work. The comparison method Dynamic Time Warping (DTW) reaches an average positive recognition rate of 98.18% with acceptable real-time performance. Additionally, the method comes with several benefits: the position and direction of users are irrelevant, body proportions have no significant negative impact on recognition rates, faster and slower gesture executions are possible, no user input is needed to mark gesture start and end (continuous recognition), continuous gestures can also be recognized, and recognition is fast enough to trigger gesture-specific events already during execution.
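The core comparison step can be illustrated with a minimal DTW sketch. This is not the thesis's implementation: the feature extraction, gesture templates, and one-dimensional signals below are hypothetical placeholders, but the distance computation is standard DTW, and the recognizer simply picks the gesture template with the smallest warped distance.

```python
# Minimal sketch of gesture comparison via Dynamic Time Warping (DTW).
# Templates and signals are hypothetical 1-D motion features (e.g. hand height per frame).

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW distance between two 1-D signals."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def recognize(runtime_signal, gesture_set):
    """Return the name of the template most similar to the runtime recording."""
    return min(gesture_set, key=lambda name: dtw_distance(runtime_signal, gesture_set[name]))

templates = {
    "raise": [0.0, 0.2, 0.5, 0.8, 1.0],
    "wave":  [0.5, 0.8, 0.5, 0.8, 0.5],
}
print(recognize([0.0, 0.1, 0.4, 0.9, 1.0], templates))  # -> raise
```

Because DTW warps the time axis, faster and slower executions of the same gesture map to similar distances, which matches the benefit the abstract claims; a continuous recognizer would run this comparison repeatedly over a sliding window of the runtime signal.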