Multiplayer games can increase player enjoyment through social interactions, cooperation, and competition. The popularity of such games is reflected in current market trends. Networked multiplayer games in particular frequently achieve great success, but confront game developers with additional networking challenges in the already complex field of game production. The primary challenge is game state synchronization across all players. Based on current research, there are three main methods for this task – deterministic lockstep, snapshot interpolation, and state-sync – each with its own advantages and disadvantages.
This work quantitatively evaluated and discussed the vertical (entity count) and horizontal (player count) limitations of deterministic lockstep and compared the method to snapshot interpolation. Results showed that deterministic lockstep has no indicated vertical scaling limitation: with a player count of up to 10, it supported 16,000 or more entities. A horizontal scaling limitation could not be found either, and lockstep was confirmed to work with 40 or more players while handling 1024 entities. However, the two scaling dimensions correlate negatively, as indicated by the maximum configurations of 30 players with 4096 entities and 20 players with 8192 entities.
An unoptimized snapshot interpolation implementation reached a vertical scaling limit of 4096 entities with 10 players and a horizontal scaling limit of 40 or more players with 1024 entities, and was therefore found to have a lower entity limit than deterministic lockstep.
Furthermore, results are compared to related work. Other contributions of this thesis include an overview of game networks and the three game state synchronization techniques; an architecture model for deterministic lockstep, including a hybrid approach that combines it with snapshot interpolation for re-synchronization and hot-joins; and finally a network packet deconstruction of the implemented networking framework, the Unity Transport Package (UTP).
Multiplayer games can increase player enjoyment through social interactions, cooperation, and competition. Current market trends show the popularity of especially networked multiplayer games, which pose new networking challenges to game developers. The main challenge is synchronizing game state across players. Research identifies deterministic lockstep, snapshot interpolation, and state-sync as the primary methods for this task, each with distinct advantages and disadvantages.
This work, and the master's thesis this paper is based on, quantitatively evaluated deterministic lockstep, demonstrating its vertical (entity count) and horizontal (player count) scaling limitations, and compared the method to snapshot interpolation. Lockstep supports at least 16,000 entities for up to 10 players and scales horizontally to 40 or more players with 1024 entities. However, a negative correlation between entity and player count limits was observed, as indicated by the maximum configurations of 30 players with 4096 entities and 20 players with 8192 entities. Snapshot interpolation reached a vertical limit at 4096 entities with 10 players and a horizontal limit of 40 or more players with 1024 entities.
The paper further contributes by comparing results to related work, summarizing synchronization methods, proposing a hybrid architecture model of deterministic lockstep with snapshot interpolation for re-synchronization and hot-joins, and deconstructing Unity Transport Package’s (UTP) network packets.
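As a minimal illustration of the deterministic lockstep idea described above (a sketch, not the thesis's implementation): each peer advances the simulation only once it has received every player's input for the current tick, so identical input streams produce identical states on all machines.

```python
# Minimal deterministic lockstep sketch (illustrative, not the thesis's code).
# Each peer holds the full game state and steps it with the combined inputs of
# ALL players for a tick; a tick only advances once every input has arrived.

def step(state, inputs):
    """Deterministic state transition: same state + same inputs -> same result."""
    new_state = dict(state)
    for player, move in sorted(inputs.items()):  # sorted: order must be deterministic
        x, y = new_state[player]
        dx, dy = move
        new_state[player] = (x + dx, y + dy)
    return new_state

class LockstepPeer:
    def __init__(self, players, state):
        self.players = set(players)
        self.state = state
        self.tick = 0
        self.pending = {}  # tick -> {player: input}

    def receive(self, tick, player, move):
        self.pending.setdefault(tick, {})[player] = move
        self.try_advance()

    def try_advance(self):
        # Advance only while inputs from every player are present (the "lockstep").
        while set(self.pending.get(self.tick, {})) == self.players:
            self.state = step(self.state, self.pending.pop(self.tick))
            self.tick += 1

# Two peers fed the same inputs end up with identical state:
a = LockstepPeer(["p1", "p2"], {"p1": (0, 0), "p2": (5, 5)})
b = LockstepPeer(["p1", "p2"], {"p1": (0, 0), "p2": (5, 5)})
for peer in (a, b):
    peer.receive(0, "p1", (1, 0))
    peer.receive(0, "p2", (0, -1))
assert a.state == b.state and a.tick == 1
```

Because only inputs travel over the network, bandwidth is independent of entity count, which is consistent with the vertical scaling headroom the evaluation reports; the cost is that every peer must wait for the slowest player's input each tick.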
Today’s digital cameras use a mosaic of red, green, and blue color filters to capture images in three color channels on a single sensor plane. This thesis investigates the use of convolutional neural networks (CNNs) for demosaicing – the process of reconstructing full-color images from raw mosaic sensor data. While there are existing CNNs for demosaicing raw images from the well-established regular Bayer color filter array (CFA), this thesis focuses on how they perform on alternative non-regular sampling patterns that produce fewer aliasing artifacts, namely the stochastic Gaussian and the RandomQuarter sampling patterns (Backes and Fröhlich, 2020).
A basic UNet (Ronneberger et al., 2015) and the spatially adaptive SANet (T. Zhang et al., 2022) are implemented in a supervised training pipeline based on the PixelShift200 image dataset (Qian et al., 2021) to investigate their suitability for the irregular demosaicing task. The experiments indicate that the basic UNet encounters difficulties in restoring the missing color values, whereas the spatially adaptive convolutional layers help in processing the irregularly sampled raw images.
In addition, this thesis enhances the SANet's effectiveness by employing an alternative residual branch based on a CFA-normalized Gaussian filter, as well as a tileable modification of the Gaussian CFA pattern. The modified SANet is shown to outperform the conventional dFSR algorithm (Backes & Fröhlich, 2020) in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
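As a point of reference for what such CNNs must outperform, a naive interpolation baseline can be sketched (an illustration only, not taken from the thesis): each missing color value is filled with the average of the same channel's known samples in a 3x3 neighborhood of a regular RGGB Bayer mosaic.

```python
import numpy as np

# Toy demosaicing sketch (illustrative; the thesis trains CNNs, not this filter).
# A Bayer CFA records one color per pixel; missing values are filled here by
# averaging the known samples of the same channel in a 3x3 neighborhood --
# roughly bilinear interpolation, a classical baseline.

def bayer_masks(h, w):
    """Boolean sampling masks for an RGGB Bayer pattern."""
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    g = ~(r | b)
    return r, g, b

def demosaic_naive(mosaic, mask):
    """Fill one channel: average known same-channel samples in each 3x3 window."""
    h, w = mosaic.shape
    vals = np.pad(np.where(mask, mosaic, 0.0), 1)
    cnts = np.pad(mask.astype(float), 1)
    summed = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            summed += vals[dy:dy + h, dx:dx + w]
            norm += cnts[dy:dy + h, dx:dx + w]
    return summed / np.maximum(norm, 1.0)

# A flat grey image survives naive demosaicing exactly:
h, w = 4, 4
truth = np.full((h, w, 3), 0.5)
r, g, b = bayer_masks(h, w)
mosaic = truth[..., 0] * r + truth[..., 1] * g + truth[..., 2] * b
recon = np.stack([demosaic_naive(mosaic, m) for m in (r, g, b)], axis=-1)
assert np.allclose(recon, truth)
```

For a non-regular pattern, the same per-channel logic applies with a stochastic mask instead of `bayer_masks`, but the neighborhood statistics then vary per pixel, which is precisely why the thesis finds spatially adaptive convolutions helpful.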
Password-based authentication is widely used online, despite its numerous shortcomings, enabling attackers to take over users’ accounts. Phishing-resistant Fast IDentity Online (FIDO) credentials have therefore been proposed to improve account security and authentication user experience. With the recent introduction of FIDO-based passkeys, industry-leading corporations aim to drive widespread adoption of passwordless authentication to eliminate some of the most common account takeover attacks their users are exposed to. This thesis presents the first iteration of a distributed web crawler measuring the adoption of FIDO-based authentication methods on the web to observe ongoing developments and assess the viability of the promised passwordless future. The feasibility of automatically detecting authentication methods is investigated by analyzing crawled web content. Because today’s web is increasingly client-side rendered, capturing relevant data with traditional scraping methods is challenging. Thus, the traditional approach is compared to browser-based crawling of dynamic content to optimize the detection rate. The results show that authentication method detection is possible, although there are some limitations regarding accuracy and coverage. Moreover, browser-based crawling is found to increase the detection rate significantly.
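The core detection idea can be sketched as a content scan for WebAuthn markers. This is a simplified assumption about the approach, not the thesis's crawler (which additionally handles browser-based rendering and coverage); the marker list and function name below are illustrative.

```python
import re

# Hypothetical sketch: scan fetched page content for FIDO/WebAuthn markers.
# The real crawler is more elaborate; these patterns are illustrative guesses
# at the kind of signals such a detector might look for.

WEBAUTHN_MARKERS = [
    r"navigator\.credentials\.(create|get)",  # WebAuthn JS API calls
    r"PublicKeyCredential",                   # WebAuthn interface name
    r"\bpasskeys?\b",
    r"\bwebauthn\b",
]

def detect_fido(page_source: str) -> bool:
    """True if the page content hints at FIDO/WebAuthn-based authentication."""
    return any(re.search(p, page_source, re.IGNORECASE) for p in WEBAUTHN_MARKERS)

assert detect_fido("if (window.PublicKeyCredential) { ... }")
assert detect_fido("navigator.credentials.get({publicKey: options})")
assert not detect_fido("<form><input type='password'></form>")
```

Static HTML alone misses calls that only appear in dynamically loaded scripts, which matches the thesis's finding that browser-based crawling raises the detection rate.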
This study investigates the possibility of using Bartle’s player types for gamification in the context of language learning apps. By taking user preferences into account, this might assist in selecting the most suitable game elements. Learning apps are gaining popularity as an innovative method for obtaining an independent and flexible learning experience. Gamification keeps users motivated and involved with the content.
After the research on the usage of gamification and its effects on the user, a language learning app prototype was created. The evaluation consisted of a user test with interview questions and the short User Experience Questionnaire (UEQ). The Bartle test of gamer psychology was used to determine the player types of the participants. The results show that, while player type and gamification preference can partially coincide, there are too many deviations to confidently say it can be transferred into gamification contexts. We conclude that game elements should not be chosen based on a user’s Bartle player type and are more effectively used by incorporating a variety of different gamification components.
The number of people with cognitive impairments increases together with the aging population. Thus, social robots are being researched to help relieve the nursing sector as well as to combat cognitive impairments. However, this raises concerns regarding how a social robot should relate to members of this group and what might be appropriate. In this thesis, research about the current state of social robots has been conducted, and focus groups with people from the nursing and medical field were held. To verify the credibility of the results and the scenario developed, final user tests were conducted with representatives of the target group. When using a social robot in an interaction with persons who have cognitive disabilities, the robot should speak and behave more human-like and make use of its facial expressions, stressing empathy and responding to the person accordingly. The situation of interacting with a social robot may, however, become more significant for future generations.
Virtual reality (VR) is an immersive technology with a growing market and many applications for gesture recognition. This thesis presents a VR gesture recognition method using signal processing techniques. The core concept is based on the comparison of motion features, in the form of signals, between a runtime recording of users and a set of possible gestures. This comparison yields a similarity score through which the most similar gesture can be recognized by a continuous recognition system. Selected comparison methods are presented, evaluated, and discussed, and an example implementation is demonstrated. Due to an introduced layer model, parts of the method and its implementation are interchangeable.
Similar or even better performance is achieved compared to related work. The comparison method Dynamic Time Warping (DTW) reaches an average positive recognition rate of 98.18% with acceptable real-time application performance. Additionally, the method comes with several benefits: the position and direction of users are irrelevant; body proportions have no significant negative impact on recognition rates; faster and slower gesture executions are possible; no user inputs are needed to communicate gesture start and end (continuous recognition); continuous gestures can also be recognized; and recognition is fast enough to trigger gesture-specific events already during execution.
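The DTW comparison named above can be sketched with its classic dynamic-programming formulation (a textbook version, not the thesis's implementation); signals here are sequences of scalar samples, e.g. one motion feature over time.

```python
# Classic dynamic time warping (DP formulation) -- a sketch of the comparison
# method named in the abstract, not the thesis's implementation.

def dtw(a, b):
    """DTW cost between sequences a and b; lower means more similar."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best alignment cost of a[:i] and b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch b
                                 cost[i][j - 1],      # stretch a
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Time-warped versions of a gesture stay close; a different gesture does not:
gesture   = [0, 1, 2, 3, 2, 1, 0]
slower    = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]  # same shape, half speed
different = [3, 3, 3, 3, 3, 3, 3]
assert dtw(gesture, slower) < dtw(gesture, different)
```

The warping path is what makes faster and slower gesture executions comparable, which is consistent with the execution-speed tolerance the abstract reports.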
The increasing availability of online video content, partially fueled by the Covid-19 pandemic and the growing presence of social media, adds to the importance of providing audio descriptions as a media alternative to video content for blind and visually impaired people. In order to address concerns as to what can be sufficiently described and how such descriptions can be delivered to users, a concept has been developed providing audio descriptions in multiple levels of detail. Relevant information is incorporated into an XML-based data structure. The concept also includes a process to provide optional explanations of terms and abbreviations, helping users without specific knowledge or people with cognitive concerns in comprehending complex videos. These features are implemented in a prototype based on the Able Player software. By conducting a user test, the benefits of multi-layered audio descriptions and optional explanatory content are evaluated. Findings suggest that the choice of several levels of detail is received positively. Users acknowledged the concept of explanations played in parallel with the video and described further use cases for such a practice. Participants preferred a higher level of detail for a high-paced action video and a lower level for informative content. Possibilities to extend the data structure and features include multi-language use cases and distributed systems.
Video games have a significant influence on our time. However, lack of accessibility makes it hard for disabled gamers to play most of them. Virtual reality offers new possibilities to include people with disabilities and enable them to play games. Additionally, serious VR games provide educational benefits, such as improved memory and engagement.
In this work, the accessibility problems in video games and VR applications are explored with an emphasis on serious games as well as a general lack of guidelines. An overview of existing guidelines is given. From this, a set of guidelines is derived that summarizes the relevant rules for accessible VR games.
New ways to interact with VR environments come with both opportunities and challenges. This work investigates the applicability of different hands-free input methods to play a VR game. Using a serious game, five focus methods and three activation methods were implemented as examples with the Oculus Go. The suitability of these methods was analyzed in a pre-study that excluded head movements for controlling the game. The remaining input methods were evaluated in an explorative user study in terms of operability and ease of use. In summary, all tested methods can be used to control the game. The evaluation shows head-tracking as the preferred input method, while scanning eye-tracking and voice control were rated mediocre.
In addition, the correlation between input methods and different menu types was examined, but the influence turned out to be negligible.
The capabilities of Artificial Intelligence (AI) are utilized increasingly in today's world. The autonomous and adaptive characteristics allow applications to be more effective and efficient. A certain subfield of Artificial Intelligence, Machine Learning, is enabling services to be tailored to a user's specific needs. This could prove to be useful in an information-heavy field such as Statistics. As design research from SPSS Statistics, a legacy statistical application, has indicated, statistics beginners struggle to tackle the challenge of preparing a statistical research study. They turn to several sources of information in an attempt to find help and answers but are not always successful. This leads to them being unconfident before they have even started to execute the statistical study. The adaptive features of Artificial Intelligence could help support students in this case, if designed according to established principles. This thesis investigated the question whether an AI-powered solution could elevate the users' confidence in statistical research studies. In order to find the answer, a prototype with exemplary User Experience was designed and implemented. Preceding research determined the domain and market offer. User research was conducted to ensure a human-centered outcome. The prototype was evaluated with real test users, and the results answered the question in the affirmative.
Privacy in Social Networks
(2016)
Online Social Networks (OSNs) are heavily used today and, despite all privacy concerns, have found their way into our daily lives. After showing how heavy data collection violates the user's privacy, this thesis establishes mandatory and optional requirements for a Privacy-oriented Online Social Network (POSN). It evaluates twelve existing POSNs in general and with regard to those requirements. The thesis finds that none of these POSNs fulfill the requirements and therefore proposes features and patterns as a reference architecture.
Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale. One of the reasons for this is that interacting with human counterparts is typically considered much more interesting than playing against an Artificial Intelligence. Although the visual quality of game worlds has increased over the past years, they often fall short in providing consistency with regard to behavior and interactivity. This is especially true for the game worlds of MMOGs. One way of making a game world feel more alive is to implement a Fire Propagation System that defines how fire spreads in the game world. Singleplayer games like Far Cry 2 and The Legend of Zelda: Breath of the Wild already feature implementations of such a system. As far as the author of this thesis knows, however, no MMOG with an implemented Fire Propagation System has been released yet. This work introduces two approaches for developing such a system for an MMOG with a client-server architecture. It was implemented using the proprietary game engine Snowdrop. The approaches presented in this thesis can be used as a basis for developing a Fire Propagation System and can be adjusted easily to fit the needs of a specific project.
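A minimal grid-based fire spread step can illustrate the underlying mechanic (a generic cellular-automaton sketch, not one of the thesis's Snowdrop approaches): burning cells ignite flammable neighbors and then burn out.

```python
import random

# Toy cellular-automaton fire spread (illustrative only; the thesis targets
# a client-server MMOG in the Snowdrop engine).
# Cell states: 0 = empty, 1 = flammable, 2 = burning, 3 = burnt.

def spread_step(grid, ignite_prob=1.0, rng=random.random):
    """One tick: burning cells ignite flammable 4-neighbours, then burn out."""
    h, w = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 2:
                nxt[y][x] = 3  # burning -> burnt
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 1:
                        if rng() <= ignite_prob:
                            nxt[ny][nx] = 2
    return nxt

# Fire in the middle of a 3x3 flammable patch spreads to its 4-neighbours:
g = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]
g = spread_step(g)
assert g[1][1] == 3
assert g[0][1] == g[2][1] == g[1][0] == g[1][2] == 2
assert g[0][0] == 1  # diagonals untouched
```

In a client-server MMOG, the server would run such a step authoritatively and replicate only the changed cells to clients, keeping all players' worlds consistent.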
Large-scale computing platforms, like the IBM System z mainframe, are often administrated in an out-of-band manner, with a large portion of the systems management software running on dedicated servers, which causes extra hardware costs. Splitting up systems management applications into smaller services and spreading them over the platform itself is an approach that potentially helps with increasing the utilization of platform-internal resources, while at the same time lowering the need for external server hardware, which would reduce the extra costs significantly. However, with regard to IBM System z, this raises the general question of how a great number of critical services can be run and managed reliably on a heterogeneous computing landscape, as out-of-band servers and internal processor modules do not share the same processor architecture.
In this thesis, we introduce our prototypical design of a microservice infrastructure for multi-architecture environments, which we completely built upon preexisting open source projects and features they already bring along. We present how scheduling of services according to application-specific requirements and particularities can be achieved in a way that offers maximum transparency and comfort for platform operators and users.
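The core of architecture-aware scheduling can be sketched as a node filter (a simplified assumption about the general idea, not the thesis's actual infrastructure; node and service attributes below are invented): a service declares which CPU architectures it supports, and the scheduler only considers matching nodes.

```python
# Sketch of architecture-aware service placement on a heterogeneous platform
# (illustrative; the thesis builds on preexisting open source scheduling
# infrastructure). Node and service definitions are made-up examples.

def eligible_nodes(service, nodes):
    """Nodes whose CPU architecture and free resources satisfy the service."""
    return [
        n for n in nodes
        if n["arch"] in service["archs"] and n["free_cpus"] >= service["cpus"]
    ]

nodes = [
    {"name": "x86-blade", "arch": "x86_64", "free_cpus": 4},
    {"name": "z-module",  "arch": "s390x",  "free_cpus": 8},
]
monitoring = {"archs": {"s390x"}, "cpus": 2}            # must run platform-internal
dashboard  = {"archs": {"x86_64", "s390x"}, "cpus": 1}  # architecture-agnostic

assert [n["name"] for n in eligible_nodes(monitoring, nodes)] == ["z-module"]
assert len(eligible_nodes(dashboard, nodes)) == 2
```

In a real cluster manager this filtering is expressed through placement constraints or labels, so operators declare requirements once and the scheduler handles the heterogeneity transparently.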
Nowadays, more and more companies use agile software development to build software in short release cycles. Monolithic applications are split into microservices, which can be maintained and deployed independently by agile teams. Modern platforms like Docker support this process: Docker offers the means to containerize such services and orchestrate them in a container cluster. A software supply chain is the umbrella term for the process of developing, automated building and testing, as well as deploying a complete application. By combining a software supply chain and Docker, those processes can be automated in standardized environments. Since Docker is a young technology and software supply chains are critical processes in organizations, security needs to be reviewed. In this work, a software supply chain based on Docker is built and a threat modeling process is used to assess its security. The main components are modeled and threats are identified using STRIDE. Afterwards, risks are calculated and methods to secure the software supply chain based on the security objectives of confidentiality, integrity, and availability are discussed. As a result, some components require special treatment in a security context, since they have a high residual risk of being targeted by an attacker. This work can be used as a basis to build and secure the main components of a software supply chain. However, additional components such as logging and monitoring, as well as integration into existing business processes, need to be reviewed.
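The risk-rating step of such a threat modeling process can be sketched as follows (risk = likelihood x impact is one common scheme; the thesis's exact scoring may differ, and the components and scores below are made-up examples).

```python
# Hedged sketch of risk rating in a STRIDE-based threat model. The scoring
# scheme (likelihood x impact on 1-5 scales) is a common convention, not
# necessarily the one used in the thesis; threats listed are illustrative.

STRIDE = ("Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege")

def risk(likelihood, impact):
    """Ordinal risk score from 1-5 likelihood and impact ratings."""
    return likelihood * impact

def rating(score):
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

threats = [
    # (component, STRIDE category, likelihood 1-5, impact 1-5)
    ("image registry", "Tampering", 4, 5),
    ("build server",   "Elevation of privilege", 2, 5),
    ("log viewer",     "Information disclosure", 2, 3),
]
for component, category, l, i in threats:
    assert category in STRIDE
    print(component, category, rating(risk(l, i)))
```

Components whose residual risk stays high after mitigations (here, the tampered image registry) are exactly the ones the abstract says require special treatment.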
Before gas is transported, natural gas traders have to plan with many contracts every day. If a cost-optimized solution is sought, the most attractive contracts from a large contract set have to be selected. This kind of cost optimization is also known as the day-ahead balancing problem. In this work, it is shown that this problem can be expressed as a linear program that considers important influences and restrictions of daily trading.
The aspects of the day-ahead balancing problem are examined and modelled individually. In this way, a basic linear program is gradually adapted towards a realistic mathematical formulation. The resulting linear optimization problem is implemented as a prototype that considers the discussed aspects of a cost-optimized contract selection.
Deep learning methods have proven highly effective for object recognition tasks, especially in the form of artificial neural networks. In this bachelor's thesis, a way is shown to implement a ready-to-use object recognition implementation on the NAO robotic platform using Convolutional Neural Networks based on pretrained models. Recognition of multiple objects at once is realized with the help of the Multibox algorithm. The implementation's object recognition rates are evaluated and analyzed in several tests. Furthermore, the implementation offers a graphical user interface with several options to adjust the recognition process and to control movements of the robot's head in order to acquire objects in the field of view more easily. Additionally, a dialogue system for querying further results is presented.
In recent years, new trends such as Industry 4.0 have boosted research and development in the field of autonomous systems and robotics. Robots collaborate with humans and even take over complete tasks from them. But the high degree of automation requires high reliability, even in complex and changing environments. These challenging conditions make it hard to rely on static models of the real world. In addition to adaptable maps, mobile robots require a local and current understanding of the scene. The Bosch Start-Up Company is developing robots for intra-logistic systems, which could highly benefit from such a detailed scene understanding. The aim of this work is to research and develop such a system for warehouse environments. While the possible field of application is in general very broad, this work focuses on the detection and localization of warehouse-specific objects such as pallets.
In order to provide a meaningful perception of the surroundings, an RGB-D camera is used. A pre-trained convolutional network extracts scene understanding in the form of pixelwise class labels. As this convolutional network is the core of the application, this work focuses on different network set-ups and learning strategies. One difficulty was the lack of annotated training data. Since the creation of densely labeled images is a very time-consuming process, it was important to elaborate on good alternatives. One interesting finding was that it is possible to transfer learning to a high extent from similar models pre-trained on thousands of RGB images. This is done by selective interventions on the net parameters. By ensuring a good initialization, it is possible to train towards a well-performing model within few iterations. In this way, it is possible to train even branched nets at once. This can also be achieved by including certain normalization steps. Another important aspect was to find a suitable way to incorporate depth information into the existing model. By providing the height over ground as an additional feature, the segmentation accuracy was further improved while keeping the extra computational costs low. Finally, the segmentation maps are refined by a conditional random field. The joint training of both parts results in accurate object segmentations comparable to recently published state-of-the-art models.
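The "height over ground as an additional feature" idea can be sketched as follows (a geometric illustration under simplifying assumptions, not the thesis's pipeline): for a level, forward-facing pinhole camera at known mounting height, each depth pixel yields the imaged point's height above the floor, which is stacked onto the RGB input as a fourth channel.

```python
import numpy as np

# Sketch of fusing depth as a "height over ground" input channel. Assumes a
# level, forward-facing pinhole camera; fy, cy and the camera height below
# are illustrative values, not the thesis's calibration.

def height_over_ground(depth, fy, cy, cam_height):
    """Per-pixel height of each imaged 3D point above the floor, in metres."""
    v = np.arange(depth.shape[0], dtype=float)[:, None]  # image row index
    y_cam = depth * (v - cy) / fy  # downward offset from the optical axis
    return cam_height - y_cam

def fuse(rgb, depth, fy=500.0, cy=240.0, cam_height=1.2):
    """Stack RGB with the height channel -> 4-channel network input."""
    h = height_over_ground(depth, fy, cy, cam_height)
    return np.concatenate([rgb, h[..., None]], axis=-1)

rgb = np.zeros((480, 640, 3))
depth = np.full((480, 640), 2.0)  # everything 2 m away
x = fuse(rgb, depth)
assert x.shape == (480, 640, 4)
# a point imaged on the optical axis (row cy) lies at camera height:
assert np.isclose(x[240, 0, 3], 1.2)
```

Unlike raw depth, the height channel is invariant to how far away an object stands, which plausibly explains why it helps segmentation at little extra cost.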
By now, GPUs have become powerful general-purpose processors that have found their way not only into desktop systems but also supercomputers. To use GPUs efficiently, one needs to understand their basic architecture and their limitations. We take a look at how GPUs evolved and how they differ from CPUs to gain a deeper understanding of the workloads well suited for GPUs.
Talking about highly scalable and reliable systems, issues like logging and monitoring are often disregarded. However, being able to manage today's software systems absolutely requires deep knowledge about the current state of applications as well as the underlying infrastructure. Extracting and preparing debug information as well as various metrics in a fast and clearly arranged manner is an essential precondition for handling this task.
Since we at Bertsch Innovation GmbH also face increasing requirements concerning MediaCockpit as one of our core products, we decided to establish a centralized logging infrastructure in order to keep up with the application's evolution towards a more and more distributed system.
In this paper, I describe the steps that I have taken to set up a functioning logging tool stack consisting of Elasticsearch, Logstash, and Kibana (usually abbreviated as ELK stack). Besides outlining proper setup and configuration, I also discuss possible pitfalls as well as custom adjustments made when ELK did not meet our demands.
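A common prerequisite for such a pipeline is that applications emit structured logs, typically one JSON object per line, which Logstash can ship to Elasticsearch without fragile text parsing. The sketch below is a generic illustration of that pattern using Python's standard logging module, not Bertsch Innovation's actual configuration; the logger name is illustrative.

```python
import json
import logging

# Minimal structured-logging sketch: emit each record as one JSON object per
# line -- the shape an ELK pipeline ingests easily. Generic illustration only.

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

logger = logging.getLogger("mediacockpit")  # logger name is illustrative
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order import finished")

# The formatter's output is plain JSON and can be parsed back:
line = JsonFormatter().format(logging.LogRecord(
    "mediacockpit", logging.INFO, __file__, 1, "order import finished", None, None))
doc = json.loads(line)
assert doc["level"] == "INFO" and doc["message"] == "order import finished"
```

With logs in this shape, Logstash needs little more than a JSON codec on its input, and fields like `level` become directly filterable in Kibana.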