By now, GPUs have become powerful general-purpose processors that have found their way not only into desktop systems but also into supercomputers. To use GPUs efficiently, one needs to understand their basic architecture and its limitations. We take a look at how GPUs evolved and how they differ from CPUs in order to gain a deeper understanding of the workloads well suited to GPUs.
When talking about highly scalable and reliable systems, issues like logging and monitoring are often disregarded. However, managing today's software systems absolutely requires deep knowledge about the current state of the applications as well as of the underlying infrastructure. Extracting and preparing debug information as well as various metrics in a fast and clearly arranged manner is an essential precondition for handling this task.
Since we at Bertsch Innovation GmbH also face increasing requirements concerning MediaCockpit, one of our core products, we decided to establish a centralized logging infrastructure to keep up with the application's evolution towards a more and more distributed system.
In this paper, I describe the steps I have taken to set up a functioning logging tool stack consisting of Elasticsearch, Logstash, and Kibana (usually abbreviated as the ELK stack). Besides outlining proper setup and configuration, I also discuss possible pitfalls as well as custom adjustments made when ELK did not meet our demands.
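A common precondition for such a stack is that applications emit structured logs that Logstash can parse without fragile grok patterns. As a minimal, hypothetical sketch (the formatter class and logger name are illustrative, not taken from the paper), an application can write one JSON object per line, which Logstash's `json_lines` codec ingests directly:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line,
    ready for ingestion by Logstash's json_lines codec."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("mediacockpit")  # hypothetical logger name
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("order processed")
```

Shipping line-delimited JSON instead of free-form text means the fields arrive in Elasticsearch already typed and searchable, which avoids one of the pitfalls discussed above.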
In recent years, new trends such as Industry 4.0 have boosted research and development in the field of autonomous systems and robotics. Robots collaborate with humans and even take over complete tasks from them. But the high degree of automation requires high reliability, even in complex and changing environments. These challenging conditions make it hard to rely on static models of the real world. In addition to adaptable maps, mobile robots require a local and current understanding of the scene. The Bosch Start-Up Company is developing robots for intra-logistics systems, which could benefit greatly from such a detailed scene understanding. The aim of this work is to research and develop such a system for warehouse environments. While the possible field of application is in general very broad, this work focuses on the detection and localization of warehouse-specific objects such as pallets.
To provide a meaningful perception of the surroundings, an RGB-D camera is used. A pre-trained convolutional network extracts scene understanding in the form of pixelwise class labels. As this convolutional network is the core of the application, this work focuses on different network set-ups and learning strategies. One difficulty was the lack of annotated training data. Since the creation of densely labeled images is a very time-consuming process, it was important to elaborate on good alternatives. One interesting finding was that it is possible to transfer learning to a large extent from similar models pre-trained on thousands of RGB images. This is done by selective interventions on the net parameters. By ensuring a good initialization, it is possible to train towards a well-performing model within a few iterations. In this way it is possible to train even branched nets at once; this can also be achieved by including certain normalization steps. Another important aspect was to find a suitable way to incorporate depth information into the existing model. By providing the height over ground as an additional feature, the segmentation accuracy was further improved while keeping the extra computational cost low.
Finally, the segmentation maps are refined by a conditional random field. The joint training of both parts results in accurate object segmentations comparable to recently published state-of-the-art models.
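To illustrate the height-over-ground feature mentioned above: assuming a pinhole camera mounted at a known height and pitch (the function and its parameters are a hypothetical sketch, not the thesis's implementation), the height of each observed point can be derived from its depth reading with basic trigonometry:

```python
import math

def height_over_ground(depth, v, fy, cy, cam_height, cam_tilt):
    """Height above the floor of the point seen at one pixel.

    depth      -- distance along the viewing ray, in metres
    v          -- pixel row index
    fy, cy     -- vertical focal length and principal point (pixels, pinhole model)
    cam_height -- mounting height of the camera above the floor (metres)
    cam_tilt   -- camera pitch above the horizon (radians; negative = looking down)
    """
    # Elevation angle of this pixel's viewing ray relative to the horizon:
    # rows below the principal point (v > cy) look further downwards.
    elevation = cam_tilt - math.atan2(v - cy, fy)
    # Vertical offset of the observed point relative to the camera origin.
    return cam_height + depth * math.sin(elevation)
```

Computed per pixel, this yields an extra input channel that is cheap to obtain yet geometrically more informative for segmentation than raw depth.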
Before gas is transported, natural gas traders have to plan with many contracts every day. If a cost-optimized solution is sought, the most attractive contracts from a large contract set have to be selected. This kind of cost optimization is also known as the day-ahead balancing problem. In this work it is shown that this problem can be expressed as a linear program that considers important influences and restrictions in daily trading.
The aspects of the day-ahead balancing problem are examined and modelled individually. In this way, a basic linear program is gradually adapted towards a realistic mathematical formulation. The resulting linear optimization problem is implemented as a prototype that considers the discussed aspects of a cost-optimized contract selection.
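In a much simplified, purely illustrative form (the thesis's actual model includes further influences and trading restrictions), the core of such a contract-selection LP could be written as:

```latex
\begin{aligned}
\min_{x}\;\; & \sum_{i=1}^{n} c_i \, x_i \\
\text{s.t.}\;\; & \sum_{i=1}^{n} x_i = D, \\
& 0 \le x_i \le Q_i, \qquad i = 1, \dots, n,
\end{aligned}
```

where \(x_i\) is the quantity nominated from contract \(i\), \(c_i\) its unit price, \(Q_i\) its maximum deliverable quantity, and \(D\) the imbalance to be covered for the day ahead.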
Deep learning methods have proven highly effective for object recognition tasks, especially in the form of artificial neural networks. In this bachelor's thesis, a ready-to-use object recognition system is implemented on the NAO robotic platform using convolutional neural networks based on pre-trained models. Recognition of multiple objects at once is realized with the help of the Multibox algorithm. The implementation's object recognition rates are evaluated and analyzed in several tests.
Furthermore, the implementation offers a graphical user interface with several options to adjust the recognition process and to control movements of the robot's head in order to acquire objects in the field of view more easily. Additionally, a dialogue system for querying further results is presented.
Privacy in Social Networks
(2016)
Online Social Networks (OSNs) are heavily used today and, despite all privacy concerns, have found their way into our daily life. After showing how heavy data collection violates the user's privacy, this thesis establishes mandatory and optional requirements for a privacy-oriented Online Social Network (POSN). It evaluates twelve existing POSNs, both in general and with regard to those requirements. The thesis finds that none of these POSNs is able to fulfill the requirements and therefore proposes features and patterns as a reference architecture.
Nowadays more and more companies use agile software development to build software in short release cycles. Monolithic applications are split into microservices, which can be maintained and deployed independently by agile teams. Modern platforms like Docker support this process: Docker offers tooling to containerize such services and orchestrate them in a container cluster. A software supply chain is the umbrella term for the process of developing, automated building and testing, as well as deploying a complete application. By combining a software supply chain with Docker, these processes can be automated in standardized environments. Since Docker is a young technology and software supply chains are critical processes in organizations, their security needs to be reviewed.
In this work, a software supply chain based on Docker is built and a threat modeling process is used to assess its security. The main components are modeled and threats are identified using STRIDE. Afterwards, risks are calculated and methods to secure the software supply chain based on the security objectives of confidentiality, integrity, and availability are discussed. As a result, some components require special treatment in a security context since they have a high residual risk of being targeted by an attacker. This work can be used as a basis for building and securing the main components of a software supply chain. However, additional components such as logging and monitoring, as well as integration into existing business processes, still need to be reviewed.
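The risk-calculation step described above can be sketched as a simple likelihood-times-impact scoring over the STRIDE findings. The components, ratings, and threshold below are made-up examples for illustration, not the thesis's actual assessment:

```python
# Each threat: (component, STRIDE category, likelihood 1-3, impact 1-3).
THREATS = [
    ("image registry", "Tampering",              3, 3),
    ("CI server",      "Elevation of privilege", 2, 3),
    ("build log",      "Information disclosure", 2, 1),
]

def risk(likelihood, impact):
    """Classic risk score: likelihood multiplied by impact."""
    return likelihood * impact

def high_risk(threats, threshold=6):
    """Components and threat categories whose risk reaches the threshold
    and therefore warrant special treatment."""
    return [(comp, cat) for comp, cat, lk, im in threats
            if risk(lk, im) >= threshold]
```

Under this toy scoring, tampering with the image registry and privilege escalation on the CI server exceed the threshold, mirroring the observation that some components carry a high residual risk.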
Large-scale computing platforms, like the IBM System z mainframe, are often administrated in an out-of-band manner, with a large portion of the systems management software running on dedicated servers that cause extra hardware costs. Splitting systems management applications up into smaller services and likewise spreading them across the platform itself is an approach that potentially helps to increase the utilization of platform-internal resources while at the same time lowering the need for external server hardware, which would reduce the extra costs significantly. However, with regard to IBM System z, this raises the general question of how a great number of critical services can be run and managed reliably on a heterogeneous computing landscape, as out-of-band servers and internal processor modules do not share the same processor architecture.
In this thesis, we introduce our prototypical design of a microservice infrastructure for multi-architecture environments, which we built entirely upon preexisting open source projects and the features they already bring along. We present how scheduling of services according to application-specific requirements and particularities can be achieved in a way that offers maximum transparency and comfort for platform operators and users.
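The central scheduling constraint in such a heterogeneous landscape is architecture matching: a service may only be placed on a node whose processor architecture it supports. A hypothetical sketch of that placement filter (the node names, attribute keys, and architectures are illustrative, not taken from the thesis or any particular scheduler):

```python
# Nodes of a mixed landscape: an external x86 server and a
# platform-internal POWER module (example values).
NODES = [
    {"name": "oob-server-1", "arch": "x86_64"},
    {"name": "cpc-module-1", "arch": "ppc64"},
]

def eligible_nodes(service, nodes):
    """Names of the nodes whose architecture the service supports."""
    return [n["name"] for n in nodes if n["arch"] in service["archs"]]

# A service built only for the internal modules vs. a multi-arch one.
hw_monitor = {"name": "hw-monitor", "archs": {"ppc64"}}
dashboard  = {"name": "dashboard",  "archs": {"x86_64", "ppc64"}}
```

Real cluster managers express the same idea through node attributes and placement constraints; the point here is only that the scheduler must never offer a service a node of the wrong architecture.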
Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale.
One of the reasons for this is that interacting with human counterparts is typically considered much more interesting than playing against an Artificial Intelligence.
Although the visual quality of game worlds has increased over the past years, they often fall short in providing consistency with regard to behavior and interactivity. This is especially true for the game worlds of MMOGs. One way of making a game world feel more alive is to implement a Fire Propagation System that defines how fire spreads through the game world. Single-player games like Far Cry 2 and The Legend of Zelda: Breath of the Wild already feature implementations of such a system. As far as the author of this thesis knows, however, no MMOG with an implemented Fire Propagation System has been released yet. This work introduces two approaches for developing such a system for an MMOG with a client-server architecture. It was implemented using the proprietary game engine Snowdrop. The approaches presented in this thesis can be used as a basis for developing a Fire Propagation System and can be adjusted easily to fit the needs of a specific project.
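To give a flavour of what fire propagation means mechanically, here is a minimal cellular-automaton sketch of fire spreading on a grid. It illustrates the general idea only and is not the Snowdrop implementation from the thesis; cell states and the spread probability are made-up conventions:

```python
import random

# Cell states: 0 = unburnt, 1 = burning, 2 = burnt out.

def step(grid, spread_chance, rng=random.random):
    """Advance the simulation one tick: every burning cell burns out
    and may ignite each unburnt 4-neighbour with probability spread_chance."""
    h, w = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 1:
                nxt[y][x] = 2  # burning cells burn out after one tick
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx2 = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx2 < w and grid[ny][nx2] == 0:
                        if rng() < spread_chance:
                            nxt[ny][nx2] = 1
    return nxt
```

In an MMOG, the interesting part is less the automaton itself than deciding whether such steps run on the server, on the clients, or both, and how their results are kept consistent across players, which is what the two approaches in this work address.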
The capabilities of Artificial Intelligence (AI) are utilized increasingly in today's world. Its autonomous and adaptive characteristics allow applications to be more effective and efficient. One subfield of Artificial Intelligence, Machine Learning, enables services to be tailored to a user's specific needs. This could prove useful in an information-heavy field such as statistics. As design research from SPSS Statistics, a legacy statistical application, has indicated, statistics beginners struggle to tackle the challenge of preparing a statistical research study. They turn to several sources of information in an attempt to find help and answers, but are not always successful. This leaves them lacking confidence before they have even started to execute the statistical study. The adaptive features of Artificial Intelligence could help support students in this case, if designed according to established principles. This thesis investigated the question of whether an AI-powered solution could raise users' confidence in statistical research studies. To find the answer, a prototype with an exemplary user experience was designed and implemented. Preliminary research determined the domain and the market offering. User research was conducted to ensure a human-centered outcome. The prototype was evaluated with real test users, and the results answered the question in the affirmative.