Before gas is transported, natural gas traders have to plan with a large number of contracts every day. If a cost-optimized solution is sought, the most attractive contracts from a large contract set have to be selected. This kind of cost optimization is also known as the day-ahead balancing problem. This work shows that the problem can be expressed as a linear program that captures the important influences and restrictions of daily trading.
The aspects of the day-ahead balancing problem are examined and modelled individually. In this way, a basic linear program is gradually extended into a realistic mathematical formulation. The resulting linear optimization problem is implemented as a prototype that incorporates the discussed aspects of cost-optimized contract selection.
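As a hedged sketch of the basic idea (the prices, volume limits, and demand figure below are hypothetical, and the thesis's actual formulation includes further influences and restrictions), a bare-bones contract-selection LP can be written with scipy:

```python
# Minimal LP sketch of cost-optimized contract selection (hypothetical data).
# Decision variables: volume x_i bought from contract i.
# Objective: minimize sum(price_i * x_i)
# Constraints: sum(x_i) == demand, 0 <= x_i <= max_volume_i
from scipy.optimize import linprog

prices = [24.5, 25.1, 23.9, 26.0]        # EUR/MWh per contract (hypothetical)
max_volume = [500, 300, 200, 800]        # MWh upper bound per contract
demand = 1200                            # MWh to balance for the day

res = linprog(
    c=prices,                            # minimize total purchase cost
    A_eq=[[1, 1, 1, 1]], b_eq=[demand],  # volumes must cover demand exactly
    bounds=[(0, m) for m in max_volume], # per-contract volume limits
    method="highs",
)
print(res.x, res.fun)                    # chosen volumes and total cost
```

The solver naturally fills the cheapest contracts first, up to their limits; realistic restrictions from daily trading would enter as additional rows of the constraint system.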
Modern smartphones such as the iPhone are the first mass-market mobile devices capable of running augmented reality applications.
With the growing adoption of such smartphones, analysts now forecast rapid growth for the augmented reality market.
This Master's thesis examines augmented reality (AR) from the perspective of the customer value of content-based applications in print products.
Particular attention is paid to how publishers can generate added value for their customers and thereby make their printed products more attractive.
To this end, a customer value model developed by the author is used to analyze various industry segments, illustrated with suitable application examples.
In the final part of the thesis, customers are also surveyed on how they themselves assess the added value of augmented reality applications and whether they are willing to pay for it.
In discussions of highly scalable and reliable systems, concerns such as logging and monitoring are often disregarded. Yet managing today's software systems absolutely requires deep knowledge about the current state of the applications as well as the underlying infrastructure. Extracting and preparing debug information and various metrics in a fast and clearly arranged manner is an essential precondition for handling this task.
Since we at Bertsch Innovation GmbH also face increasing requirements concerning MediaCockpit, one of our core products, we decided to establish a centralized logging infrastructure to keep pace with the application's evolution towards a more and more distributed system.
In this paper, I describe the steps I have taken to set up a functioning logging tool stack consisting of Elasticsearch, Logstash, and Kibana (usually abbreviated as ELK stack). Besides outlining proper setup and configuration, I also discuss possible pitfalls as well as custom adjustments made when ELK did not meet our demands.
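As a hedged illustration of what such a stack enables once logs are indexed (the index pattern, field names, and endpoint below are assumptions, not the actual MediaCockpit setup), recent error-level events can be pulled from Elasticsearch via its REST search API:

```python
# Query an Elasticsearch node for recent error-level events (sketch; the
# index pattern "logstash-*" and the "level" field are assumptions).
import requests

query = {
    "query": {
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    "size": 20,
}
resp = requests.post(
    "http://localhost:9200/logstash-*/_search",
    json=query,
    timeout=10,
)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))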
Nowadays more and more companies use agile software development to build software in short release cycles. Monolithic applications are split into microservices, which can be maintained and deployed independently by agile teams. Modern platforms like Docker support this process: Docker offers services to containerize microservices and orchestrate them in a container cluster. A software supply chain is the umbrella term for the process of developing, automatically building and testing, and deploying a complete application. By combining a software supply chain with Docker, these processes can be automated in standardized environments. Since Docker is a young technology and software supply chains are critical processes in organizations, their security needs to be reviewed. In this work, a software supply chain based on Docker is built and a threat modeling process is used to assess its security. The main components are modeled and threats are identified using STRIDE. Afterwards, risks are calculated, and methods to secure the software supply chain based on the security objectives of confidentiality, integrity, and availability are discussed. As a result, some components require special treatment in a security context, since they have a high residual risk of being targeted by an attacker. This work can be used as a basis for building and securing the main components of a software supply chain. However, additional components such as logging and monitoring, as well as integration into existing business processes, still need to be reviewed.
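As a hedged illustration of the risk-calculation step (the component names, scales, and scores below are hypothetical, not the thesis's actual model), a simple likelihood times impact scoring over STRIDE-classified threats might look like this:

```python
# Toy risk calculation for modeled supply-chain components (hypothetical data).
# risk = likelihood * impact, both on a 1-5 scale; "high residual risk" > 15.
threats = [
    # (component, STRIDE category, likelihood, impact)
    ("registry",   "Tampering",              4, 5),
    ("build host", "Elevation of privilege", 3, 5),
    ("pipeline",   "Spoofing",               2, 4),
]

for component, category, likelihood, impact in threats:
    risk = likelihood * impact
    flag = "HIGH" if risk > 15 else "ok"
    print(f"{component:10s} {category:24s} risk={risk:2d} [{flag}]")
```

Components flagged this way would then be matched against mitigations derived from the confidentiality, integrity, and availability objectives.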
The subject of this thesis is an examination of technologies that connect the infotainment systems of vehicles with mobile devices. Properties such as the technical architecture of these technologies, the breadth of their offerings, and other relevant attributes are analyzed and then compared side by side in a matrix. Subsequently, the business models of the companies offering the connectivity technologies examined earlier are analyzed. The Business Model Canvas by Alexander Osterwalder and Yves Pigneur was chosen as the methodology, since it covers all essential elements of a business model and considers their interplay. To arrive at a final conclusion, the effects of these business models on automobile manufacturers and application developers are discussed and evaluated. Based on the resulting risks and opportunities, a reasoned selection is made of the technologies that should be incorporated into the future business models of automobile manufacturers and application developers.
The capabilities of Artificial Intelligence (AI) are utilized increasingly in today's world. Its autonomous and adaptive characteristics allow applications to be more effective and efficient. A particular subfield of Artificial Intelligence, Machine Learning, enables services to be tailored to a user's specific needs. This could prove useful in an information-heavy field such as statistics. As design research from SPSS Statistics, a legacy statistical application, has indicated, statistics beginners struggle to tackle the challenge of preparing a statistical research study. They turn to several sources of information in an attempt to find help and answers, but are not always successful. This leaves them lacking confidence before they have even started to execute the study. The adaptive features of Artificial Intelligence could help support students in this case, if designed according to established principles. This thesis investigated whether an AI-powered solution could elevate users' confidence in statistical research studies. To find the answer, a prototype with an exemplary user experience was designed and implemented. Preliminary research determined the domain and the market offering. User research was conducted to ensure a human-centered outcome. The prototype was evaluated with real test users, and the results answered the question in the affirmative.
Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale.
One of the reasons for this is that interacting with human counterparts is typically considered much more interesting than playing against an Artificial Intelligence.
Although the visual quality of game worlds has increased over the past years, they often fall short in providing consistency with regard to behavior and interactivity.
This is especially true for the game worlds of MMOGs. One way of making a game world feel more alive is to implement a Fire Propagation System that defines how fire spreads in the game world. Single-player games like Far Cry 2 and The Legend of Zelda: Breath of the Wild already feature implementations of such a system. As far as the author of this thesis knows, however, no MMOG with an implemented Fire Propagation System has been released yet. This work introduces two approaches for developing such a system for an MMOG with a client-server architecture.
The system was implemented using the proprietary game engine Snowdrop. The approaches presented in this thesis can be used as a basis for developing a Fire Propagation System and can easily be adjusted to fit the needs of a specific project.
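As a hedged sketch of the general idea (not the Snowdrop implementation described in the thesis), fire propagation is often modeled as a cellular automaton in which a burning cell ignites flammable neighbors with some probability each tick:

```python
# Grid-based fire propagation sketch (not the thesis's Snowdrop system).
# 0 = burnt, 1 = flammable, 2 = burning; each tick, burning cells may ignite
# their four neighbors and then burn out.
import random

def step(grid, ignite_prob=0.6):
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 2:
                nxt[r][c] = 0  # burning cell burns out
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 1
                            and random.random() < ignite_prob):
                        nxt[nr][nc] = 2  # neighbor catches fire
    return nxt

grid = [[1] * 8 for _ in range(8)]
grid[4][4] = 2  # ignition point
for _ in range(6):
    grid = step(grid)
```

In a client-server MMOG, one plausible division of labor is for the server to run such ticks authoritatively and replicate only cell-state changes to clients, though the two approaches in the thesis may differ from this sketch.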
Modern programs handle increasingly complex and performance-demanding tasks. This growth entails a higher demand for hardware resources, in particular for processor capacity. For a long time this demand was met by steadily increasing processor clock rates, but since 2005 physical limits have slowed that trend. Instead, processor manufacturers now place multiple cores with lower clock rates on a single processor. This in turn has led to new programming techniques that distribute programs across multiple cores while ensuring safe data access, deterministic execution, and performance improvements. Originally, programmers had to implement these techniques by hand; today, technologies exist that perform such management automatically.
In this thesis, several high-level parallel programming techniques are compared with respect to their performance, resource management, and usability, using an example application. The example application is meant to represent a realistically deployable program exhibiting fundamental challenges such as mutually independent and dependent computation steps, which is why a physics simulation was chosen. Parallelization was implemented with goroutines, Java parallel streams, thread pools, and C++ async functions, each in its respective programming language.
To compare the parallelization techniques, several characteristics of the parallel implementations were measured and compared against a sequential reference implementation. Performance was assessed by measuring and analyzing the execution times of the different simulations. Resource management was compared via the processor utilization of the implementations. To contrast the usability of the techniques, the number of source code lines was determined and set in relation. The analysis of this data reveals the differences between the parallelization techniques. While the Java parallel streams implementation shows high processor utilization, a high speedup compared to the other techniques, and low complexity, the C++ async implementation fails to utilize multiple processor cores and therefore cannot exploit the benefits of parallelization. The high complexity of the goroutine implementation pays off through comparatively low execution times despite low processor utilization.
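As a hedged Python stand-in for the measurement setup (the thesis's implementations are in Go, Java, and C++, and the workload below is a synthetic placeholder, not the physics simulation), the speedup of a parallel run over the sequential reference can be computed as T_seq / T_par:

```python
# Measure speedup of a thread-pool-style parallelization over a sequential
# run (Python stand-in for the thesis's Go/Java/C++ implementations).
import time
from concurrent.futures import ProcessPoolExecutor

def simulate_body(seed: int) -> float:
    # Stand-in for one independent physics computation step.
    x = float(seed)
    for _ in range(200_000):
        x = (x * x + 1.0) % 1_000_003
    return x

def timed(fn) -> float:
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

if __name__ == "__main__":
    work = list(range(32))
    t_seq = timed(lambda: [simulate_body(w) for w in work])
    with ProcessPoolExecutor() as pool:
        t_par = timed(lambda: list(pool.map(simulate_body, work)))
    print(f"speedup = {t_seq / t_par:.2f}")
```

Dependent computation steps, as in a real physics simulation, would cap the achievable speedup, which is exactly what the thesis's comparison across techniques probes.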
In recent years, new trends such as Industry 4.0 have boosted research and development in the field of autonomous systems and robotics. Robots collaborate with humans and even take over complete tasks from them. But the high degree of automation requires high reliability, even in complex and changing environments. These challenging conditions make it hard to rely on static models of the real world. In addition to adaptable maps, mobile robots require a local and current understanding of the scene. The Bosch Start-Up Company is developing robots for intra-logistics systems, which could benefit greatly from such a detailed scene understanding. The aim of this work is to research and develop such a system for warehouse environments. While the possible field of application is in general very broad, this work focuses on the detection and localization of warehouse-specific objects such as pallets.
To provide a meaningful perception of the surroundings, an RGB-D camera is used. A pre-trained convolutional network extracts scene understanding in the form of pixel-wise class labels. As this convolutional network is the core of the application, this work focuses on different network set-ups and learning strategies. One difficulty was the lack of annotated training data: since the creation of densely labeled images is a very time-consuming process, it was important to elaborate on good alternatives. One interesting finding was that it is possible to transfer learning to a high extent from similar models pre-trained on thousands of RGB images. This is done by selective interventions on the net parameters. By ensuring a good initialization, it is possible to train towards a well-performing model within few iterations; in this way, even branched nets can be trained at once. This can also be achieved by including certain normalization steps. Another important aspect was to find a suitable way to incorporate depth information into the existing model. By providing the height over ground as an additional feature, the segmentation accuracy was further improved while keeping the extra computational cost low.
Finally, the segmentation maps are refined by a conditional random field. The joint training of both parts results in accurate object segmentations comparable to recently published state-of-the-art models.
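As a hedged sketch of the height-over-ground idea (the camera intrinsics, mounting height, and array shapes below are assumed values, not the thesis's configuration), the extra input channel can be derived from the depth image and stacked onto the RGB frame:

```python
# Derive a height-over-ground channel from a depth image and stack it onto
# the RGB input (sketch; camera intrinsics/extrinsics are assumed values).
import numpy as np

def height_over_ground(depth, fy=525.0, cy=240.0, cam_height=1.2):
    # Back-project each pixel's depth to a camera-frame Y coordinate and
    # convert to height above the floor, assuming a level camera at a
    # known mounting height.
    rows = np.arange(depth.shape[0])[:, None]          # pixel row indices
    y_cam = (rows - cy) / fy * depth                   # camera-frame Y (down+)
    return cam_height - y_cam                          # height above ground

rgb = np.zeros((480, 640, 3), dtype=np.float32)        # placeholder RGB frame
depth = np.full((480, 640), 2.0, dtype=np.float32)     # placeholder depth [m]
hog = height_over_ground(depth)
rgbh = np.concatenate([rgb, hog[..., None]], axis=-1)  # 4-channel net input
print(rgbh.shape)                                      # (480, 640, 4)
```

Feeding such a geometrically meaningful channel lets the network exploit depth without a full second input branch, which is consistent with the abstract's note about low extra computational cost.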
By now, GPUs have become powerful general-purpose processors that have found their way not only into desktop systems but also into supercomputers. To use GPUs efficiently, one needs to understand their basic architecture and its limitations. We take a look at how GPUs evolved and how they differ from CPUs in order to gain a deeper understanding of the workloads well suited for GPUs.