Before gas is transported, natural gas traders have to plan with many contracts every day. If a cost-optimized solution is sought, the most attractive contracts from a large contract set have to be selected. This kind of cost optimization is also known as the day-ahead balancing problem. This work shows that the problem can be expressed as a linear program that accounts for important influences and restrictions in daily trading.
The aspects of the day-ahead balancing problem are examined and modelled individually. In this way, a basic linear program is gradually extended towards a realistic mathematical formulation. The resulting linear optimization problem is implemented as a prototype that covers the discussed aspects of a cost-optimized contract selection.
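To illustrate the general idea of such a contract-selection linear program (not the thesis's actual formulation), a minimal sketch with hypothetical numbers and SciPy's `linprog` might look like this:

```python
# Minimal sketch of contract selection as a linear program.
# Costs, capacities and demand are illustrative, not taken from the thesis.
from scipy.optimize import linprog

costs      = [22.0, 25.5, 24.0]   # EUR per MWh for each contract (hypothetical)
capacities = [300., 500., 400.]   # maximum deliverable volume per contract (MWh)
demand     = 700.                 # volume to be balanced for the day (MWh)

# minimise total cost:  sum_i cost_i * x_i
# subject to:           sum_i x_i >= demand   (rewritten as -sum_i x_i <= -demand)
#                       0 <= x_i <= capacity_i
result = linprog(
    c=costs,
    A_ub=[[-1.0, -1.0, -1.0]],
    b_ub=[-demand],
    bounds=[(0, cap) for cap in capacities],
    method="highs",
)

print(result.x)    # volume taken from each contract
print(result.fun)  # total cost of the selected contracts
```

A realistic model would add further constraints for the restrictions mentioned in the abstract, but the basic structure of cost vector, capacity bounds, and demand constraint stays the same.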
Video games have a significant influence on our time. However, lack of accessibility makes it hard for disabled gamers to play most of them. Virtual reality offers new possibilities to include people with disabilities and enable them to play games. Additionally, serious VR games provide educational benefits, such as improved memory and engagement.
In this work, accessibility problems in video games and VR applications are explored, with an emphasis on serious games and on the general lack of guidelines. An overview of existing guidelines is given, and from it a set of guidelines is derived that summarizes the relevant rules for accessible VR games.
New ways to interact with VR environments come with both opportunities and challenges. This work investigates the applicability of different hands-free input methods for playing a VR game. Using a serious game, five focus methods and three activation methods were implemented as examples on the Oculus Go. The suitability of these methods was analyzed in a pre-study, which excluded head movements for controlling the game. The remaining input methods were evaluated in an explorative user study in terms of operability and ease of use. In summary, all tested methods can be used to control the game. The evaluation shows head-tracking as the preferred input method, while scanning, eye-tracking and voice control were rated as mediocre.
In addition, the correlation between input methods and different menu types was examined, but the influence turned out to be negligible.
Talking about highly scalable and reliable systems, issues like logging and monitoring are often disregarded. However, being able to manage today's software systems absolutely requires deep knowledge about the current state of applications as well as the underlying infrastructure. Extracting and preparing debug information as well as various metrics in a fast and clearly arranged manner is an essential precondition for handling this task.
Since we at Bertsch Innovation GmbH also face increasing requirements concerning MediaCockpit as one of our core products, we decided to establish a centralized logging infrastructure in order to keep up with the application's evolution towards a more and more distributed system.
In this paper, I describe the steps that I have taken in order to set up a functioning logging tool stack consisting of Elasticsearch, Logstash and Kibana (usually abbreviated as ELK stack). Besides outlining proper setup and configuration, I also discuss possible pitfalls as well as custom adjustments made when ELK did not meet our demands.
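As a rough illustration of how an application can ship structured events into such a pipeline (the paper's actual configuration is not reproduced here), the following standard-library sketch sends JSON lines to a hypothetical Logstash TCP input on localhost:5000; host, port, and field names are assumptions:

```python
# Sketch: ship structured log events as JSON lines to a Logstash TCP input.
# Assumes a Logstash pipeline with a tcp input and json_lines codec is listening.
import json
import logging
import socket

class LogstashTcpHandler(logging.Handler):
    """Send each log record as one JSON line to a Logstash TCP input."""

    def __init__(self, host="localhost", port=5000):
        super().__init__()
        self.address = (host, port)

    def emit(self, record):
        event = {
            "message": record.getMessage(),
            "level": record.levelname,
            "logger": record.name,
            "application": "demo-service",   # hypothetical service name
        }
        try:
            # one connection per event keeps the sketch simple;
            # a production handler would reuse a connection and buffer events
            with socket.create_connection(self.address, timeout=2) as sock:
                sock.sendall((json.dumps(event) + "\n").encode("utf-8"))
        except OSError:
            self.handleError(record)

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(LogstashTcpHandler())
logger.info("user uploaded asset")
```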
Nowadays more and more companies use agile software development to build software in short release cycles. Monolithic applications are split into microservices, which can be maintained and deployed independently by agile teams. Modern platforms like Docker support this process: Docker offers services to containerize such microservices and orchestrate them in a container cluster. A software supply chain is the umbrella term for the process of developing, automated building and testing, and deploying a complete application. By combining a software supply chain and Docker, these processes can be automated in standardized environments. Since Docker is a young technology and software supply chains are critical processes in organizations, their security needs to be reviewed. In this work, a software supply chain based on Docker is built and a threat modeling process is used to assess its security. The main components are modeled and threats are identified using STRIDE. Afterwards, risks are calculated, and methods to secure the software supply chain based on the security objectives of confidentiality, integrity and availability are discussed. As a result, some components require special treatment in a security context since they have a high residual risk of being targeted by an attacker. This work can be used as a basis for building and securing the main components of a software supply chain. However, additional components such as logging and monitoring, as well as integration into existing business processes, need to be reviewed.
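A toy sketch of the risk-scoring step described above: each identified STRIDE threat is given a likelihood and impact estimate, and their product is used to rank components. Component names and numbers are made up for illustration and are not the thesis's results:

```python
# Toy risk scoring for STRIDE threats: risk = likelihood * impact.
from dataclasses import dataclass

# STRIDE: Spoofing, Tampering, Repudiation, Information disclosure,
# Denial of service, Elevation of privilege.

@dataclass
class Threat:
    component: str
    stride_category: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (critical)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

threats = [
    Threat("container registry", "Tampering", 3, 5),
    Threat("CI build agent", "Elevation of privilege", 2, 5),
    Threat("deployment host", "Denial of service", 3, 3),
]

# rank components by residual risk, highest first
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.component:20s} {t.stride_category:25s} risk={t.risk}")
```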
The goal of this thesis is to develop a novel type of virtual heritage medium that utilises the combined immersive and engaging potentials of interactive mixed reality environments and spatial narratives. Concretely, this is achieved through depth-sensitive compositing of real-time 3D content into the live video of a tracked smartphone. The user can explore this mixed reality environment, watch the actions of staged 3D characters, and interact with them and with virtual artifacts. This medium would therefore provide possibilities for telling stories in direct context with existing environments, along with an immersive and engaging media experience. This work mainly focuses on how this medium can be used as an edutainment medium at sites of cultural heritage, and on establishing the technical requirements and realisation possibilities for an implementation in Unity on the iPhone 5 / iOS 7. Subsequently, a prototype is implemented in order to validate the research results.
Concepts and Services for Asylum Seekers in Public Libraries Using the Example of Germany and Norway
(2016)
The goal of the following bachelor thesis is to introduce concepts of public libraries concerning asylum seekers. As examples, the thesis uses public libraries in Germany and Norway. First, the reader will be introduced to the general situation, living conditions and preconditions of asylum seekers in both countries, as well as to the preconditions of libraries and librarians concerning monetary and territorial aspects and the education of library staff. Important international library representatives as well as local actors will be introduced, and the importance of cooperation between libraries and other organizations will be examined. In the main part, practical methods, services and offers through which libraries can help asylum seekers will be elaborated, and possibilities for asylum seekers to actively participate in the library will be explained. Challenges that can occur will be identified and discussed. Furthermore, the public library of Bergen in Norway and the public library of Duisburg in Germany will be presented as best-practice examples.
The legitimacy of users is of great importance for the security of information systems. The authentication process is a trade-off between system security and user experience: forced password complexity or multi-factor authentication, for example, can increase protection, but the application becomes more cumbersome for the users. Therefore, it makes sense to investigate whether the identity of a user can be verified reliably enough, without his active participation, to replace or supplement existing login processes.
This master's thesis examines whether the inertial sensors of a smartphone can be leveraged to continuously determine whether the device is currently in the possession of its legitimate owner or of another person. To this end, an approach proposed in related studies is implemented and examined in detail. This approach is based on a so-called Siamese artificial neural network that transforms the measured sensor values into a new vector that can be classified more reliably.
It is demonstrated that the reported results of the proposed approach can be reproduced under certain conditions. However, if the same model is used under conditions that are closer to a real-world application, its reliability decreases significantly. Therefore, a variant of the proposed approach is derived whose results are superior to those of the original model under real conditions.
The thesis concludes with concrete recommendations for the further development of the model and provides methodological suggestions for improving the quality of research on the topic of "continuous authentication".
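A simplified sketch of the Siamese-network idea described above: two sensor windows are embedded by the same encoder and compared by distance, with a contrastive loss pulling same-user pairs together. Layer sizes, window length and the loss margin are illustrative assumptions, not the thesis's actual architecture:

```python
# Sketch of a Siamese encoder for inertial-sensor windows (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorEncoder(nn.Module):
    """Maps a window of accelerometer/gyroscope samples to an embedding."""

    def __init__(self, channels=6, embedding_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, embedding_dim),
        )

    def forward(self, x):          # x: (batch, channels, window_length)
        return self.net(x)

def contrastive_loss(z1, z2, same_user, margin=1.0):
    """Pull embeddings of the same user together, push others apart."""
    dist = F.pairwise_distance(z1, z2)
    return torch.mean(same_user * dist.pow(2)
                      + (1 - same_user) * F.relu(margin - dist).pow(2))

encoder = SensorEncoder()
a = torch.randn(8, 6, 128)                     # windows from the enrolled owner
b = torch.randn(8, 6, 128)                     # windows to verify
labels = torch.randint(0, 2, (8,)).float()     # 1 = same user, 0 = different
loss = contrastive_loss(encoder(a), encoder(b), labels)
loss.backward()
```

At inference time, continuous authentication would then threshold the distance between the embedding of a fresh sensor window and the stored embeddings of the enrolled owner.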
This diploma thesis describes the process of creating a website for an association. The project is assessed, and the target group and the needs of the association are defined in order to develop a concept for the website. Next, the time planning, the activity planning and the notion of team management are analyzed in this context. The third part describes the decisions made for the screen and interface design. The fourth part provides information about deploying the site on the web, about registering it with search engines and about keeping it up to date.
With the increasing use of visual effects in feature films, TV series and commercials, flexibility becomes essential to create astonishing pictures while meeting tight production schedules. Deep image compositing introduces new possibilities that increase flexibility and solve old problems of depth based compositing. The following thesis gives an introduction to deep image compositing, illustrating its power and analyzing its use in a modern visual effects pipeline.
The capabilities of Artificial Intelligence (AI) are utilized increasingly in today's world. The autonomous and adaptive characteristics allow applications to be more effective and efficient. A certain subfield of Artificial Intelligence, Machine Learning, is enabling services to be tailored to a user's specific needs. This could prove to be useful in an information-heavy field such as Statistics. As design research from SPSS Statistics, a legacy statistical application, has indicated, statistics beginners struggle to tackle the challenge of preparing a statistical research study. They turn to several sources of information in an attempt to find help and answers but are not always successful. This leads to them being unconfident before they have even started to execute the statistical study. The adaptive features of Artificial Intelligence could help support students in this case, if designed according to established principles. This thesis investigated the question whether an AI-powered solution could elevate the users' confidence in statistical research studies. In order to find the answer, a prototype with exemplary User Experience was designed and implemented. Preceding research determined the domain and market offer. User research was conducted to ensure a human-centered outcome. The prototype was evaluated with real test users, and the results answered the question in the affirmative.
Multiplayer games can increase player enjoyment through social interactions, cooperation, and competition. Their popularity on the market shows the success especially of networked multiplayer games, which pose new networking challenges to game developers. The main challenge is synchronizing game state across players. Research identifies deterministic lockstep, snapshot interpolation, and state-sync as the primary methods for this task, each with distinct advantages and disadvantages.
This work, and the master's thesis this paper is based on, quantitatively evaluated deterministic lockstep, demonstrating its vertical (entity count) and horizontal (player count) scaling limitations, and compares the method to snapshot interpolation. Lockstep supports a minimum of 16,000 entities for up to 10 players and horizontal scaling to 40 or more players with 1024 entities. However, a negative correlation between the entity and player count limits was observed, indicated by the maximum scaling configurations of 30 players with 4096 entities or 20 players with 8192 entities. Snapshot interpolation reached its vertical limit at 4096 entities with 10 players and its horizontal limit at 40 or more players with 1024 entities.
The paper further contributes by comparing results to related work, summarizing synchronization methods, proposing a hybrid architecture model combining deterministic lockstep with snapshot interpolation for re-synchronization and hot-joins, and deconstructing the Unity Transport Package's (UTP) network packets.
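To make the lockstep principle concrete, here is a bare-bones, generic sketch (not the implementation evaluated in the paper): the simulation only advances once the inputs of every player for the current tick have arrived, so all peers run the same deterministic update on the same data:

```python
# Minimal deterministic-lockstep loop: stall until all inputs for a tick exist.
class LockstepSimulation:
    def __init__(self, player_ids):
        self.player_ids = set(player_ids)
        self.tick = 0
        self.pending = {}            # tick -> {player_id: input}
        self.state = {"entities": {}}

    def receive_input(self, tick, player_id, player_input):
        self.pending.setdefault(tick, {})[player_id] = player_input

    def try_advance(self):
        """Advance only when all inputs for the current tick are present."""
        inputs = self.pending.get(self.tick, {})
        if set(inputs) != self.player_ids:
            return False             # wait for the slowest player's input
        self.deterministic_step(inputs)
        del self.pending[self.tick]
        self.tick += 1
        return True

    def deterministic_step(self, inputs):
        # the same deterministic update runs on every peer, in a fixed order
        for player_id in sorted(inputs):
            self.state["entities"][player_id] = inputs[player_id]

sim = LockstepSimulation(player_ids=["p1", "p2"])
sim.receive_input(0, "p1", {"move": (1, 0)})
assert not sim.try_advance()         # still waiting for p2
sim.receive_input(0, "p2", {"move": (0, 1)})
assert sim.try_advance()
```

Because only inputs are exchanged, bandwidth is largely independent of the entity count, which is why lockstep scales well vertically but is sensitive to the number of players whose inputs must all arrive each tick.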
Multiplayer games can increase player enjoyment through social interactions, cooperation and competition. The popularity of such games is shown by current market trends. Networked multiplayer games in particular frequently achieve great success, but confront game developers with additional networking challenges in the already complex field of game production. The primary challenge is synchronizing the game state across all players. Based on current research, there are three main methods for this task – deterministic lockstep, snapshot interpolation and state-sync – each with its own advantages and disadvantages.
This work quantitatively evaluated and discussed the vertical (entity count) and horizontal (player count) limitations of deterministic lockstep and compared the method to snapshot interpolation. The results showed that deterministic lockstep has no indicated vertical scaling limitation, supporting 16,000 or more entities with a player count of up to 10. A horizontal scaling limitation could not be found either, and lockstep was confirmed to work with 40 or more players while handling 1024 entities. However, both scaling dimensions correlate negatively, as indicated by the maximum scaling configurations of 30 players with 4096 entities or 20 players with 8192 entities.
An unoptimized snapshot interpolation implementation reached a vertical scaling limit at 4096 entities with 10 players and a horizontal scaling limit at 40 or more players with 1024 entities, and was therefore found to have a lower entity limit than deterministic lockstep.
Furthermore, the results are compared to related work. Other contributions of this thesis include an overview of game networks and the three game state synchronization techniques, an architecture model for deterministic lockstep including a hybrid approach that combines it with snapshot interpolation for re-synchronization and hot-joins, and finally a deconstruction of the network packets of the implemented networking framework, the Unity Transport Package (UTP).
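For contrast with the lockstep sketch above, the following generic snippet illustrates the snapshot-interpolation side: the client renders the world a fixed delay behind the newest server snapshot and linearly interpolates entity positions between the two snapshots surrounding the render time. It is a hypothetical illustration, not the thesis's Unity/UTP implementation:

```python
# Minimal snapshot interpolation between two server snapshots.
def lerp(a, b, t):
    return a + (b - a) * t

def interpolate_entities(snapshots, render_time):
    """snapshots: list of (timestamp, {entity_id: (x, y)}) sorted by time."""
    older = newer = None
    for snap in snapshots:
        if snap[0] <= render_time:
            older = snap
        else:
            newer = snap
            break
    if older is None or newer is None:
        return (older or newer)[1]            # not enough data: use what we have
    t0, s0 = older
    t1, s1 = newer
    t = (render_time - t0) / (t1 - t0)
    return {eid: (lerp(s0[eid][0], s1[eid][0], t),
                  lerp(s0[eid][1], s1[eid][1], t))
            for eid in s0.keys() & s1.keys()}

snapshots = [(0.00, {"npc": (0.0, 0.0)}), (0.05, {"npc": (1.0, 0.0)})]
print(interpolate_entities(snapshots, render_time=0.025))   # {'npc': (0.5, 0.0)}
```

Here bandwidth grows with the number of replicated entities per snapshot, which matches the lower entity limit observed for snapshot interpolation in the evaluation.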
Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale. One of the reasons for this is that interacting with human counterparts is typically considered much more interesting than playing against an Artificial Intelligence. Although the visual quality of game worlds has increased over the past years, they often fall short in providing consistency with regard to behavior and interactivity. This is especially true for the game worlds of MMOGs. One way of making a game world feel more alive is to implement a Fire Propagation System that defines how fire spreads in the game world. Single-player games like Far Cry 2 and The Legend of Zelda: Breath of the Wild already feature implementations of such a system. As far as the author of this thesis knows, however, no MMOG with an implemented Fire Propagation System has been released yet. This work introduces two approaches for developing such a system for an MMOG with a client-server architecture. It was implemented using the proprietary game engine Snowdrop. The approaches presented in this thesis can be used as a basis for developing a Fire Propagation System and can easily be adjusted to fit the needs of a specific project.
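As a generic, hypothetical illustration of what a fire-propagation tick might look like (not the Snowdrop/MMOG implementation described above), a simple grid-based update could spread fire from burning cells to flammable neighbours each tick:

```python
# Toy grid-based fire propagation: one simulation tick.
import random

EMPTY, FLAMMABLE, BURNING, BURNT = range(4)

def spread_fire(grid, ignite_chance=0.4):
    """Return the next grid state; `grid` is a 2D list of cell states."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != BURNING:
                continue
            nxt[r][c] = BURNT                       # burns out after this tick
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == FLAMMABLE
                        and random.random() < ignite_chance):
                    nxt[nr][nc] = BURNING
    return nxt

grid = [[FLAMMABLE] * 5 for _ in range(5)]
grid[2][2] = BURNING
for _ in range(3):
    grid = spread_fire(grid)
```

In a client-server MMOG only the authoritative server would run such an update and replicate the resulting cell states to the clients.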
The increasing availability of online video content, partially fueled by the Covid-19 pandemic and the growing presence of social media, adds to the importance of providing audio descriptions as a media alternative to video content for blind and visually impaired people. In order to address concerns as to what can be sufficiently described and how such descriptions can be delivered to users, a concept has been developed that provides audio descriptions in multiple levels of detail. The relevant information is incorporated into an XML-based data structure. The concept also includes a process for providing optional explanations of terms and abbreviations, helping users without specific knowledge or people with cognitive concerns to comprehend complex videos. These features are implemented in a prototype based on the Able Player software. By conducting a user test, the benefits of multi-layered audio descriptions and optional explanatory content are evaluated. The findings suggest that the choice between several levels of detail is received positively. Users acknowledged the concept of explanations played in parallel with the video and described further use cases for such a practice. Participants preferred a higher level of detail for a high-paced action video and a lower level for informative content. Possibilities to extend the data structure and features include multi-language use cases and distributed systems.
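To illustrate the idea of an XML-based structure with several detail levels and optional explanations, here is a small parsing sketch; the schema below is purely hypothetical and is not the data structure defined in the thesis:

```python
# Sketch: selecting descriptions of a requested detail level from a
# hypothetical multi-level audio-description XML document.
import xml.etree.ElementTree as ET

SAMPLE = """
<descriptions>
  <segment start="12.0" end="18.5">
    <text level="1">A car drives off.</text>
    <text level="2">A red vintage car pulls out of the driveway and drives off.</text>
    <explanation term="vintage car">A car built several decades ago.</explanation>
  </segment>
</descriptions>
"""

def descriptions_for(root, level):
    """Collect the description text of the requested detail level per segment."""
    result = []
    for segment in root.findall("segment"):
        for text in segment.findall("text"):
            if int(text.get("level")) == level:
                result.append((float(segment.get("start")), text.text))
    return result

root = ET.fromstring(SAMPLE)
print(descriptions_for(root, level=2))
```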
This paper gives an overview of the advantages and weaknesses of distributed source code review tools in software engineering. We cover this topic with a specific focus on Google’s freely available software Gerrit. In chapter 1 we discuss how code-reviews are generally useful for groups of programmers. We lay out how traditional approaches differ from distributed setups where developers may be vastly distributed from a geographical point of view or where meetings are otherwise contraindicated. In chapter 2 we discuss how users can interact with Gerrit, and chapter 3 covers some basic knowledge for those people who have to administer one or more Gerrit installations. Finally, chapter 4 summarizes key points and gives an outlook on the future role of distributed code-review.
The Eclipse Rich Client Platform, as a container for component-oriented plug-ins, provides a framework to host plug-ins whose look and feel embeds well in a client workstation. J2EE client containers provide a runtime environment for applications that are integrated in a multi-tier architecture and therefore have to access Java 2 Enterprise Edition (J2EE) services. Combining the two container approaches creates a new runtime environment for application clients, which appear in the user-interface style of Eclipse and are able to make use of the J2EE services. This diploma thesis discusses concepts for combining Eclipse and the client container.
Evaluating a forthcoming international bibliographic research database in form of a Zotero group
(2014)
Purpose – In order to connect the various international research hubs on physical learning spaces, a large-scale research database has been developed using a Zotero group. Hitherto, its interface and collection index have never been examined for usability. This pilot study attempts to discover which retrieval strategy combinations users apply in the Zotero web interface, and how satisfied they are with the usability and the retrieval outcomes. The results shall not just generate ideas for the improvement of the studied database, but also provide inspiration for similar Zotero projects.
Design/methodology/approach – This pilot study is designed as a qualitative field study. A sample of the project's actual target group was contacted around Copenhagen, Denmark. During a home or office visit, a natural search task was defined and executed by the participant on a laptop provided by the instructor. Using TechSmith's Morae usability software, screen, webcam and voice data were recorded and analyzed; after the recording, a usability survey was filled out.
Findings – Despite a sample of only two participants, it is apparent that the participants use and judge the three search methods of Zotero differently. Most participants favor the free-text search method (1), although the retrieval results are unsatisfactory. In a large-scale, multi-language collection like the assessed database, browsing in hierarchical categories (2) or faceting results using a tag cloud (3) may be more effective and efficient, but only a minority of participants understands and applies these methods. Furthermore, it appears that the interface lacks intuitive navigation, especially for the non-scientific community. Novice Zotero users not familiar with the concepts of bibliographic databases may fail to differentiate between the Zotero website (the service provider) and the Zotero group (the database, the actual subject of the study).
Originality/value – This is the first published usability study of a large-scale Zotero group. It introduces usability issues regarding search functions and the web interface. Besides drawing inspiration from a similar Zotero bibliography, which uses RSS feeds and API interfaces, a few practical ways to enhance the user search experience are suggested. The pilot study concludes with suggestions for further research, designed for more reliable participant scales.
Free Culture : how big media uses technology and the law to lock down culture and control creativity
(2004)
The struggle that rages just now centers on two ideas: piracy and property. My aim in this book's next two parts is to explore these two ideas. My method is not the usual method of an academic. I don't want to plunge you into a complex argument, buttressed with references to obscure French theorists, however natural that is for the weird sort we academics have become. Instead I begin in each part with a collection of stories that set a context within which these apparently simple ideas can be more fully understood. The two sections set up the core claim of this book: that while the Internet has indeed produced something fantastic and new, our government, pushed by big media to respond to this something new, is destroying something very old. Rather than understanding the changes the Internet might permit, and rather than taking time to let common sense resolve how best to respond, we are allowing those most threatened by the changes to use their power to change the law and, more importantly, to use their power to change something fundamental about who we have always been. We allow this, I believe, not because it is right, and not because most of us really believe in these changes. We allow it because the interests most threatened are among the most powerful players in our depressingly compromised process of making law. This book is the story of one more consequence of this form of corruption, a consequence to which most of us remain oblivious.
In recent years new trends such as Industry 4.0 have boosted research and development in the field of autonomous systems and robotics. Robots collaborate with humans and even take over complete tasks from them. But the high degree of automation requires high reliability, even in complex and changing environments. Those challenging conditions make it hard to rely on static models of the real world. In addition to adaptable maps, mobile robots require a local and current understanding of the scene. The Bosch Start-up Company is developing robots for intra-logistics systems, which could highly benefit from such a detailed scene understanding. The aim of this work is to research and develop such a system for warehouse environments. While the possible field of application is in general very broad, this work focuses on the detection and localization of warehouse-specific objects such as pallets.
In order to provide a meaningful perception of the surroundings, an RGB-D camera is used. A pre-trained convolutional network extracts scene understanding in the form of pixel-wise class labels. As this convolutional network is the core of the application, this work focuses on different network set-ups and learning strategies. One difficulty was the lack of annotated training data. Since the creation of densely labeled images is a very time-consuming process, it was important to elaborate on good alternatives. One interesting finding was that it is possible to transfer learning to a high extent from similar models pre-trained on thousands of RGB images. This is done by selective interventions on the net parameters. By ensuring a good initialization it is possible to train towards a well-performing model within a few iterations. In this way it is possible to train even branched nets at once. This can also be achieved by including certain normalization steps. Another important aspect was to find a suitable way to incorporate depth information into the existing model. By providing the height over ground as an additional feature, the segmentation accuracy was further improved while keeping the extra computational costs low.
Finally, the segmentation maps are refined by a conditional random field. The joint training of both parts results in accurate object segmentations comparable to recently published state-of-the-art models.
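A minimal sketch of one way to reuse RGB-pretrained weights while adding a height-over-ground input channel, as hinted at above; this is a hypothetical illustration with an off-the-shelf ResNet backbone, not the thesis's actual network, and it assumes a recent torchvision (>= 0.13) for the weights argument:

```python
# Sketch: extend an RGB-pretrained first conv layer to 4 input channels
# (RGB + height-over-ground), keeping the pretrained RGB filters.
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")

old_conv = backbone.conv1                       # pretrained 3-channel conv
new_conv = nn.Conv2d(4, old_conv.out_channels,
                     kernel_size=old_conv.kernel_size,
                     stride=old_conv.stride,
                     padding=old_conv.padding,
                     bias=old_conv.bias is not None)

with torch.no_grad():
    # keep the RGB filters, initialise the extra channel with their mean
    new_conv.weight[:, :3] = old_conv.weight
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)

backbone.conv1 = new_conv

x = torch.randn(1, 4, 224, 224)                 # RGB + height-over-ground
features = backbone(x)                          # forward pass with 4 channels
```

Initialising the new channel from the mean of the pretrained RGB filters is one common heuristic for keeping the good initialization that the abstract describes as important for fast convergence.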
By now GPUs have become powerful general-purpose processors that have found their way not only into desktop systems but also into supercomputers. To use GPUs efficiently, one needs to understand their basic architecture and their limitations. We take a look at how GPUs evolved and how they differ from CPUs to gain a deeper understanding of the workloads well suited for GPUs.