Multiplayer games can increase player enjoyment through social interactions, cooperation, and competition. Current market trends show the particular success of networked multiplayer games, which pose new networking challenges to game developers. The main challenge is synchronizing game state across players. Research identifies deterministic lockstep, snapshot interpolation, and state-sync as the primary methods for this task, each with distinct advantages and disadvantages.
This work, and the master's thesis this paper is based on, quantitatively evaluated deterministic lockstep, demonstrating its vertical (entity count) and horizontal (player count) scaling limitations, and compared the method to snapshot interpolation. Lockstep supported at least 16,000 entities with up to 10 players and scaled horizontally to 40 or more players with 1024 entities. However, a negative correlation between the entity and player count limits was observed, indicated by the maximum scaling configurations of 30 players with 4096 entities and 20 players with 8192 entities. Snapshot interpolation reached its vertical limit at 4096 entities with 10 players and its horizontal limit at 40 or more players with 1024 entities.
The paper further contributes by comparing results to related work, summarizing synchronization methods, proposing a hybrid architecture model of deterministic lockstep with snapshot interpolation for re-synchronization and hot-joins, and deconstructing Unity Transport Package’s (UTP) network packets.
Multiplayer games can increase player enjoyment through social interactions, cooperation, and competition. The popularity of such games is shown by current market trends. Networked multiplayer games in particular frequently achieve great success, but confront game developers with additional networking challenges in the already complex field of game production. The primary challenge is game state synchronization across all players. Based on current research, there are three main methods for this task: deterministic lockstep, snapshot interpolation, and state-sync, each with its own advantages and disadvantages.
This work quantitatively evaluated and discussed the vertical (entity count) and horizontal (player count) limitations of deterministic lockstep and compared the method to snapshot interpolation. Results showed that deterministic lockstep has no indicated vertical scaling limitation, supporting 16,000 or more entities with a player count of up to 10. A horizontal scaling limitation could not be found either: lockstep was confirmed to work with 40 or more players while handling 1024 entities. However, both scaling dimensions correlate negatively, as indicated by the maximum scaling configurations of 30 players with 4096 entities and 20 players with 8192 entities.
An unoptimized snapshot interpolation implementation reached a vertical scaling limit of 4096 entities with 10 players and a horizontal scaling limit of 40 or more players with 1024 entities, and was therefore found to have a lower entity limit than deterministic lockstep.
Furthermore, the results are compared to related work. Other contributions of this thesis include an overview of game networks and the three game state synchronization techniques; an architecture model for deterministic lockstep, including a hybrid approach that combines it with snapshot interpolation for re-synchronization and hot-joins; and a network packet deconstruction of the implemented networking framework, the Unity Transport Package (UTP).
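The core idea of the deterministic lockstep method evaluated above can be illustrated with a minimal sketch, under the assumption of a toy simulation: every peer buffers per-tick inputs from all players and advances only once the tick's input set is complete, so identical input streams yield identical states on every peer. The class and names here are invented for illustration and are not the thesis's implementation.

```python
# Minimal deterministic lockstep sketch (illustrative only; not the
# thesis code). Each peer advances the simulation one tick at a time,
# and only when inputs from *all* players for that tick have arrived,
# so every peer computes the exact same state from the same inputs.

class LockstepSim:
    def __init__(self, player_ids):
        self.player_ids = set(player_ids)
        self.tick = 0
        self.state = {pid: 0 for pid in player_ids}  # toy state: one int per player
        self.inbox = {}  # tick -> {player_id: input}

    def receive_input(self, tick, player_id, move):
        self.inbox.setdefault(tick, {})[player_id] = move

    def try_advance(self):
        """Advance one tick if inputs from all players have arrived."""
        pending = self.inbox.get(self.tick, {})
        if set(pending) != self.player_ids:
            return False  # stall: lockstep waits for the slowest player
        for pid, move in sorted(pending.items()):  # deterministic iteration order
            self.state[pid] += move
        del self.inbox[self.tick]
        self.tick += 1
        return True
```

The stall in `try_advance` is also why lockstep scales vertically so well: only inputs travel over the network, regardless of entity count, while the per-player wait is what pressures horizontal scaling.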
Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale.
One of the reasons for this is that interacting with human counterparts is typically considered much more interesting than playing against an Artificial Intelligence.
Although the visual quality of game worlds has increased over the past years, they often fall short in providing consistency with regard to behavior and interactivity.
This is especially true for the game worlds of MMOGs. One way of making a game world feel more alive is to implement a Fire Propagation System that defines how fire spreads in the game world. Singleplayer games like Far Cry 2 and The Legend of Zelda:
Breath of the Wild already feature implementations of such a system. As far as the author of this thesis knows, however, no MMOG with an implemented Fire Propagation System has been released yet. This work introduces two approaches for developing such a system for an MMOG with a client-server architecture.
It was implemented using the proprietary game engine Snowdrop. The approaches presented in this thesis can be used as a basis for developing a Fire Propagation System and can be adjusted easily to fit the needs of a specific project.
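At its simplest, a fire propagation system of the kind described can be modeled as a cellular grid update in which burning cells ignite flammable neighbors and then burn out. The sketch below is a generic illustration of that idea, not the Snowdrop implementation; in a client-server MMOG the server would run such a step authoritatively and replicate only the resulting state changes.

```python
# Toy grid-based fire propagation step (illustrative sketch, not the
# Snowdrop implementation). Cells are 'F' (flammable), 'B' (burning),
# or '.' (burnt out / non-flammable). Each step, burning cells ignite
# their 4-connected flammable neighbours, then burn out themselves.

def spread_fire(grid):
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]  # next state, computed from a snapshot
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 'B':
                nxt[r][c] = '.'  # this cell burns out
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 'F':
                        nxt[nr][nc] = 'B'  # ignite flammable neighbour
    return nxt
```

Computing the next state from an unmodified snapshot of the current grid keeps the update order-independent, which matters when the same step must produce identical results on a server and any observing clients.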
The increasing availability of online video content, partially fueled by the Covid-19 pandemic and the growing presence of social media, adds to the importance of providing audio descriptions as a media alternative to video content for blind and visually impaired people. In order to address concerns as to what can be sufficiently described and how such descriptions can be delivered to users, a concept has been developed that provides audio descriptions in multiple levels of detail. Relevant information is incorporated into an XML-based data structure. The concept also includes a process to provide optional explanations of terms and abbreviations, helping users without specific knowledge, or people with cognitive impairments, to comprehend complex videos. These features are implemented in a prototype based on the Able Player software. By conducting a user test, the benefits of multi-layered audio descriptions and optional explanatory content are evaluated. Findings suggest that the choice between several levels of detail is received positively. Users acknowledged the concept of explanations played in parallel to the video and described further use cases for such a practice. Participants preferred a higher level of detail for a high-paced action video and a lower level for informative content. Possibilities to extend the data structure and features include multi-language use cases and distributed systems.
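An XML structure carrying descriptions in multiple levels of detail, as the concept above describes, could be queried per level roughly as sketched below. The element and attribute names (`descriptions`, `cue`, `start`, `level`) are invented for illustration; the thesis's actual schema may differ.

```python
# Sketch of selecting audio-description cues by detail level from a
# multi-level XML structure. The schema shown here is hypothetical,
# invented only to illustrate the idea of layered descriptions.
import xml.etree.ElementTree as ET

SAMPLE = """
<descriptions>
  <cue start="12.0">
    <text level="1">A man enters.</text>
    <text level="2">A man in a grey coat enters through the side door.</text>
  </cue>
</descriptions>
"""

def cues_for_level(xml_text, level):
    """Return (start_time, text) pairs for the requested detail level."""
    root = ET.fromstring(xml_text)
    cues = []
    for cue in root.findall("cue"):
        node = cue.find(f"text[@level='{level}']")  # ElementTree XPath subset
        if node is not None:
            cues.append((float(cue.get("start")), node.text))
    return cues
```

Keeping all levels of one cue inside a single element makes switching the detail level at playback time a pure selection step, with no need to reload a different description file.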
This paper gives an overview of the advantages and weaknesses of distributed source code review tools in software engineering. We cover this topic with a specific focus on Google’s freely available software Gerrit. In chapter 1 we discuss how code-reviews are generally useful for groups of programmers. We lay out how traditional approaches differ from distributed setups where developers may be vastly distributed from a geographical point of view or where meetings are otherwise contraindicated. In chapter 2 we discuss how users can interact with Gerrit, and chapter 3 covers some basic knowledge for those people who have to administer one or more Gerrit installations. Finally, chapter 4 summarizes key points and gives an outlook on the future role of distributed code-review.
The Eclipse Rich Client Platform, as a container for component-oriented plugins, provides a framework to host plugins whose look and feel embeds well in a client workstation. J2EE client containers provide a runtime environment for applications integrated in a multi-tier architecture and therefore have to access Java 2 Enterprise Edition (J2EE) services. Combining the two container approaches creates a new runtime environment for application clients that appear in the user-interface style of Eclipse and are able to use the J2EE services. This diploma thesis discusses concepts for combining Eclipse and the client container.
Evaluating a forthcoming international bibliographic research database in form of a Zotero group
(2014)
Purpose – In order to connect the various international research hubs on physical learning spaces, a large-scale research database has been developed using a Zotero group. Hitherto, its interface and collection index have never been examined for usability. This pilot study attempts to discover what retrieval strategy combinations users apply in the Zotero web interface, and how satisfied they are with the usability and the retrieval outcomes. The results shall not just generate ideas for the improvement of the studied database, but also provide inspiration for similar Zotero projects. Design/methodology/approach – This pilot study is designed as a qualitative field study. A sample of the project's actual target group was contacted around Copenhagen, Denmark. During a home or office visit, a natural search task was defined and executed by the participant on a laptop provided by the instructor. Using TechSmith's Morae usability software, screen, webcam, and voice data were recorded and analyzed; after the recording, a usability survey was filled out. Findings – Despite only two samples, the participants use and judge the three search methods of Zotero differently. Most participants favor the free-text search method (1), although the retrieval results are unsatisfactory. In a large-scale, multi-language collection like the assessed database, browsing in hierarchical categories (2) or faceting results using a tag cloud (3) may be more effective and efficient, but only a minority of participants understands and applies these methods. Furthermore, it appears that the interface lacks intuitive navigation, especially for the non-scientific community. Novice Zotero users not familiar with the concepts of bibliographic databases may fail to differentiate between the Zotero website (the service provider) and the Zotero group (the database, the actual subject of the study). Originality/value – This is the first published usability study of a large-scale Zotero group.
It identifies usability issues regarding search functions and the web interface. Besides drawing inspiration from a similar Zotero bibliography, which uses RSS feeds and API interfaces, a few practical ways to enhance the user search experience are suggested. The pilot study concludes with suggestions for further research, designed for more reliable participant scales.
Free Culture : how big media uses technology and the law to lock down culture and control creativity
(2004)
The struggle that rages just now centers on two ideas: piracy and property. My aim in this book's next two parts is to explore these two ideas. My method is not the usual method of an academic. I don't want to plunge you into a complex argument, buttressed with references to obscure French theorists, however natural that is for the weird sort we academics have become. Instead I begin in each part with a collection of stories that set a context within which these apparently simple ideas can be more fully understood. The two sections set up the core claim of this book: that while the Internet has indeed produced something fantastic and new, our government, pushed by big media to respond to this something new, is destroying something very old. Rather than understanding the changes the Internet might permit, and rather than taking time to let common sense resolve how best to respond, we are allowing those most threatened by the changes to use their power to change the law and, more importantly, to use their power to change something fundamental about who we have always been. We allow this, I believe, not because it is right, and not because most of us really believe in these changes. We allow it because the interests most threatened are among the most powerful players in our depressingly compromised process of making law. This book is the story of one more consequence of this form of corruption, a consequence to which most of us remain oblivious.
In recent years, new trends such as Industry 4.0 have boosted research and development in the field of autonomous systems and robotics. Robots collaborate with humans and even take over complete tasks. But the high degree of automation requires high reliability even in complex and changing environments. Those challenging conditions make it hard to rely on static models of the real world. In addition to adaptable maps, mobile robots require a local and current understanding of the scene. The Bosch start-up company is developing robots for intra-logistics systems, which could highly benefit from such a detailed scene understanding. The aim of this work is to research and develop such a system for warehouse environments. While the possible field of application is in general very broad, this work focuses on the detection and localization of warehouse-specific objects such as pallets.
In order to provide a meaningful perception of the surroundings, an RGB-D camera is used. A pre-trained convolutional network extracts scene understanding in the form of pixelwise class labels. As this convolutional network is the core of the application, this work focuses on different network set-ups and learning strategies. One difficulty was the lack of annotated training data. Since the creation of densely labeled images is a very time-consuming process, it was important to elaborate on good alternatives. One interesting finding was that it is possible to transfer learning to a high extent from similar models pre-trained on thousands of RGB images. This is done by selective interventions on the net parameters. By ensuring a good initialization, it is possible to train towards a well-performing model within few iterations. In this way, it is possible to train even branched nets at once. This can also be achieved by including certain normalization steps. Another important aspect was to find a suitable way to incorporate depth information into the existing model. By providing the height over ground as an additional feature, the segmentation accuracy was further improved while keeping the extra computational costs low.
Finally, the segmentation maps are refined by a conditional random field. The joint training of both parts results in accurate object segmentations comparable to recently published state-of-the-art models.
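The fusion of a depth-derived feature into the RGB input, as with the height-over-ground feature mentioned above, can be sketched as simple channel concatenation followed by per-channel normalization. This NumPy sketch only prepares the 4-channel input tensor and is not the thesis's network code; the normalization step is an assumption motivated by the abstract's mention of normalization.

```python
# Sketch: fuse a per-pixel height-over-ground map into an RGB image as
# a fourth input channel, then standardize each channel. Illustrative
# input preparation only, not the thesis's segmentation network.
import numpy as np

def fuse_rgb_height(rgb, height):
    """rgb: (H, W, 3) array; height: (H, W) height-over-ground map.
    Returns a (H, W, 4) tensor with zero-mean, unit-variance channels."""
    assert rgb.shape[:2] == height.shape
    x = np.concatenate([rgb.astype(np.float64), height[..., None]], axis=-1)
    mean = x.mean(axis=(0, 1), keepdims=True)   # per-channel statistics
    std = x.std(axis=(0, 1), keepdims=True) + 1e-8
    return (x - mean) / std
```

Treating the depth feature as just another standardized channel keeps the extra computational cost low, since only the network's first layer has to grow to accept the additional input.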
By now GPUs have become powerful general purpose processors that found their way not only into desktop systems but also supercomputers. To use GPUs efficiently one needs to understand their basic architecture and their limitations. We take a look at how GPUs evolved and how they differ from CPUs to gain a deeper understanding of the workloads well suited for GPUs.