Before gas is transported, natural gas traders have to plan with many contracts every day. If a cost-optimized solution is sought, the most attractive contracts have to be selected from a large contract set. This kind of cost optimization is also known as the day-ahead balancing problem. This work shows that the problem can be expressed as a linear program that considers important influences and restrictions in daily trading.
The aspects of the day-ahead balancing problem are examined and modelled individually. In this way, a basic linear program is gradually adapted towards a realistic mathematical formulation. The resulting linear optimization problem is implemented as a prototype that considers the discussed aspects of a cost-optimized contract selection.
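The core of such a contract selection can be sketched as follows. This is a minimal illustration, not the thesis's actual model: it assumes only a per-unit price and a maximum volume per contract and a single demand constraint, under which the linear program's optimum is reached by simply filling the cheapest contracts first; the full formulation adds further trading restrictions. Contract names and numbers are invented.

```python
# Toy day-ahead selection: cover a demanded quantity at minimum cost from
# contracts with a per-unit price and a maximum volume. With only this one
# balance constraint, the LP optimum equals the greedy cheapest-first fill.

def select_contracts(contracts, demand):
    """contracts: list of (name, price_per_unit, max_volume) tuples."""
    plan, total_cost, remaining = [], 0.0, demand
    for name, price, max_vol in sorted(contracts, key=lambda c: c[1]):
        if remaining <= 0:
            break
        take = min(max_vol, remaining)   # buy as much as the cheap contract allows
        plan.append((name, take))
        total_cost += take * price
        remaining -= take
    if remaining > 0:
        raise ValueError("demand cannot be covered by the contract set")
    return plan, total_cost

# Illustrative contract set: (name, price per unit, maximum volume)
contracts = [("A", 22.0, 50), ("B", 18.0, 30), ("C", 25.0, 100)]
plan, cost = select_contracts(contracts, demand=70)
```

Once further restrictions (e.g. minimum take-or-pay volumes) enter the model, this greedy shortcut no longer applies and a general LP solver is needed, which is the direction the thesis takes.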
The publication culture on Urban Agriculture (UA) is inhabited almost exclusively by idealist and practitioner proponents. Foremost, economists (oftentimes influenced by Marxism) dare to critique the sustainability of the movement. In short, the people that start a UA project eventually require help from their city through recognition and policy support. The full breadth of these people's intentions is principally unknown, and this, in turn, hinders policy design. Investigating these rationales (using Skot-Hansen's Five Es (2005)) is the scope of this paper. It identifies a number of necessary policy changes, but ultimately pinpoints that raising awareness and participating in policy design and implementation requires the involvement of activists, NGOs, and individual UA champions. It is found that, in one way or another, most UA proponents' motives can be traced back to a facet of community empowerment. Amongst the variety of rationales, especially the non-capitalist culture of UA is said to further its sustainability (not just in economic terms), because it brings forth a culture that embodies the said empowerment and shapes a democratic, inclusive sharing community. Hence, UA is identified as a strategy for urban cultural regeneration.
Evaluating a forthcoming international bibliographic research database in the form of a Zotero group
(2014)
Purpose – In order to connect the various international research hubs on physical learning spaces, a large-scale research database has been developed, using a Zotero group. Hitherto, its interface and collection index have never been examined for usability. This pilot study attempts to discover what retrieval strategy combinations users apply in the Zotero web interface, and how satisfied they are with the usability and the retrieval outcomes. The results shall not just generate ideas for the improvement of the studied database, but also provide inspiration for similar Zotero projects. Design/methodology/approach – This pilot study is designed as a qualitative field study. A sample of the project's actual target group was contacted around Copenhagen, Denmark. During a home or office visit, a natural search task was defined and executed by the participant on a laptop provided by the instructor. Using TechSmith's Morae usability software, screen, webcam, and voice data were recorded and analyzed; after the recording, a usability survey was filled out. Findings – Despite only two participants, the participants use and judge the three search methods of Zotero differently. Most participants favor the free-text search method (1), although the retrieval results are unsatisfactory. In a large-scale, multi-language collection like the assessed database, browsing in hierarchical categories (2) or faceting results using a tag cloud (3) may be more effective and efficient, but only a minority of participants understands and applies these methods. Furthermore, it appears that the interface lacks intuitive navigation, especially for the non-scientific community. Novice Zotero users not familiar with the concepts of bibliographic databases may fail to differentiate between the Zotero website (the service provider) and the Zotero group (the database, the actual subject of the study). Originality/value – This is the first published usability study of a large-scale Zotero group.
It introduces usability issues regarding search functions and the web interface. Besides drawing inspiration from a similar Zotero bibliography, which uses RSS feeds and API interfaces, a few practical ways to enhance the user search experience are suggested. The pilot study concludes with suggestions for further research, designed for more reliable participant scales.
Innovative architecture and networks for learner-centred, local education and lifelong learning are receiving growing attention. Yet, practitioners still require practical guidance, given the challenge of involving and interacting with new and diverse stakeholder groups, such as architects and politicians, or the community at large. With the goal of advancing scientific and practical frameworks, this thesis approaches how stakeholders in 'education-centred urban development' (ECUD) can be helped to accomplish mutual understanding and more effective communication and interaction during planning.
Assuming the organizational theory of 'networked governance' (NG), a literature review is conducted across 'institutional learning space development' (ILSD) and the 'learning city / region' discourse (LCR), in order to discuss stakeholder involvement in planning. Six key themes are summarized and tested against a case study of 'Hume Global Learning Village' (HGLV), Australia, using a document analysis and expert online interviews.
The review finds the following themes: First, the concepts of ILSD and ECUD can be very abstract to comprehend, and stakeholders' varied understandings of 'learning' demand an open, continuous dialogue. Next, individual leadership needs to initiate a vision, and multiply buy-in and followers. Securing sustainable funding sources is a precondition to foster participation and commitment. Long-standing organizational 'silo thinking' has to be opened up towards cultures of sharing, collaboration, and innovation. Facilitation capacities are crucial to provide an inclusive planning process where consent and commitment are fostered. Lastly, change and positive learning effects may take a long time to show – this expectation has to be internalized by all stakeholders.
Despite few optimal interview sources, the case study confirms the themes, and illustrates that an excess of leadership can ensure the other conditions. This suggests that the six themes can serve as a framework for practitioners to conduct successful stakeholder involvement in planning. However, they are not unique among good-case literature. Moreover, the review shows a literature gap in how a suitable degree of stakeholder involvement can be selected. It is recommended to consolidate the various alternative planning processes and models, and to further triangulate local experiences, in order to close this gap and derive more comprehensive and universal tools for practitioners.
The capabilities of Artificial Intelligence (AI) are utilized increasingly in today's world. Their autonomous and adaptive characteristics allow applications to be more effective and efficient. A certain subfield of Artificial Intelligence, Machine Learning, is enabling services to be tailored to a user's specific needs. This could prove useful in an information-heavy field such as Statistics. As design research from SPSS Statistics, a legacy statistical application, has indicated, statistics beginners struggle to tackle the challenge of preparing a statistical research study. They turn to several sources of information in an attempt to find help and answers but are not always successful. This leaves them unconfident before they have even started to execute the statistical study. The adaptive features of Artificial Intelligence could help support students in this case, if designed according to established principles. This thesis investigated the question of whether an AI-powered solution could elevate users' confidence in statistical research studies. In order to find the answer, a prototype with exemplary User Experience was designed and implemented. Preceding research determined the domain and market offer. User research was conducted to ensure a human-centered outcome. The prototype was evaluated with real test users, and the results answered the question in the affirmative.
Today’s digital cameras use a mosaic of red, green, and blue color filters to capture images in three color channels on a single sensor plane. This thesis investigates the use of convolutional neural networks (CNNs) for demosaicing – the process of reconstructing full-color images from raw mosaic sensor data. While there are existing CNNs for demosaicing raw images from the well-established regular Bayer color filter array (CFA), this thesis focuses on how they perform on alternative non-regular sampling patterns that produce fewer aliasing artifacts, namely the stochastic Gaussian and the RandomQuarter sampling patterns (Backes and Fröhlich, 2020).
A basic UNet (Ronneberger et al., 2015) and the spatially adaptive SANet (T. Zhang et al., 2022) are implemented in a supervised training pipeline based on the PixelShift200 image dataset (Qian et al., 2021) to investigate their suitability for the irregular demosaicing task. The experiments indicate that the basic UNet encounters difficulties in restoring the missing color values, whereas the spatially adaptive convolutional layers help in processing the irregularly sampled raw images.
In addition, this thesis enhances the SANet's effectiveness by employing an alternative residual branch based on a CFA-normalized Gaussian filter, as well as a tileable modification to the Gaussian CFA pattern. The modified SANet is shown to outperform the conventional dFSR algorithm (Backes and Fröhlich, 2020) in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
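For reference, PSNR, the first of the two quality metrics mentioned above, has a compact standard definition; a minimal pure-Python sketch on flat pixel lists (the example values are illustrative, not results from the thesis):

```python
import math

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized flat
    pixel lists; higher means the reconstruction is closer to the reference."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

ref = [0, 0, 0, 0]
rec = [10, 10, 10, 10]   # constant error of 10 -> MSE = 100
value_db = round(psnr(ref, rec), 2)
# -> 28.13 dB for an 8-bit range
```

In practice the same formula is applied per image over all channels (e.g. via NumPy arrays) rather than over Python lists.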
Web Accessibility is becoming increasingly important. Guidelines and corresponding tests were created in order to ensure Web Accessibility for everyone. Detailed reports are created in order to advise content creators on this topic. However, these reports can be even more elaborate than the guidelines themselves, with their very specific technical vocabulary and their sheer length. This makes it hard, especially for non-experts, to understand what the results mean and to know where to start.
StroCards is a functional prototype developed to help viewers of Web Accessibility reports understand their contents more easily. One way of doing this is by sorting and filtering identified accessibility issues. It can generate charts from the number of failed, passed, and not applicable success criteria that highlight aspects not explained in the report itself. It can show the user how well each tested website performs in terms of accessibility regarding different responsibilities. One of its key features is generating individual reports for individual responsibilities, such as visual design. With this functionality, a designer, to stay with this example, could receive a list of issues that are relevant to them without being overwhelmed by issues that they cannot solve. This makes handling the report more efficient. Besides displaying the report by project roles, StroCards can take a more human-centered and empathetic approach by showing which user groups are affected, and therefore excluded, by accessibility issues on the website. This makes the long list of guidelines more tangible – especially for non-experts.
In the process of developing StroCards, some design decisions were made together with a group of experts. The implemented functional prototype was tested in a qualitative and quantitative user study. It was perceived as easier to understand and better to work with.
A tool like this could greatly help people maintaining, creating, and developing websites to put these Web Accessibility guidelines into practice and consequently minimize the exclusion of people from websites.
Privacy in Social Networks
(2016)
Online Social Networks (OSNs) are heavily used today and, despite all privacy concerns, have found their way into our daily life. After showing how extensive data collection violates the user's privacy, this thesis establishes mandatory and optional requirements for a privacy-oriented Online Social Network (POSN). It evaluates twelve existing POSNs in general and with regard to those requirements. The paper finds that none of these POSNs are able to fulfill the requirements and therefore proposes features and patterns as a reference architecture.
By now, GPUs have become powerful general-purpose processors that have found their way not only into desktop systems but also into supercomputers. To use GPUs efficiently, one needs to understand their basic architecture and limitations. We take a look at how GPUs evolved and how they differ from CPUs to gain a deeper understanding of the workloads well suited for GPUs.
This bachelor thesis describes a prototypical implementation of a 3D user interface for intuitive real-time set editing in virtual production. Furthermore, this approach is evaluated qualitatively through a user group testing the device and filling in a questionnaire. The proportion of virtual elements created with computer graphics technology in all areas of the entertainment industry has grown steadily over the past years. Nevertheless, the editing of virtual elements can still be a costly process in terms of time and money. With the appearance of new input devices and improved tracking technologies, it is interesting to evaluate whether a real-time editing process could improve this situation. Currently bound to experts on special workstations, this process could become a more intuitive real-time workflow, enabling everybody on a film set to influence the digital editing process and work collaboratively on the scene consisting of virtual and real elements.
The increasing availability of online video content, partially fueled by the Covid-19 pandemic and the growing presence of social media, adds to the importance of providing audio descriptions as a media alternative to video content for blind and visually impaired people. In order to address concerns as to what can be sufficiently described and how such descriptions can be delivered to users, a concept has been developed that provides audio descriptions in multiple levels of detail. Relevant information is incorporated into an XML-based data structure. The concept also includes a process to provide optional explanations of terms and abbreviations, helping users without specific knowledge or people with cognitive impairments to comprehend complex videos. These features are implemented in a prototype based on the Able Player software. In a user test, the benefits of multi-layered audio descriptions and optional explanatory content are evaluated. Findings suggest that the choice of several levels of detail is received positively. Users acknowledged the concept of explanations played in parallel with the video and described further use cases for such a practice. Participants preferred a higher level of detail for a high-paced action video and a lower level for informative content. Possibilities to extend the data structure and features include multi-language use cases and distributed systems.
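An XML structure holding one described scene in several levels of detail plus an optional term explanation could be sketched as below. The element and attribute names here are purely illustrative assumptions, not the thesis's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical description entry: two detail levels for one time span,
# plus an optional explanation of a term used in the description.
desc = ET.Element("description", start="00:12.0", end="00:18.5")
ET.SubElement(desc, "level", detail="low").text = "A rider jumps a ramp."
ET.SubElement(desc, "level", detail="high").text = (
    "A BMX rider speeds up a wooden ramp and performs a backflip."
)
ET.SubElement(desc, "explanation", term="BMX").text = (
    "A small bicycle built for stunt riding."
)

xml_text = ET.tostring(desc, encoding="unicode")
levels = [lv.get("detail") for lv in desc.findall("level")]
```

A player could then pick the `level` element matching the user's chosen detail setting and queue the `explanation` elements as optional parallel audio.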
The Eclipse Rich Client Platform, as a container for component-oriented plugins, provides a framework to host plugins which, concerning their look and feel, embed well in a client workstation. J2EE client containers provide a runtime environment for applications integrated in a multi-tier architecture and therefore have to access Java 2 Enterprise Edition (J2EE) services. Combining the two container approaches creates a new runtime environment for application clients, which appear in the user-interface style of Eclipse and are able to make use of the J2EE services. This diploma thesis discusses concepts for combining Eclipse and the client container.
This diploma thesis describes the process of creating a website for an association. The project is assessed, and the target group and the needs of the association are defined in order to develop a concept for the website. Then the scheduling, the activity planning, and the notion of team management are analyzed in this context. The third part describes the decisions made regarding screen and interface design. The fourth part contains information about deploying the site on the web, about registering it with search engines, and about keeping it up to date.
Websites and web applications, whether they represent shopping systems, on-demand services, or social networks, have something in common: data must be stored somewhere and somehow. This job can be achieved by various solutions with very different performance characteristics, e.g. based on simple data files, databases, or high-performance RAM storage solutions. For today's popular web applications it is important to handle database operations in a minimum amount of time, because they are struggling with a vast increase in visitors and user-generated data. Therefore, a major requirement for modern database applications is to handle huge data (also called big data) in a short amount of time and to provide high availability for that data. A very popular database application in the open-source community is MySQL, which was originally developed by a Swedish company called MySQL AB and is now maintained by Oracle. MySQL is shipped in a bundle with the Apache web server and therefore has a large distribution. This database is easily installed, maintained, and administrated. By default, MySQL is shipped with the MyISAM storage engine, which has good performance on read requests, but a poor one on massively parallel write requests. With appropriate tuning of various database settings, special architecture setups (replication, partitioning, etc.), or other storage engines, MySQL can be turned into a fast database application. For example, Wikipedia uses MySQL for its backend data storage. In the lecture Ultra Large Scale Systems and System Engineering, taught by Walter Kriha at Media University Stuttgart, the question "Can a MySQL database application handle more than 3000 database requests per second?" came up at some point. Inspired by this issue, I set out to find out whether MySQL is able to handle such an amount of requests per second.
At that time I also read about the high-availability and scalability solution MySQL Cluster, and it was the right time to test the performance of that solution. In this paper I describe how to set up a MySQL database server with the additional MySQL Cluster storage engine ndbcluster and how to configure a database cluster. In addition, I execute some database tests on that cluster to prove that it is possible to get a throughput of >= 3000 read requests per second with a MySQL database.
Secure Search
(2011)
Nowadays it is easy to track web users across websites: cookies, web bugs, and browser fingerprints are very useful techniques to achieve this. The data collected can be used to derive a specific user profile. This information can be used by third parties to present personalized advertisements while the user surfs the web. In addition, a potential attacker could monitor all web traffic of a user, e.g. their search queries. As a consequence, the attacker knows the intentions of the web user and of the company they are working for. As competitors may be very interested in such information, this could lead to a new form of industrial espionage. In this paper I present some of the techniques commonly used. I illustrate some problems caused by the usage of insecure transmission lines and compromised search engines. Some of the camouflage techniques presented may help to protect the web user's identity. This paper is based on the lecture "Secure Systems", taught by Professor Walter Kriha at the Media University (HdM) Stuttgart.
Deep learning methods have proven highly effective for object recognition tasks, especially in the form of artificial neural networks. In this bachelor's thesis, a way is shown to implement a ready-to-use object recognition system on the NAO robotic platform using Convolutional Neural Networks based on pretrained models. Recognition of multiple objects at once is realized with the help of the Multibox algorithm. The implementation's object recognition rates are evaluated and analyzed in several tests.
Furthermore, the implementation offers a graphical user interface with several options to adjust the recognition process and to control movements of the robot's head in order to acquire objects in the field of view more easily. Additionally, a dialogue system for querying further results is presented.
This paper gives an overview of the advantages and weaknesses of distributed source code review tools in software engineering. We cover this topic with a specific focus on Google's freely available software Gerrit. In chapter 1 we discuss how code reviews are generally useful for groups of programmers. We lay out how traditional approaches differ from distributed setups, where developers may be widely distributed geographically or where meetings are otherwise contraindicated. In chapter 2 we discuss how users can interact with Gerrit, and chapter 3 covers some basic knowledge for those who have to administer one or more Gerrit installations. Finally, chapter 4 summarizes key points and gives an outlook on the future role of distributed code review.
Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale. One of the reasons for this is that interacting with human counterparts is typically considered much more interesting than playing against an Artificial Intelligence. Although the visual quality of game worlds has increased over the past years, they often fall short in providing consistency with regard to behavior and interactivity. This is especially true for the game worlds of MMOGs. One way of making a game world feel more alive is to implement a Fire Propagation System that defines how fire spreads in the game world. Singleplayer games like Far Cry 2 and The Legend of Zelda: Breath of the Wild already feature implementations of such a system. As far as the author of this thesis knows, however, no MMOG with an implemented Fire Propagation System has been released yet. This work introduces two approaches for developing such a system for an MMOG with a client-server architecture. It was implemented using the proprietary game engine Snowdrop. The approaches presented in this thesis can be used as a basis for developing a Fire Propagation System and can be adjusted easily to fit the needs of a specific project.
Virtual reality (VR) is an immersive technology with a growing market and many applications for gesture recognition. This thesis presents a VR gesture recognition method using signal processing techniques. The core concept is based on comparing motion features, in the form of signals, between a runtime recording of the user and a set of possible gestures. This comparison yields a similarity score through which the most similar gesture can be recognized by a continuous recognition system. Selected comparison methods are presented, evaluated, and discussed, and an example implementation is demonstrated. Thanks to an introduced layer model, parts of the method and its implementation are interchangeable.
Similar or even better performance is achieved compared to other related work. The comparison method Dynamic Time Warping (DTW) reaches an average positive recognition rate of 98.18% with acceptable real-time application performance. Additionally, the method comes with some benefits: the position and direction of the user are irrelevant, body proportions have no significant negative impact on recognition rates, faster and slower gesture executions are possible, no user inputs are needed to communicate gesture start and end (continuous recognition), continuous gestures can also be recognized, and the recognition is fast enough to trigger gesture-specific events already during execution.
Multiplayer games can increase player enjoyment through social interactions, cooperation, and competition. Their market popularity shows the success of especially networked multiplayer games, which pose new networking challenges to game developers. The main challenge is synchronizing game state across players. Research identifies deterministic lockstep, snapshot interpolation, and state-sync as primary methods for this task, each with distinct advantages and disadvantages.
This work, and the master's thesis this paper is based on, quantitatively evaluates deterministic lockstep, demonstrating its vertical (entity count) and horizontal (player count) scaling limitations, and compares the method to snapshot interpolation. Lockstep supports a minimum of 16,000 entities for up to 10 players and horizontal scaling of 40 or more players with 1024 entities. However, a negative correlation between the entity and player count limits was observed, indicated by the maximum scaling configurations of 30 players with 4096 entities or 20 players with 8192 entities. Snapshot interpolation hit a vertical limit at 4096 entities with 10 players and a horizontal limit at 40 or more players with 1024 entities.
The paper further contributes by comparing results to related work, summarizing synchronization methods, proposing a hybrid architecture model of deterministic lockstep with snapshot interpolation for re-synchronization and hot-joins, and deconstructing Unity Transport Package’s (UTP) network packets.
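The core invariant of deterministic lockstep discussed above can be sketched in a few lines. This is a generic toy model, not the paper's implementation: the simulation advances one tick only once the inputs of all players for that tick are known, so peers that receive the same inputs (in any network arrival order) compute identical states. The "game state" here is just one integer per player:

```python
# Minimal deterministic-lockstep sketch: step the simulation only when a
# complete input set for the current tick has arrived.

class LockstepSim:
    def __init__(self, player_ids):
        self.state = {p: 0 for p in player_ids}  # toy per-player state
        self.tick = 0
        self.pending = {}  # tick -> {player: input value}

    def receive_input(self, tick, player, value):
        self.pending.setdefault(tick, {})[player] = value
        self._advance()

    def _advance(self):
        # Advance while inputs from *all* players exist for the current tick.
        while set(self.pending.get(self.tick, {})) == set(self.state):
            for player, value in sorted(self.pending.pop(self.tick).items()):
                self.state[player] += value  # deterministic update rule
            self.tick += 1

peer_a = LockstepSim(["p1", "p2"])
peer_b = LockstepSim(["p1", "p2"])
inputs = [(0, "p1", 1), (0, "p2", 2), (1, "p1", 3), (1, "p2", 4)]
for msg in inputs:
    peer_a.receive_input(*msg)
for msg in reversed(inputs):  # different arrival order, same final state
    peer_b.receive_input(*msg)
```

The sketch also makes the method's scaling trade-off visible: bandwidth grows with the player count (inputs only), while CPU cost grows with the entity count, since every peer simulates the full world, which matches the vertical and horizontal limits measured above.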