Multiplayer games can increase player enjoyment through social interactions, cooperation and competition. The popularity of such games is shown by current market trends. Networked multiplayer games in particular frequently achieve great success, but confront game developers with additional networking challenges in the already complex field of game production. The primary challenge is game state synchronization across all players. Based on current research, there are three main methods for this task – deterministic lockstep, snapshot interpolation and state-sync – each with its own advantages and disadvantages.
This work quantitatively evaluated and discussed the vertical (entity count) and horizontal (player count) limitations of deterministic lockstep and compared the method to snapshot interpolation. Results showed that deterministic lockstep has no indicated vertical scaling limitation with player counts of up to 10, supporting 16,000 or more entities. A horizontal scaling limitation could not be found either, and lockstep was confirmed to work with 40 or more players while handling 1024 entities. However, the two scaling dimensions correlate negatively, as indicated by the maximum scaling configurations of 30 players with 4096 entities and 20 players with 8192 entities.
An unoptimized snapshot interpolation implementation reached its vertical scaling limit at 4096 entities with 10 players and its horizontal scaling limit at 40 or more players with 1024 entities, and was therefore found to have a lower entity limit than deterministic lockstep.
Furthermore, the results are compared to related work. Other contributions of this thesis include an overview of game networks and the three game state synchronization techniques; an architecture model for deterministic lockstep, including a hybrid approach that combines it with snapshot interpolation for re-synchronization and hot-joins; and a network packet deconstruction of the implemented networking framework, the Unity Transport Package (UTP).
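The lockstep principle the abstract describes can be illustrated with a minimal sketch. This is hypothetical code, not the thesis implementation (which builds on the Unity Transport Package): peers exchange only player inputs, and each peer advances the deterministic simulation one tick only once it holds every player's input for that tick.

```python
# Minimal deterministic-lockstep sketch (hypothetical, not the thesis code):
# peers exchange only player inputs; every peer advances the simulation one
# tick only after it holds the inputs of ALL players for that tick.

def step(state, inputs):
    # Deterministic update: same state + same inputs => same next state.
    return state + sum(inputs)

def run_lockstep(input_streams, ticks):
    """input_streams[player][tick] -> that player's input for the tick."""
    state = 0
    for tick in range(ticks):
        # Block until every player's input for this tick is available.
        tick_inputs = [stream[tick] for stream in input_streams]
        state = step(state, tick_inputs)
    return state

# Two peers replaying the same input streams reach the identical state.
streams = [[1, 0, 2], [0, 3, 1]]
peer_a = run_lockstep(streams, 3)
peer_b = run_lockstep(streams, 3)
assert peer_a == peer_b == 7
```

Because only inputs travel over the network, bandwidth is independent of the entity count, which is consistent with the vertical scaling headroom reported above.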
Multiplayer games can increase player enjoyment through social interactions, cooperation, and competition. Their market popularity shows the success especially of networked multiplayer games, which pose additional networking challenges to game developers. The main challenge is synchronizing game state across players. Research identifies deterministic lockstep, snapshot interpolation, and state-sync as the primary methods for this task, each with distinct advantages and disadvantages.
This work, and the master's thesis this paper is based on, quantitatively evaluated deterministic lockstep, demonstrating its vertical (entity count) and horizontal (player count) scaling limitations, and compares the method to snapshot interpolation. Lockstep supports a minimum of 16,000 entities for up to 10 players and a horizontal scaling of 40 or more players with 1024 entities. However, a negative correlation between entity and player count limits was observed, as indicated by the maximum scaling configurations of 30 players with 4096 entities and 20 players with 8192 entities. Snapshot interpolation hit a vertical limit at 4096 entities with 10 players and a horizontal limit at 40 or more players with 1024 entities.
The paper further contributes by comparing results to related work, summarizing synchronization methods, proposing a hybrid architecture model of deterministic lockstep with snapshot interpolation for re-synchronization and hot-joins, and deconstructing Unity Transport Package’s (UTP) network packets.
Today’s digital cameras use a mosaic of red, green, and blue color filters to capture images in three color channels on a single sensor plane. This thesis investigates the use of convolutional neural networks (CNNs) for demosaicing – the process of reconstructing full-color images from raw mosaic sensor data. While there are existing CNNs for demosaicing raw images from the well-established regular Bayer color filter array (CFA), this thesis focuses on how they perform on alternative non-regular sampling patterns that produce fewer aliasing artifacts, namely the stochastic Gaussian and the RandomQuarter sampling pattern (Backes and Fröhlich, 2020).
A basic UNet (Ronneberger et al., 2015) and the spatially adaptive SANet (T. Zhang et al., 2022) are implemented in a supervised training pipeline based on the PixelShift200 image dataset (Qian et al., 2021) to investigate their suitability for the irregular demosaicing task. The experiments indicate that the basic UNet encounters difficulties in restoring the missing color values, whereas the spatially adaptive convolutional layers help in processing the irregularly sampled raw images.
In addition, this thesis enhances SANet's effectiveness by employing an alternative residual branch based on a CFA-normalized Gaussian filter, as well as a tileable modification to the Gaussian CFA pattern. The modified SANet is shown to outperform the conventional dFSR algorithm (Backes & Fröhlich, 2020) in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
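One way to read the "CFA-normalized Gaussian filter" mentioned above is as dividing the blurred sparse samples by the blurred sampling mask, which compensates for the locally varying sample density of an irregular pattern. The following sketch illustrates that normalization with an invented deterministic quarter-density mask; it is not the thesis code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_gaussian_demosaic(raw, mask, sigma=1.0):
    """Estimate a full colour channel from sparse CFA samples.

    raw  : 2-D array of sensor values (only meaningful where mask == 1)
    mask : 2-D binary array, 1 where the CFA sampled this channel
    Dividing the blurred samples by the blurred mask compensates for the
    locally varying sample density of an irregular CFA pattern.
    """
    num = gaussian_filter(raw * mask, sigma)
    den = gaussian_filter(mask.astype(float), sigma)
    return num / np.maximum(den, 1e-8)

# Invented deterministic quarter-density sampling pattern.
i, j = np.indices((32, 32))
mask = ((3 * i + 5 * j) % 4 == 0).astype(float)

# Sanity check: a constant image must be reconstructed exactly.
truth = np.full((32, 32), 0.5)
est = normalized_gaussian_demosaic(truth, mask)
assert np.allclose(est, 0.5, atol=1e-3)
```

A filter of this kind can serve as a cheap baseline or, as in the thesis, as a residual branch whose output the network only needs to refine.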
Password-based authentication is widely used online, despite its numerous shortcomings, enabling attackers to take over users’ accounts. Phishing-resistant Fast IDentity Online (FIDO) credentials have therefore been proposed to improve account security and authentication user experience. With the recent introduction of FIDO-based passkeys, industry-leading corporations aim to drive widespread adoption of passwordless authentication to eliminate some of the most common account takeover attacks their users are exposed to. This thesis presents the first iteration of a distributed web crawler measuring the adoption of FIDO-based authentication methods on the web to observe ongoing developments and assess the viability of the promised passwordless future. The feasibility of automatically detecting authentication methods is investigated by analyzing crawled web content. Because today’s web is increasingly client-side rendered, capturing relevant data with traditional scraping methods is challenging. Thus, the traditional approach is compared to browser-based crawling of dynamic content to optimize the detection rate. The results show that authentication method detection is possible, although there are some limitations regarding accuracy and coverage. Moreover, browser-based crawling is found to significantly increase the detection rate.
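The kind of detection such a crawler performs could, for instance, look for references to the W3C WebAuthn browser API in the rendered page source. The marker list below is a hypothetical heuristic for illustration, not the thesis implementation; it also shows why browser-based crawling matters, since the API call often only appears in client-side rendered content.

```python
import re

# Hypothetical detection heuristic (not the thesis implementation): flag a
# page as offering FIDO/WebAuthn authentication if its rendered content
# references the W3C WebAuthn browser API.
WEBAUTHN_MARKERS = [
    r"navigator\.credentials\.(create|get)",
    r"PublicKeyCredential",
    r"webauthn",
]

def detects_webauthn(page_source: str) -> bool:
    return any(re.search(p, page_source, re.IGNORECASE) for p in WEBAUTHN_MARKERS)

# A statically served password form shows none of the markers ...
assert not detects_webauthn('<form action="/login"><input type="password"></form>')
# ... while the client-side rendered bundle exposes the API call.
assert detects_webauthn('if (window.PublicKeyCredential) { passkeyLogin(); }')
```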
The idea is quite obvious. Anyone studying “Media Creation & Management” as part of an international minor program should not just learn about international management topics and international media markets in theory, but also engage in their own media project as part of an international team of students – in this particular case, writing and editing as well as layout and production of a magazine on the topic of international media management.
This is exactly what 39 students of the International Media Management class did during the summer term 2023. And the result is the magazine you are now holding in your hands. The students looked at topics related to international media management from various perspectives, analyzed markets and dealt with international digital and media companies – sometimes using management tools, sometimes in a more scientific and sometimes in an entertaining way. The result is a magazine that is directed at students as well as lecturers and those responsible for international exchange programs at universities.
Did the students catch your interest? You can find more information about the minor program “Media Creation & Management” at Stuttgart Media University (Hochschule der Medien) and the idea of studying in Stuttgart in this magazine.
The idea is quite obvious. Anyone studying “Media Creation & Management” as part of an international minor program should not just learn about international management topics and international media markets in theory, but also engage in their own media project as part of an international team of students – in this particular case, writing and editing as well as layout and production of a magazine on the topic of international media management.
This is exactly what 50 students of the International Media Management class did during the winter term 2022/2023. And the result is the magazine you are now holding in your hands. The students looked at topics related to international media management from various perspectives, analyzed markets and dealt with international digital and media companies – sometimes using management tools, sometimes in a more scientific and sometimes in an entertaining way. The result is a magazine that is directed at students as well as lecturers and those responsible for international exchange programs at universities.
Did the students catch your interest? You can find more information about the minor program “Media Creation & Management” at Stuttgart Media University (Hochschule der Medien) and the idea of studying in Stuttgart in this magazine or online with the top QR code on the left.
Kind regards and see you in Stuttgart.
Yours
Uwe Eisenbeis
Prof. Dr. Uwe Eisenbeis
Dean of Studies, Program Media Management
The idea is quite obvious. Anyone studying “Media Creation & Management” as part of an international minor program should not just learn about international management topics and international media markets in theory, but also engage in their own media project as part of an international team of students – in this particular case, writing and editing as well as layout and production of a magazine on the topic of international media management.
This is exactly what 50 students of the International Media Management class did during the summer term 2022. And the result is the magazine you are now holding in your hands. The students looked at topics related to international media management from various perspectives, analyzed markets and dealt with international digital and media companies – sometimes using management tools, sometimes in a more scientific and sometimes in an entertaining way. The result is a magazine that is directed at students as well as lecturers and those responsible for international exchange programs at universities.
Did the students catch your interest? You can find more information about the minor program “Media Creation & Management” at Stuttgart Media University (Hochschule der Medien) and the idea of studying in Stuttgart in this magazine or online with the top QR code on the left.
Kind regards and see you in Stuttgart.
Yours
Uwe Eisenbeis
This study investigates the possibility of using Bartle’s player types for gamification in the context of language learning apps. By taking user preferences into account, this might assist in selecting the most suitable game elements. Learning apps are gaining popularity as an innovative method for obtaining an independent and flexible learning experience. Gamification keeps users motivated and involved with the content.
After research on the usage of gamification and its effects on the user, a language learning app prototype was created. The evaluation consisted of a user test with interview questions and the short User Experience Questionnaire (UEQ). The Bartle test of gamer psychology was used to determine the player types of the participants. The results show that, while player type and gamification preference can partially coincide, there are too many deviations to confidently say the concept can be transferred into gamification contexts. We conclude that game elements should not be chosen based on a user’s Bartle player type and are more effectively used by incorporating a variety of different gamification components.
The number of people with cognitive impairments increases together with the aging population. Thus, social robots are being researched to help relieve the nursing sector as well as to combat cognitive impairments. However, this raises concerns regarding how a social robot should relate to members of this group and what might be appropriate. In this thesis, research about the current state of social robots has been conducted, and focus groups with people from the nursing and medical field were held. To verify the credibility of the results and the scenario developed, final user tests were conducted with representatives of the target group. When using a social robot in an interaction with persons who have cognitive disabilities, the robot should speak and behave more human-like and make use of its facial expressions, stressing empathy and responding to the person accordingly. Interacting with a social robot may, however, become more significant for future generations.
Virtual reality (VR) is an immersive technology with a growing market and many applications for gesture recognition. This thesis presents a VR gesture recognition method using signal processing techniques. The core concept is based on the comparison of motion features, in the form of signals, between a runtime recording of users and a possible gesture set. This comparison yields a similarity score through which the most similar gesture can be recognized by a continuous recognition system. Selected comparison methods are presented, evaluated and discussed, and an example implementation is demonstrated. Due to an introduced layer model, parts of the method and its implementation are interchangeable.
Similar or even better performance is achieved compared to related work. The comparison method Dynamic Time Warping (DTW) reaches an average positive recognition rate of 98.18% with acceptable real-time application performance. Additionally, the method comes with several benefits: the position and direction of users are irrelevant; body proportions have no significant negative impact on recognition rates; faster and slower gesture executions are possible; no user input is needed to communicate gesture start and end (continuous recognition); continuous gestures can also be recognized; and the recognition is fast enough to trigger gesture-specific events already during execution.
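The DTW comparison named above can be sketched with the textbook dynamic-programming recurrence. The thesis compares multi-dimensional motion-feature signals; this illustration uses 1-D toy signals and invented template names, so it is a sketch of the technique rather than the thesis implementation.

```python
import math

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D signals.

    Allows one gesture execution to be faster or slower than the template:
    samples may be matched many-to-one along the warping path.
    """
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch recording
                                 cost[i][j - 1],      # stretch template
                                 cost[i - 1][j - 1])  # one-to-one match
    return cost[n][m]

def recognize(recording, gesture_set):
    """Return the gesture whose template is most similar to the recording."""
    return min(gesture_set, key=lambda name: dtw_distance(recording, gesture_set[name]))

templates = {"raise": [0, 1, 2, 3], "wave": [0, 2, 0, 2]}
# A slower execution of "raise" still maps onto the right template.
assert recognize([0, 0, 1, 1, 2, 2, 3, 3], templates) == "raise"
```

The tolerance for slower and faster executions visible here is exactly the benefit listed in the abstract.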
The increasing availability of online video content, partially fueled by the Covid-19 pandemic and the growing presence of social media, adds to the importance of providing audio descriptions as a media alternative to video content for blind and visually impaired people. In order to address concerns as to what can be sufficiently described and how such descriptions can be delivered to users, a concept has been developed that provides audio descriptions in multiple levels of detail. Relevant information is incorporated into an XML-based data structure. The concept also includes a process to provide optional explanations of terms and abbreviations, helping users without specific knowledge or people with cognitive concerns to comprehend complex videos. These features are implemented in a prototype based on the Able Player software. By conducting a user test, the benefits of multi-layered audio descriptions and optional explanatory content are evaluated. Findings suggest that the choice of several levels of detail is received positively. Users acknowledged the concept of explanations played in parallel to the video and described further use cases for such a practice. Participants preferred a higher level of detail for a high-paced action video and a lower level for informative content. Possibilities to extend the data structure and features include multi-language use cases and distributed systems.
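How an XML-based data structure with several levels of detail and optional explanations might be queried can be sketched as follows. All element and attribute names here are invented for illustration; the thesis defines its own schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML layout (element names invented for illustration): each
# described scene carries several descriptions in increasing levels of
# detail plus optional term explanations.
DOC = """
<audiodescription>
  <scene start="12.0" end="18.5">
    <description level="1">A rocket launches.</description>
    <description level="2">A crewed rocket lifts off from the pad at dawn.</description>
    <explanation term="pad">The platform a rocket launches from.</explanation>
  </scene>
</audiodescription>
"""

def description_for(root, start, level):
    """Pick the densest description not exceeding the user's chosen level."""
    for scene in root.iter("scene"):
        if float(scene.get("start")) == start:
            texts = {int(d.get("level")): d.text for d in scene.iter("description")}
            usable = [l for l in texts if l <= level]
            return texts[max(usable)] if usable else None

root = ET.fromstring(DOC)
assert description_for(root, 12.0, 1) == "A rocket launches."
assert description_for(root, 12.0, 2).startswith("A crewed rocket")
```

Storing all levels per scene lets the player switch detail at runtime without reloading, matching the preference differences observed between action and informative videos.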
Climate change is one of the greatest societal challenges of our time. The global food production alone accounts for 26% of global greenhouse gas emissions. Without dietary changes, the challenges of climate protection can hardly be achieved in the food sector. Technology has the ability to significantly change society and it can be used to change people’s attitude or behaviors.
The current study investigated the potential of using Persuasive Technology for guiding consumers to implement sustainable food choices. To evaluate its impact, an online grocery store was designed and prototyped using the Persuasive Systems Design model according to Oinas-Kukkonen and Harjumaa. The intended target behavior was to adjust food choices and make sustainable consumption decisions. The target group consisted of individuals between the ages 20 and 34 years.
The iterative approach of the empirical study was divided into four parts: First, the requirements of the target group were analyzed. Then a concept for the online grocery store was developed using the design principles of the Persuasive Systems Design model. The concept, Foodprint, was prototypically implemented and consequently evaluated via A/B testing with target users. Two high-fidelity prototypes were structured similarly, with the only difference that Prototype A contained persuasive elements; Prototype B was intended to collect comparative data in the user tests. Ten individuals of the target group evaluated the prototypes, and their impressions of the concept and food choices were examined to assess the impact of the Persuasive Systems Design model.
The data were analyzed qualitatively as well as quantitatively. Prototype A – with the persuasive elements – showed a more positive user experience. The evaluation of tests A and B revealed that the persuasive elements were able to influence users to identify sustainable food options.
In general, it can be concluded that testers from both tests, A and B, rated the online grocery store as helpful and would be willing to use it in the future. However, it also became evident that the target group lacked the knowledge to make informed decisions about the environmental impact of their food choices. As observed in the current study, the participants considered it difficult to assess the sustainability level of foods when grocery shopping. Their purchasing decisions relied on labels and erroneous assumptions. These observations indicate the need for more support in making sustainable food choices.
The Persuasive Systems Design model had the potential to influence the users in their food choices, suggesting that it may be an option to contribute to environmental protection in the food sector. Over time, consumers may even become more aware of the impact of their food choices and hence, could adjust their purchasing behavior in stationary retail stores.
Video games have a significant influence on our time. However, lack of accessibility makes it hard for disabled gamers to play most of them. Virtual reality offers new possibilities to include people with disabilities and enable them to play games. Additionally, serious VR games provide educational benefits, such as improved memory and engagement.
In this work, the accessibility problems in video games and VR applications are explored with an emphasis on serious games as well as a general lack of guidelines. An overview of existing guidelines is given. From this, a set of guidelines is derived that summarizes the relevant rules for accessible VR games.
New ways to interact with VR environments come with both opportunities and challenges. This work investigates the applicability of different hands-free input methods to play a VR game. Using a serious game, five focus and three activation methods were implemented as examples on the Oculus Go. The suitability of these methods was analyzed in a pre-study that excluded head movements for controlling the game. The remaining input methods were evaluated in an explorative user study in terms of operability and ease of use. In summary, all tested methods can be used to control the game. The evaluation shows head-tracking as the preferred input method, while scanning, eye-tracking and voice control were rated mediocre.
In addition, the correlation between input methods and different menu types was examined, but the influence turned out to be negligible.
Web Accessibility is becoming increasingly important. Guidelines and corresponding tests were created in order to ensure Web Accessibility for everyone. Detailed reports are created in order to advise content creators on this topic. However, these reports can be even more elaborate than the guidelines themselves, with their very specific and technical vocabulary and their sheer length. This makes it hard, especially for non-experts, to understand what the results mean and to know where to start.
StroCards is a functional prototype developed to help viewers of Web Accessibility reports understand their contents more easily. One way of doing this is by sorting and filtering identified accessibility issues. It can generate charts from the number of failed, passed and not applicable success criteria that highlight aspects not explained in the report itself. It can show the user how well each tested website performs in terms of accessibility regarding different responsibilities. One of its key features is generating individual reports for individual responsibilities, such as visual design. With this functionality, a designer, to stay with this example, could receive a list of issues that are relevant to them without being overwhelmed by issues that they cannot solve. This makes handling the report more efficient. Besides displaying the report by project roles, StroCards can take a more human-centered and empathetic approach by showing which user groups are affected, and therefore excluded, by accessibility issues on the website. This makes the long list of guidelines more tangible – especially for non-experts.
In the process of developing StroCards, some design decisions were made together with a group of experts. The implemented functional prototype was tested in a qualitative and quantitative user study and was perceived as easier to understand and better to work with than the original report format.
A tool like this could greatly help people maintaining, creating, and developing websites to put Web Accessibility guidelines into practice and consequently minimize the exclusion of people from websites.
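The role-specific report generation described above can be pictured as a simple filter over issues tagged with responsibilities. The tagging below is an invented example, not StroCards' actual data model, although the WCAG success criterion names are real.

```python
# Hypothetical sketch of role-based report filtering: each failed success
# criterion is tagged with the responsibilities that can fix it, so a
# per-role report lists only the issues that role can act on.
issues = [
    {"criterion": "1.4.3 Contrast (Minimum)", "roles": {"visual design"}},
    {"criterion": "1.1.1 Non-text Content", "roles": {"content", "development"}},
    {"criterion": "4.1.2 Name, Role, Value", "roles": {"development"}},
]

def report_for(role, issues):
    return [i["criterion"] for i in issues if role in i["roles"]]

# The designer only sees the contrast issue, not the markup problems.
assert report_for("visual design", issues) == ["1.4.3 Contrast (Minimum)"]
assert len(report_for("development", issues)) == 2
```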
The capabilities of Artificial Intelligence (AI) are utilized increasingly in today's world. The autonomous and adaptive characteristics allow applications to be more effective and efficient. A certain subfield of Artificial Intelligence, Machine Learning, is enabling services to be tailored to a user's specific needs. This could prove to be useful in an information-heavy field such as Statistics. As design research from SPSS Statistics, a legacy statistical application, has indicated, statistics beginners struggle to tackle the challenge of preparing a statistical research study. They turn to several sources of information in an attempt to find help and answers but are not always successful. This leads to them being unconfident before they have even started to execute the statistical study. The adaptive features of Artificial Intelligence could help support students in this case, if designed according to established principles. This thesis investigated the question whether an AI-powered solution could elevate the users' confidence in statistical research studies. In order to find the answer, a prototype with exemplary User Experience was designed and implemented. Preceding research determined the domain and market offer. User research was conducted to ensure a human-centered outcome. The prototype was evaluated with real test users and the results answered the question in the affirmative.
The legitimacy of users is of great importance for the security of information systems. The authentication process is a trade-off between system security and user experience. For example, forced password complexity or multi-factor authentication can increase protection, but the application becomes more cumbersome for the users. Therefore, it makes sense to investigate whether the identity of a user can be verified reliably enough, without their active participation, to replace or supplement existing login processes.
This master thesis examines whether the inertial sensors of a smartphone can be leveraged to continuously determine whether the device is currently in the possession of its legitimate owner or of another person. To this end, an approach proposed in related studies is implemented and examined in detail. This approach is based on the use of a so-called Siamese artificial neural network to transform the measured values of the sensors into a new vector that can be classified more reliably.
It is demonstrated that the reported results of the proposed approach can be reproduced under certain conditions. However, if the same model is used under conditions that are closer to a real-world application, its reliability decreases significantly. Therefore, a variant of the proposed approach is derived whose results are superior to the original model under real conditions.
The thesis concludes with concrete recommendations for further development of the model and provides methodological suggestions for improving the quality of research in the topic of "Continuous Authentication".
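The decision stage that follows the Siamese embedding can be sketched as a distance threshold on embedding vectors. The vectors and threshold below are invented toy values; in the actual approach they come from the trained network and a tuned operating point.

```python
import numpy as np

# Sketch of the decision stage only (hypothetical values; the thesis trains
# a Siamese network to produce the embeddings). Sensor windows are mapped
# to embedding vectors; the device stays unlocked while the distance between
# the current window's embedding and the enrolled owner profile stays small.

def is_owner(current_embedding, owner_profile, threshold=1.0):
    return float(np.linalg.norm(current_embedding - owner_profile)) < threshold

owner = np.array([0.1, 0.9, 0.2])          # enrolled profile
same_user = np.array([0.15, 0.85, 0.25])   # small motion variation
other_user = np.array([0.9, 0.1, 0.8])     # different movement signature

assert is_owner(same_user, owner)
assert not is_owner(other_user, owner)
```

The threshold controls the trade-off the abstract mentions: a looser value favors user experience, a tighter one favors security.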
Privacy in Social Networks
(2016)
Online Social Networks (OSNs) are heavily used today and, despite all privacy concerns, have found their way into our daily lives. After showing how heavy data collection violates the user's privacy, this thesis establishes mandatory and optional requirements for a Privacy-orientated Online Social Network (POSN). It evaluates twelve existing POSNs in general and in regard to those requirements. The paper finds that none of these POSNs are able to fulfill the requirements and therefore proposes features and patterns as a reference architecture.
Head-mounted displays (HMDs) are increasingly used in various industries. But apart from the industry environment, the potential of HMDs in a private environment, like at home, has remained relatively unexplored so far. What daily tasks can these help with, in the home kitchen for example?
The aim of this thesis is to obtain knowledge about the usefulness of such an HMD, the HoloLens, in combination with an application, while following a new recipe. Therefore, a prototype application for the HoloLens was developed which guides a user through the cooking of a sushi burger using multimedia content.
With a mixed method design, consisting of quantitative and qualitative methods, the HoloLens in combination with an application was evaluated by 14 participants.
Not only the weight of the device was a problem for users. The test also revealed that the display darkens the view and that participants tend to look below the glasses. An advantage, however, is reaching the next cooking step without the need to use one's hands and always having in sight what needs to be done next. Positive feedback was also given for the application. Through voice control, the user communicates with a character which guides them through the recipe with videos and text.
If the technical characteristics of HMD devices improve in the future, an application in this context will be of advantage in order to simplify learning a new recipe. This device, in combination with an application, could help people with early- to middle-stage cognitive impairment and blind people to cook.
Massively Multiplayer Online Games (MMOGs) are increasing in both popularity and scale.
One of the reasons for this is that interacting with human counterparts is typically considered much more interesting than playing against an Artificial Intelligence.
Although the visual quality of game worlds has increased over the past years, they often fall short in providing consistency with regard to behavior and interactivity.
This is especially true for the game worlds of MMOGs. One way of making a game world feel more alive is to implement a Fire Propagation System that defines how fire spreads in the game world. Singleplayer games like Far Cry 2 and The Legend of Zelda:
Breath of the Wild already feature implementations of such a system. As far as the author of this thesis knows, however, no MMOG with an implemented Fire Propagation System has been released yet. This work introduces two approaches for developing such a system for an MMOG with a client-server architecture.
It was implemented using the proprietary game engine Snowdrop. The approaches presented in this thesis can be used as a basis for developing a Fire Propagation System and can be adjusted easily to fit the needs of a specific project.
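A fire propagation system of the kind discussed can be sketched as a cellular automaton over world cells. This is a generic illustration, not the Snowdrop-based implementation from the thesis: each tick, burning cells burn out and ignite flammable neighbours.

```python
# Minimal fire-propagation sketch (hypothetical; the thesis targets a
# client-server MMOG built on the Snowdrop engine). The world is a grid of
# cells; each tick, burning cells ignite orthogonal flammable neighbours.

BURNING, FLAMMABLE, BURNT = "F", ".", "x"

def tick(grid):
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == BURNING:
                nxt[r][c] = BURNT  # a burning cell burns out after one tick
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == FLAMMABLE:
                        nxt[rr][cc] = BURNING
    return nxt

grid = [list("..."), list(".F."), list("...")]
grid = tick(grid)
assert grid[1][1] == BURNT and grid[0][1] == BURNING and grid[0][0] == FLAMMABLE
```

In a client-server setting, the server could run such a deterministic tick authoritatively and replicate only ignition events to clients, keeping bandwidth low.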
Large-scale computing platforms, like the IBM System z mainframe, are often administered in an out-of-band manner, with a large portion of the systems management software running on dedicated servers that cause extra hardware costs. Splitting systems management applications into smaller services and likewise spreading them over the platform itself is an approach that potentially helps increase the utilization of platform-internal resources while lowering the need for external server hardware, which would reduce the extra costs significantly. However, with regard to IBM System z, this raises the general question of how a great number of critical services can be run and managed reliably on a heterogeneous computing landscape, as out-of-band servers and internal processor modules do not share the same processor architecture.
In this thesis, we introduce our prototypical design of a microservice infrastructure for multi-architecture environments, which we completely built upon preexisting open source projects and features they already bring along. We present how scheduling of services according to application-specific requirements and particularities can be achieved in a way that offers maximum transparency and comfort for platform operators and users.
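Scheduling services onto such a mixed amd64/s390x landscape ultimately means filtering candidate nodes by the architectures a service image supports, the way multi-arch container schedulers do. The node list below is invented for illustration and is not the thesis prototype.

```python
# Hypothetical sketch of architecture-aware scheduling: a service may only
# be placed on nodes whose CPU architecture the service image supports.
nodes = [
    {"name": "oob-server-1", "arch": "amd64"},  # external out-of-band server
    {"name": "z-lpar-1", "arch": "s390x"},      # platform-internal IBM Z node
]

def eligible_nodes(image_archs, nodes):
    """Return the names of nodes an image built for image_archs can run on."""
    return [n["name"] for n in nodes if n["arch"] in image_archs]

# A multi-arch image can go anywhere; a single-arch image is constrained.
assert eligible_nodes({"amd64", "s390x"}, nodes) == ["oob-server-1", "z-lpar-1"]
assert eligible_nodes({"s390x"}, nodes) == ["z-lpar-1"]
```

Building every service image for both architectures maximizes placement freedom, which is one way to achieve the transparency for operators mentioned above.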
Nowadays, more and more companies use agile software development to build software in short release cycles. Monolithic applications are split into microservices, which can be independently maintained and deployed by agile teams. Modern platforms like Docker support this process: Docker offers services to containerize such microservices and orchestrate them in a container cluster. A software supply chain is the umbrella term for the process of developing, automated building and testing, as well as deploying a complete application. By combining a software supply chain and Docker, those processes can be automated in standardized environments. Since Docker is a young technology and software supply chains are critical processes in organizations, their security needs to be reviewed. In this work, a software supply chain based on Docker is built and a threat modeling process is used to assess its security. The main components are modeled and threats are identified using STRIDE. Afterwards, risks are calculated and methods to secure the software supply chain based on the security objectives confidentiality, integrity and availability are discussed. As a result, some components require special treatment in a security context, since they have a high residual risk of being targeted by an attacker. This work can be used as a basis to build and secure the main components of a software supply chain. However, additional components such as logging and monitoring, as well as integration into existing business processes, need to be reviewed.
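The risk calculation step that follows STRIDE identification can be sketched as the common likelihood-times-impact rating per threat. The components and scores below are illustrative numbers, not the actual ratings from this work.

```python
# Hedged sketch of the risk-rating step (illustrative scores, not the
# thesis results): each STRIDE threat gets a likelihood and an impact on a
# 1-5 scale, and the product is the risk used to prioritise mitigations.

threats = [
    # (component, STRIDE category, likelihood 1-5, impact 1-5)
    ("registry", "Tampering", 4, 5),
    ("build server", "Elevation of Privilege", 3, 5),
    ("developer laptop", "Spoofing", 2, 3),
]

def prioritised(threats, high=12):
    rated = [(c, cat, l * i) for c, cat, l, i in threats]
    rated.sort(key=lambda t: t[2], reverse=True)
    return [(c, cat, r, "HIGH" if r >= high else "MODERATE") for c, cat, r in rated]

ranking = prioritised(threats)
assert ranking[0] == ("registry", "Tampering", 20, "HIGH")
assert ranking[-1][3] == "MODERATE"
```

Sorting by risk surfaces the components with the highest residual risk first, which is where the hardening effort should go.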
Before gas is transported, natural gas traders have to plan with many contracts every day. If a cost-optimized solution is sought, the most attractive contracts of a large contract set have to be selected. This kind of cost optimization is also known as the day-ahead balancing problem. This work shows that it is possible to express this problem as a linear program that considers important influences and restrictions of daily trading.
The aspects of the day-ahead balancing problem are examined and modelled individually. In this way, a basic linear program is gradually adapted towards a realistic mathematical formulation. The resulting linear optimization problem is implemented as a prototype that considers the discussed aspects of a cost-optimized contract selection.
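The basic structure of such a contract-selection linear program can be sketched as follows; the symbols are illustrative, and the thesis's actual formulation adds further trading restrictions:

```latex
\min_{x} \; \sum_{i=1}^{n} c_i x_i
\quad \text{s.t.} \quad
\sum_{i=1}^{n} x_i = D, \qquad
0 \le x_i \le u_i \quad (i = 1, \dots, n)
```

Here $x_i$ is the quantity drawn from contract $i$, $c_i$ its unit cost, $u_i$ its maximum deliverable quantity, and $D$ the quantity needed to balance the day-ahead position.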
Concepts and Services for Asylum Seekers in Public Libraries Using the Example of Germany and Norway
(2016)
The goal of this bachelor thesis is to introduce concepts of public libraries concerning asylum seekers, using public libraries in Germany and Norway as examples. The reader is introduced to the general situation, living conditions, and preconditions of asylum seekers in both countries, as well as to the preconditions of libraries and librarians concerning monetary and territorial aspects and the education of library staff. Important international library representatives as well as local actors are introduced, and the importance of cooperation between libraries and other organizations is examined. The main part elaborates practical methods, services, and offers with which libraries can help asylum seekers, and explains possibilities for asylum seekers to actively participate in the library. Challenges which can occur are identified and discussed. Furthermore, the public library of Bergen in Norway and the public library of Duisburg in Germany are presented as best-practice examples.
Deep learning methods have proven highly effective for object recognition tasks, especially in the form of artificial neural networks. This bachelor's thesis shows how to implement a ready-to-use object recognition system on the NAO robotic platform using convolutional neural networks based on pretrained models. Recognition of multiple objects at once is realized with the help of the Multibox algorithm. The implementation's object recognition rates are evaluated and analyzed in several tests.
Furthermore, the implementation offers a graphical user interface with several options to adjust the recognition process and to control the movements of the robot's head in order to more easily acquire objects in the field of view. Additionally, a dialogue system for querying further results is presented.
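Multibox-style multi-object detection typically ends with a non-maximum suppression step that collapses overlapping candidate boxes into one detection each. A minimal sketch of that step (box format, scores, and the overlap threshold are assumptions, not the thesis's parameters):

```python
# Non-maximum suppression: keep the highest-scoring box of each
# overlapping cluster of detections. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(detections, threshold=0.5):
    """detections: list of (score, box); returns the surviving subset."""
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, k) < threshold for _, k in kept):
            kept.append((score, box))
    return kept

dets = [(0.9, (0, 0, 10, 10)), (0.8, (1, 1, 11, 11)), (0.7, (50, 50, 60, 60))]
print(nms(dets))  # the 0.8 box overlaps the 0.9 box and is suppressed
```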
In recent years, new trends such as Industry 4.0 have boosted research and development in the field of autonomous systems and robotics. Robots collaborate with humans and even take over complete tasks. But the high degree of automation requires high reliability, even in complex and changing environments. Such challenging conditions make it hard to rely on static models of the real world. In addition to adaptable maps, mobile robots require a local and current understanding of the scene. The Bosch Start-Up Company is developing robots for intra-logistics systems, which could highly benefit from such a detailed scene understanding. The aim of this work is to research and develop such a system for warehouse environments. While the possible field of application is in general very broad, this work focuses on the detection and localization of warehouse-specific objects such as pallets.
In order to provide a meaningful perception of the surroundings, an RGB-D camera is used. A pretrained convolutional network extracts scene understanding in the form of pixelwise class labels. As this convolutional network is the core of the application, this work focuses on different network setups and learning strategies. One difficulty was the lack of annotated training data. Since the creation of densely labeled images is a very time-consuming process, it was important to elaborate on good alternatives. One interesting finding was that it is possible to transfer learning to a high extent from similar models pretrained on thousands of RGB images. This is done by selective interventions on the net parameters. By ensuring a good initialization, it is possible to train towards a well-performing model within few iterations. In this way it is possible to train even branched nets at once; this can also be achieved by including certain normalization steps. Another important aspect was to find a suitable way to incorporate depth information into the existing model. By providing the height over ground as an additional feature, the segmentation accuracy was further improved while keeping the extra computational costs low.
Finally, the segmentation maps are refined by a conditional random field. The joint training of both parts results in accurate object segmentations comparable to recently published state-of-the-art models.
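Segmentation accuracy of the kind reported here is commonly measured as per-class intersection over union on the pixelwise labels. A toy sketch of that metric (the label maps and class names are made up, not taken from the thesis):

```python
# Per-class intersection over union on flattened pixelwise label maps.

def class_iou(pred, truth, cls):
    """IoU for one class over paired prediction/ground-truth labels."""
    inter = sum(p == cls and t == cls for p, t in zip(pred, truth))
    union = sum(p == cls or t == cls for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # class absent in both maps

# Toy flattened label maps: 0 = floor, 1 = pallet
truth = [0, 0, 1, 1, 1, 0]
pred  = [0, 1, 1, 1, 0, 0]
print(class_iou(pred, truth, 1))  # 2 pixels agree out of 4 in the union -> 0.5
```

Averaging this value over all classes gives the mean IoU usually quoted when comparing against state-of-the-art models.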
When searching for bugs in Java enterprise applications, an essential part of the effort consists in redeploying the source code and relaunching the server over and over. In order to improve this situation, this thesis suggests the implementation of a runtime debugging tool. The tool's purpose is to facilitate the enrichment of operating application code with logging statements, which are intended to generate additional output concerning the webapp's current state. By means of this so-called instrumentation, the actual process of debugging can be supported and accelerated without having to interrupt the server's execution.
Due to the significance of both Java EE and Spring for today's enterprise development, the implementation of a dedicated debugging tool for each platform is covered. Both solutions pursue the same goal, but differ in the approach and the programming paradigm forming their basis. This document introduces their implementation details and evaluates them against a specification that defines the general conditions and expectations in terms of the capabilities of a satisfying result.
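The core instrumentation idea, enriching code that is already running with logging statements instead of redeploying, can be sketched in a few lines. The thesis targets Java EE and Spring; the Python sketch below is only an analogy for the mechanism, with all names invented for illustration:

```python
# Analogy sketch: wrap a live function with logging at runtime,
# without stopping or redeploying anything.

import functools
import logging

logging.basicConfig(level=logging.DEBUG)

def instrument(func):
    """Return a wrapper that logs every call and return value."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.debug("call %s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logging.debug("return %s -> %r", func.__name__, result)
        return result
    return wrapper

def handle_request(path):
    return f"200 OK {path}"

# "Inject" logging into the running handler; its behavior is unchanged:
handle_request = instrument(handle_request)
handle_request("/orders")
```

In the Java world the equivalent is done via bytecode manipulation or proxies, which is precisely what the two platform-specific tools differ in.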
By now, GPUs have become powerful general-purpose processors that have found their way not only into desktop systems but also into supercomputers. To use GPUs efficiently, one needs to understand their basic architecture and their limitations. We take a look at how GPUs evolved and how they differ from CPUs to gain a deeper understanding of the workloads well suited for GPUs.
Talking about highly scalable and reliable systems, issues like logging and monitoring are often disregarded. However, being able to manage today's software systems absolutely requires deep knowledge about the current state of applications as well as the underlying infrastructure. Extracting and preparing debug information as well as various metrics in a fast and clearly arranged manner is an essential precondition for handling this task.
Since we at Bertsch Innovation GmbH also face increasing requirements concerning MediaCockpit, one of our core products, we decided to establish a centralized logging infrastructure in order to keep up with the application's evolution towards a more and more distributed system.
In this paper, I describe the steps I have taken to set up a functioning logging tool stack consisting of Elasticsearch, Logstash, and Kibana (usually abbreviated as ELK stack). Besides outlining proper setup and configuration, I also discuss possible pitfalls as well as custom adjustments made when ELK did not meet our demands.
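At the heart of such a centralized logging pipeline are structured JSON events that Logstash forwards to Elasticsearch. The sketch below builds that kind of event and frames it for the Elasticsearch bulk API by hand; the index and field names are assumptions for illustration, not the production setup described in the paper:

```python
# Build Elasticsearch _bulk API payload lines for structured log events.
# Each event becomes an action line followed by its JSON source line.

import json
from datetime import datetime, timezone

def bulk_lines(events, index="app-logs"):
    """Yield action/source line pairs for the Elasticsearch bulk API."""
    for event in events:
        yield json.dumps({"index": {"_index": index}})
        yield json.dumps(event)

event = {
    "@timestamp": datetime(2016, 1, 1, tzinfo=timezone.utc).isoformat(),
    "level": "ERROR",
    "message": "connection pool exhausted",
    "host": "app-01",
}
# The bulk body is newline-delimited JSON, terminated by a newline:
payload = "\n".join(bulk_lines([event])) + "\n"
```

In practice Logstash produces these documents from its input filters, and Kibana queries the resulting index for visualization.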
Innovative architecture and networks for learner-centred, local education and lifelong learning are receiving growing attention. Yet practitioners still require practical guidance, given the challenge of involving and interacting with new and diverse stakeholder groups, such as architects and politicians, or the community at large. With the goal of advancing scientific and practical frameworks, this thesis examines how stakeholders in 'education-centred urban development' (ECUD) can be helped to accomplish mutual understanding and more effective communication and interaction during planning.
Assuming the organizational theory of 'networked governance' (NG), a literature review is conducted across 'institutional learning space development' (ILSD) and the 'learning city/region' (LCR) discourse in order to discuss stakeholder involvement in planning. Six key themes are summarized and tested against a case study of 'Hume Global Learning Village' (HGLV), Australia, using a document analysis and expert online interviews.
The review finds the following themes: First, the concepts of ILSD and ECUD can be very abstract to comprehend, and stakeholders' varied understandings of 'learning' demand an open, continuous dialogue. Next, individual leadership needs to initiate a vision and multiply buy-in and followers. Securing sustainable funding sources is a precondition for fostering participation and commitment. Long-standing organizational 'silo thinking' has to be opened up towards cultures of sharing, collaboration, and innovation. Facilitation capacities are crucial to provide an inclusive planning process where consent and commitment are fostered. Lastly, change and positive learning effects may take a long time to show; this expectation has to be internalized by all stakeholders.
Despite the limited number of optimal interview sources, the case study confirms the themes and illustrates that strong leadership can secure the other conditions. This suggests that the six themes can serve as a framework for practitioners to conduct successful stakeholder involvement in planning. However, they are not unique among good-practice literature. Moreover, the review reveals a literature gap concerning how a suitable degree of stakeholder involvement can be selected. It is recommended to consolidate the various alternative planning processes and models, and to further triangulate local experiences, in order to close this gap and derive more comprehensive and universal tools for practitioners.
This bachelor thesis describes a prototypical implementation of a 3D user interface for intuitive real-time set editing in virtual production. The approach is evaluated qualitatively by a user group that tested the device and filled in a questionnaire. The share of virtual elements created with computer graphics technology has been growing steadily in all areas of the entertainment industry over the past years. Nevertheless, editing virtual elements can still be a costly process in terms of time and money. With the appearance of new input devices and improved tracking technologies, it is interesting to evaluate whether a real-time editing process could improve this situation. Currently bound to experts on special workstations, editing could become a more intuitive, real-time workflow, enabling everybody on a film set to influence the digital editing process and to work collaboratively on a scene consisting of virtual and real elements.
Evaluating a forthcoming international bibliographic research database in the form of a Zotero group
(2014)
Purpose – In order to connect the various international research hubs on physical learning spaces, a large-scale research database has been developed using a Zotero group. Hitherto, its interface and collection index had never been examined for usability. This pilot study attempts to discover what retrieval strategy combinations users apply in the Zotero web interface and how satisfied they are with the usability and the retrieval outcomes. The results shall not just generate ideas for the improvement of the studied database, but also provide inspiration for similar Zotero projects. Design/methodology/approach – This pilot study is designed as a qualitative field study. A sample of the project's actual target group was contacted around Copenhagen, Denmark. During a home or office visit, a natural search task was defined and executed by the participant on a laptop provided by the instructor. Using TechSmith's Morae usability software, screen, webcam, and voice data were recorded and analyzed; after the recording, a usability survey was filled out. Findings – Despite having only two samples, the participants use and judge the three search methods of Zotero differently. Most participants favor the free-text search method (1), although the retrieval results are unsatisfactory. In a large-scale, multi-language collection like the assessed database, browsing in hierarchical categories (2) or faceting results using a tag cloud (3) may be more effective and efficient, but only a minority of participants understands and applies these methods. Furthermore, it appears that the interface lacks intuitive navigation, especially for the non-scientific community. Novice Zotero users not familiar with the concepts of bibliographic databases may fail to differentiate between the Zotero website (the service provider) and the Zotero group (the database, the actual subject of the study). Originality/value – This is the first published usability study of a large-scale Zotero group.
It introduces usability issues regarding search functions and the web interface. Besides drawing inspiration from a similar Zotero bibliography, which uses RSS feeds and API interfaces, a few practical ways to enhance the user search experience are suggested. The pilot study concludes with suggestions for further research, designed for more reliable participant scales.
The publication culture on Urban Agriculture (UA) is almost exclusively inhabited by idealist and practitioner proponents. Foremost, economists (oftentimes influenced by Marxism) dare to critique the sustainability of the movement. In short, the people who start a UA project eventually require help from their city through recognition and policy support. The full breadth of intentions of these people is principally unknown, and this in turn hinders policy design. Investigating these rationales (using Skot-Hansen's Five Es (2005)) is the scope of this paper. It identifies a number of necessary policy changes, but ultimately pinpoints that it requires the involvement of activists, NGOs, and individual UA champions to raise awareness and to participate in policy design and implementation. It is found that, in one way or another, most UA proponents' motives can be traced back to a facet of community empowerment. Amongst the variety of rationales, especially the non-capitalist culture of UA is said to further its sustainability (not just in economic terms), because it brings forth a culture that embodies said empowerment and shapes a democratic, inclusive sharing community. Hence, UA is identified as a strategy for urban cultural regeneration.
In order to publish Linked Open Data, the source data has to be prepared. This term paper introduces basic procedures of this publishing process. The focus is on the theoretical process of publishing, aspects of technical realization of this process through different approaches and the description of a first attempt to put the publishing process into practice with some sample data.
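A very first preparation step when publishing Linked Open Data is serialising source records as RDF triples, for instance in N-Triples syntax. The sketch below shows that step with made-up URIs; the term paper's actual sample data and vocabularies may differ:

```python
# Serialise one RDF statement in N-Triples syntax: URI terms are wrapped
# in angle brackets, literal objects are quoted, and the line ends in " ."

def ntriple(subject, predicate, obj):
    def term(t):
        return f"<{t}>" if t.startswith("http") else f'"{t}"'
    return f"<{subject}> <{predicate}> {term(obj)} ."

line = ntriple(
    "http://example.org/book/1",
    "http://purl.org/dc/terms/title",
    "Linked Open Data Basics",
)
print(line)
```

Tools like OpenRefine with its RDF extension automate exactly this mapping from tabular source data to triples.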
The goal of this thesis is to develop a novel type of virtual heritage medium that utilises the combined immersive and engaging potential of interactive mixed reality environments and spatial narratives. Concretely, this is achieved through depth-sensitive compositing of real-time 3D content into the live video of a tracked smartphone. The user can explore this mixed reality environment, watch the actions of staged 3D characters, and interact with them and with virtual artifacts. This medium would therefore provide possibilities for telling stories in direct context with existing environments, along with an immersive and engaging media experience. This work mainly examines how this medium can be used as an edutainment medium in sites of cultural heritage. It establishes the technical requirements and realisation possibilities for an implementation in Unity on iPhone 5 / iOS 7. Subsequently, a prototype is implemented in order to prove the research results.
With the increasing use of visual effects in feature films, TV series and commercials, flexibility becomes essential to create astonishing pictures while meeting tight production schedules. Deep image compositing introduces new possibilities that increase flexibility and solve old problems of depth based compositing. The following thesis gives an introduction to deep image compositing, illustrating its power and analyzing its use in a modern visual effects pipeline.
Websites and web applications, whether they represent shopping systems, on-demand services, or social networks, have something in common: data must be stored somewhere and somehow. This job can be achieved by various solutions with very different performance characteristics, e.g. based on simple data files, databases, or high-performance RAM storage solutions. For today's popular web applications it is important to handle database operations in a minimum amount of time, because they are struggling with a vast increase in visitors and user-generated data. Therefore, a major requirement for a modern database application is to handle huge data (also called big data) in a short amount of time and to provide high availability for that data. A very popular database application in the open source community is MySQL, which was originally developed by a Swedish company called MySQL AB and is now maintained by Oracle. MySQL is shipped in a bundle with the Apache web server and therefore has a large distribution. This database is easily installed, maintained, and administrated. By default, MySQL is shipped with the MyISAM storage engine, which has good performance on read requests, but poor performance on massively parallel write requests. With appropriate tuning of various database settings, special architecture setups (replication, partitioning, etc.), or other storage engines, MySQL can be turned into a fast database application. For example, Wikipedia uses MySQL for its backend data storage. In the lecture Ultra Large Scale Systems and System Engineering taught by Walter Kriha at Media University Stuttgart, the question whether a MySQL database application can handle more than 3000 database requests per second came up at some point. Inspired by this issue, I set out to find out whether MySQL is able to handle such an amount of requests per second.
At that time I also read about the high-availability and scalability solution MySQL Cluster, so it was the right time to test the performance of that solution. In this paper I describe how to set up a MySQL database server with the additional MySQL Cluster storage engine ndbcluster and how to configure a database cluster. In addition, I execute some database tests on that cluster to prove that it is possible to get a throughput of >= 3000 read requests per second with a MySQL database.
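The requests-per-second figure at the heart of such a test comes from a simple timing harness. The sketch below uses Python's built-in sqlite3 as a stand-in for the MySQL Cluster server, so the absolute numbers are not comparable to the paper's benchmark; only the throughput arithmetic carries over:

```python
# Measure read throughput: issue N point queries and divide by elapsed time.
# sqlite3 is only a stand-in here; a real benchmark would target the
# MySQL server over its client protocol.

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, "x") for i in range(100)])

requests = 3000
start = time.perf_counter()
for i in range(requests):
    conn.execute("SELECT v FROM t WHERE id = ?", (i % 100,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{requests / elapsed:.0f} read requests/second")
```

A realistic version would also spread the requests over many concurrent client threads, since single-connection throughput says little about a clustered server.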
Secure Search
(2011)
Nowadays it is easy to track web users across websites: cookies, web bugs, or browser fingerprints are very useful techniques to achieve this. The data collected can be used to derive a specific user profile. This information can be used by third parties to present personalized advertisements while surfing the web. In addition, a potential attacker could monitor all web traffic of a user, e.g. their search queries. Consequently, the attacker knows the intentions of the web user and of the company they are working for. As competitors may be very interested in such information, this could lead to a new form of industrial espionage. In this paper I present some of the techniques commonly used. I illustrate some problems caused by the usage of insecure transmission lines and compromised search engines. Some of the camouflage techniques presented may help to protect the web user's identity. This paper is based on the lecture "Secure Systems" taught by Professor Walter Kriha at the Media University (HdM) Stuttgart.
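Browser fingerprinting works because the attributes a browser reveals anyway combine into a nearly unique identifier. A minimal sketch of the idea (the attribute set is illustrative and far smaller than what real trackers collect):

```python
# Derive a stable fingerprint by hashing a canonical serialisation of
# browser attributes; the same attributes always yield the same hash.

import hashlib

def fingerprint(attrs):
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080",
    "timezone": "UTC+1",
    "fonts": "Arial,Helvetica,Times",
}
print(fingerprint(browser))
```

Because no cookie is stored, clearing browser state does not change the fingerprint, which is what makes this technique harder to defend against than classic cookies.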
This paper gives an overview of the advantages and weaknesses of distributed source code review tools in software engineering. We cover this topic with a specific focus on Google's freely available software Gerrit. In chapter 1 we discuss how code reviews are generally useful for groups of programmers. We lay out how traditional approaches differ from distributed setups, where developers may be widely distributed geographically or where meetings are otherwise contraindicated. In chapter 2 we discuss how users can interact with Gerrit, and chapter 3 covers some basic knowledge for those who have to administer one or more Gerrit installations. Finally, chapter 4 summarizes key points and gives an outlook on the future role of distributed code review.
The Eclipse rich client platform, as a container for component-oriented plugins, provides a framework to host plugins whose look and feel embed well in a client workstation. J2EE client containers provide a runtime environment for applications integrated in a multi-tier architecture, which therefore have to access Java 2 Enterprise Edition (J2EE) services. Combining the two container approaches creates a new runtime environment for application clients, which appear in the user interface style of Eclipse and are able to use the J2EE services. This diploma thesis discusses concepts for combining Eclipse and the client container.
This report offers a survey of the methods that are being deployed at leading digital libraries to assess the use and usability of their online collections and services. Focusing on 24 Digital Library Federation member libraries, the study's author, Distinguished DLF Fellow Denise Troll Covey, conducted numerous interviews with library professionals who are engaged in assessment. The report describes the application, strengths, and weaknesses of assessment techniques that include surveys, focus groups, user protocols, and transaction log analysis. Covey's work is also an essential methodological guidebook. For each method that she covers, she is careful to supply a definition, explain why and how libraries use the method, what they do with the results, and what problems they encounter. The report includes an extensive bibliography on more detailed methodological information, and descriptions of assessment instruments that have proved particularly effective.
Free Culture : how big media uses technology and the law to lock down culture and control creativity
(2004)
The struggle that rages just now centers on two ideas: piracy and property. My aim in this book's next two parts is to explore these two ideas. My method is not the usual method of an academic. I don't want to plunge you into a complex argument, buttressed with references to obscure French theorists, however natural that is for the weird sort we academics have become. Instead I begin in each part with a collection of stories that set a context within which these apparently simple ideas can be more fully understood. The two sections set up the core claim of this book: that while the Internet has indeed produced something fantastic and new, our government, pushed by big media to respond to this something new, is destroying something very old. Rather than understanding the changes the Internet might permit, and rather than taking time to let common sense resolve how best to respond, we are allowing those most threatened by the changes to use their power to change the law and, more importantly, to use their power to change something fundamental about who we have always been. We allow this, I believe, not because it is right, and not because most of us really believe in these changes. We allow it because the interests most threatened are among the most powerful players in our depressingly compromised process of making law. This book is the story of one more consequence of this form of corruption, a consequence to which most of us remain oblivious.
This diploma thesis describes the process of creating a website for an association. The project is assessed, and the target group and the needs of the association are defined in order to develop a concept for the website. The thesis then analyzes scheduling, activity planning, and the concept of team management in this context. The third part describes the decisions made regarding screen and interface design. The fourth part contains information about deploying the site on the web, about registering it with search engines, and about keeping it up to date.