Web Accessibility is becoming increasingly important. Guidelines and corresponding tests were created to ensure Web Accessibility for everyone, and detailed reports are produced to advise content creators on this topic. However, with their very specific technical vocabulary and their sheer length, these reports can be even more elaborate than the guidelines themselves. This makes it hard, especially for non-experts, to understand what the results mean and to know where to start.
StroCards is a functional prototype developed to help viewers of Web Accessibility reports understand their contents more easily. One way it does this is by sorting and filtering the identified accessibility issues. It can generate charts from the number of failed, passed and not applicable success criteria that highlight aspects not explained in the report itself, and it can show the user how well each tested website performs in terms of accessibility with respect to different responsibilities. One of its key features is generating individual reports for individual responsibilities, such as visual design. With this functionality, a designer, for example, could receive a list of issues that are relevant to them without being overwhelmed by issues they cannot solve. This makes handling the report more efficient. Besides presenting the report by project role, StroCards can take a more human-centered and empathetic approach by showing which user groups are affected, and therefore excluded, by accessibility issues on the website. This makes the long list of guidelines more tangible, especially for non-experts.
In the process of developing StroCards, some design decisions were made together with a group of experts. The implemented functional prototype was tested in a qualitative and quantitative user study and was perceived as easier to understand and better to work with.
A tool like this could greatly help people maintaining, creating, and developing websites to put these Web Accessibility guidelines into practice and consequently minimize the exclusion of people from websites.
Head Mounted Displays (HMDs) are increasingly used in various industries. But apart from the industrial environment, the potential of HMDs in a private environment, such as at home, has been relatively unexplored so far. What daily tasks can these devices help with, in the home kitchen for example?
The aim of this thesis is to obtain knowledge about the usefulness of such an HMD, the HoloLens, in combination with an application, while following a new recipe. To this end, a prototype application for the HoloLens was developed which guides a user through the cooking of a sushi burger using multimedia content.
Using a mixed-method design, consisting of quantitative and qualitative methods, the HoloLens in combination with the application was evaluated by 14 participants.
The weight of the device was not the only problem for users. The test also revealed that the display darkens the view and that participants tend to look below the glasses. An advantage, however, is being able to reach the next cooking step hands-free while always having in sight what needs to be done next. The application itself also received positive feedback: through voice control, the user communicates with a character that guides them through the recipe using videos and text.
If the technical characteristics of HMD devices improve in the future, an application in this context will be advantageous for simplifying the learning of a new recipe. Such a device, in combination with an application, could also help people with early- to middle-stage cognitive impairment, as well as blind people, to cook.
Websites and web applications, whether they represent shopping systems, on-demand services or social networks, have something in common: data must be stored somewhere and somehow. This job can be handled by various solutions with very different performance characteristics, e.g. based on simple data files, databases or high-performance RAM storage solutions. For today's popular web applications it is important to handle database operations in a minimum amount of time, because these applications are struggling with a vast increase in visitors and user-generated data. Therefore, a major requirement for modern database applications is to handle huge amounts of data (also called big data) in a short time and to provide high availability for that data.

A very popular database application in the open source community is MySQL, which was originally developed by a Swedish company called MySQL AB and is now maintained by Oracle. MySQL is commonly shipped in a bundle with the Apache web server and therefore has a large distribution. This database is easily installed, maintained and administrated. By default, MySQL ships with the MyISAM storage engine, which has good performance on read requests but a poor one on massively parallel write requests. With appropriate tuning of various database settings, special architecture setups (replication, partitioning, etc.) or other storage engines, MySQL can be turned into a fast database application; Wikipedia, for example, uses MySQL for its backend data storage.

In the lecture Ultra Large Scale Systems and System Engineering, taught by Walter Kriha at Media University Stuttgart, the question "Can a MySQL database application handle more than 3000 database requests per second?" came up at some point. Inspired by this question, I set out to find out whether MySQL is able to handle such an amount of requests per second.
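The 3000-requests-per-second question can be made concrete with a small measurement harness. The sketch below is only an illustration, not the tooling used for the actual benchmarks; `run_query` is a placeholder for whatever callable executes one read request against a live server:

```python
import time

def measure_read_throughput(run_query, total_requests=3000):
    """Fire `total_requests` sequential requests and return requests/second.

    `run_query` is any zero-argument callable; against a real server it
    would execute something like a `SELECT` via a database cursor. Here
    it is deliberately left abstract so the harness stays self-contained.
    """
    start = time.perf_counter()
    for _ in range(total_requests):
        run_query()
    elapsed = time.perf_counter() - start
    return total_requests / elapsed

# With a no-op stand-in the harness measures only its own overhead; the
# interesting number comes from plugging in a real MySQL query.
rate = measure_read_throughput(lambda: None)
print(rate > 3000)  # True
```

Real load tests would additionally use many parallel clients, since a single sequential connection rarely saturates a database server.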
At that time I had also read about the high availability and scalability solution MySQL Cluster, so it was the right time to test the performance of that solution as well. In this paper I describe how to set up a MySQL database server with the additional MySQL Cluster storage engine ndbcluster and how to configure a database cluster. In addition, I run several database tests on that cluster to prove that it is possible to achieve a throughput of >= 3000 read requests per second with a MySQL database.
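As a rough orientation, a minimal cluster setup of the kind described here consists of a cluster configuration file (config.ini) on the management node plus an `ndbcluster` entry in each SQL node's my.cnf. The host names below are placeholders, and a real deployment needs further tuning (memory limits, data directories, etc.):

```ini
# config.ini on the management node: two data nodes, one SQL node
[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
HostName=mgm-host

[ndbd]
HostName=data-host-1

[ndbd]
HostName=data-host-2

[mysqld]
HostName=sql-host
```

```ini
# my.cnf on the SQL node: enable the ndbcluster engine and point it
# at the management node
[mysqld]
ndbcluster
ndb-connectstring=mgm-host
```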
Secure Search
(2011)
Nowadays it is easy to track web users across websites: cookies, web bugs and browser fingerprints are very useful techniques to achieve this. The collected data can be used to derive a specific user profile, which third parties can use to present personalized advertisements while the user surfs the web. In addition, a potential attacker could monitor all web traffic of a user, e.g. their search queries. As a result, the attacker learns the intentions of the web user and of the company they work for. As competitors may be very interested in such information, this could lead to a new form of industrial espionage. In this paper I present some of the techniques commonly used. I illustrate some problems caused by the use of insecure transmission lines and compromised search engines. Some of the camouflage techniques presented may help to protect the web user's identity. This paper is based on the lecture "Secure Systems" taught by Professor Walter Kriha at the Media University (HdM) Stuttgart.
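As a toy illustration of one of these techniques, browser fingerprinting can work entirely passively: the HTTP headers a browser sends are often distinctive enough that hashing them yields a stable pseudo-identifier, with no cookie involved. The sketch below is a deliberate simplification (real fingerprinting also draws on fonts, canvas rendering, installed plugins, and more):

```python
import hashlib

def fingerprint(headers):
    """Derive a stable pseudo-identifier from a browser's HTTP headers.

    Header values such as User-Agent and Accept-Language differ enough
    between browser installations that their hash can re-identify a
    visitor across requests without setting any cookie.
    """
    # Sort by header name so the identifier is independent of header order.
    material = "|".join(f"{k.lower()}:{v}" for k, v in sorted(headers.items()))
    return hashlib.sha256(material.encode("utf-8")).hexdigest()[:16]

browser_a = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
             "Accept-Language": "de-DE,de;q=0.9"}
browser_b = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0)",
             "Accept-Language": "en-US,en;q=0.8"}

print(fingerprint(browser_a) == fingerprint(dict(browser_a)))  # True
print(fingerprint(browser_a) == fingerprint(browser_b))        # False
```

This also hints at why such tracking is hard to defeat: the identifier is derived from data the browser must send anyway.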
This paper gives an overview of the advantages and weaknesses of distributed source code review tools in software engineering. We cover this topic with a specific focus on Google's freely available software Gerrit. In chapter 1 we discuss how code reviews are generally useful for groups of programmers. We lay out how traditional approaches differ from distributed setups in which developers may be widely distributed geographically or in which in-person meetings are otherwise impractical. In chapter 2 we discuss how users can interact with Gerrit, and chapter 3 covers basic knowledge for those who have to administer one or more Gerrit installations. Finally, chapter 4 summarizes key points and gives an outlook on the future role of distributed code review.