In order to publish Linked Open Data, the source data has to be prepared. This term paper introduces the basic steps of this publishing process. The focus is on the theoretical publishing process, on aspects of the technical realization of this process through different approaches, and on the description of a first attempt to put the publishing process into practice with some sample data.
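To give an impression of what such a preparation step can look like in practice, the following is a minimal sketch (not taken from the term paper) that turns a single sample record into RDF triples and serializes them as Turtle with the Python rdflib library; the namespace, resource identifiers and properties are placeholders chosen for this example.

```python
# Minimal sketch: converting one sample record into Linked Data (RDF/Turtle).
# All URIs, namespaces and property names below are illustrative placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF, XSD

EX = Namespace("http://example.org/resource/")   # hypothetical base namespace

g = Graph()
g.bind("ex", EX)
g.bind("foaf", FOAF)

# A single sample record, e.g. one row exported from a spreadsheet or database.
record = {"id": "person-1", "name": "Jane Doe", "birth_year": 1980}

subject = URIRef(EX[record["id"]])
g.add((subject, RDF.type, FOAF.Person))
g.add((subject, FOAF.name, Literal(record["name"])))
g.add((subject, EX.birthYear, Literal(record["birth_year"], datatype=XSD.gYear)))

# Serialize the prepared data as Turtle, ready to be published as Linked Open Data.
print(g.serialize(format="turtle"))
```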
Websites and web applications, whether they are shopping systems, on-demand services or social networks, have something in common: data must be stored somewhere and somehow. This job can be handled by various solutions with very different performance characteristics, e.g. simple data files, databases or high-performance RAM storage solutions. For today's popular web applications it is important to handle database operations in a minimum amount of time, because they are struggling with a vast increase in visitors and user-generated data. Therefore, a major requirement for modern database applications is to handle huge amounts of data (also called big data) in a short amount of time and to provide high availability for that data.

A very popular database application in the open source community is MySQL, which was originally developed by the Swedish company MySQL AB and is now maintained by Oracle. MySQL is commonly bundled with the Apache web server and therefore is widely distributed. The database is easy to install, maintain and administrate. By default MySQL ships with the MyISAM storage engine, which performs well on read requests, but poorly on massively parallel write requests. With appropriate tuning of various database settings, special architecture setups (replication, partitioning, etc.) or other storage engines, MySQL can be turned into a fast database application. Wikipedia, for example, uses MySQL for its backend data storage.

In the lecture Ultra Large Scale Systems and System Engineering taught by Walter Kriha at the Media University Stuttgart, the question "Can a MySQL database application handle more than 3000 database requests per second?" came up at some point. Inspired by this question, I set out to find out whether MySQL is able to handle such an amount of requests per second. At that time I had also read about the high availability and scalability solution MySQL Cluster, so it was the right time to test the performance of that solution. In this paper I describe how to set up a MySQL database server with the additional MySQL Cluster storage engine ndbcluster and how to configure a database cluster. In addition, I execute some database tests on that cluster to prove that it is possible to get a throughput of >= 3000 read requests per second with a MySQL database.
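To make the kind of measurement concrete, the following is a minimal read-throughput sketch in Python; it is not the test setup used in the paper (which relies on a dedicated load-testing setup), and the host, credentials, table and query are placeholder assumptions.

```python
# Minimal sketch of a read-throughput measurement against a MySQL server.
# Host, credentials, database, table and query are illustrative placeholders.
import time
import threading
import mysql.connector  # pip install mysql-connector-python

THREADS = 8            # number of parallel client connections
DURATION = 10.0        # measurement window in seconds
counts = [0] * THREADS

def worker(idx: int) -> None:
    conn = mysql.connector.connect(
        host="127.0.0.1", user="bench", password="secret", database="benchdb"
    )
    cur = conn.cursor()
    end = time.perf_counter() + DURATION
    while time.perf_counter() < end:
        cur.execute("SELECT value FROM kv WHERE id = 42")  # simple primary-key read
        cur.fetchall()
        counts[idx] += 1
    cur.close()
    conn.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(counts)
print(f"{total} reads in {DURATION:.0f}s -> {total / DURATION:.0f} requests/second")
```

Dividing the total number of completed queries by the measurement window gives the requests-per-second figure the >= 3000 target refers to; in a serious benchmark the client load would be generated by a tool built for that purpose rather than a short script like this.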
Secure Search
(2011)
Nowadays it is easy to track web users across websites: cookies, web bugs or browser fingerprints are very useful techniques to achieve this. The collected data can be used to derive a specific user profile. This information can be used by third parties to present personalized advertisements while the user is surfing the web. In addition, a potential attacker could monitor all web traffic of a user, e.g. their search queries. As a result, the attacker knows the intentions of the web user and of the company they work for. As competitors may be very interested in such information, this could lead to a new form of industrial espionage. In this paper I present some of the tracking techniques commonly used. I illustrate some problems caused by the usage of insecure transmission lines and compromised search engines. Some of the camouflage techniques presented may help to protect the web user's identity. This paper is based on the lecture "Secure Systems" taught by Professor Walter Kriha at the Media University (HdM) Stuttgart.
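As a small illustration of one of the tracking techniques mentioned above (not code from the paper), the following sketch shows how a server-side component could derive a simple identifier from HTTP request headers; the chosen header set and hashing scheme are assumptions made for this example, and real browser fingerprinting combines many more signals.

```python
# Minimal sketch of server-side browser fingerprinting from HTTP headers.
# The chosen headers and hashing scheme are illustrative assumptions only;
# real fingerprinting combines many more signals (canvas, fonts, plugins, ...).
import hashlib

def fingerprint(headers: dict) -> str:
    # Headers that tend to be stable for a given browser installation.
    keys = ("User-Agent", "Accept", "Accept-Language", "Accept-Encoding")
    raw = "|".join(headers.get(k, "") for k in keys)
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

# Two requests with identical headers yield the same identifier,
# allowing a tracker to recognize the browser even without cookies.
request_headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/115.0",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "de-DE,de;q=0.9,en;q=0.8",
    "Accept-Encoding": "gzip, deflate, br",
}
print(fingerprint(request_headers))
```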