This diploma thesis describes the process of creating a website for an association. The project is assessed, and the target group and the needs of the association are defined in order to work out a concept for the website. Then the scheduling, the activity planning, and the concept of team management are analysed in this context. The third part describes the decisions made in the screen and interface design. The fourth part provides information about deploying the site on the web, about methods for registering it with search engines, and about keeping it up to date.
Free Culture : how big media uses technology and the law to lock down culture and control creativity
(2004)
The struggle that rages just now centers on two ideas: piracy and property. My aim in this book's next two parts is to explore these two ideas. My method is not the usual method of an academic. I don't want to plunge you into a complex argument, buttressed with references to obscure French theorists, however natural that is for the weird sort we academics have become. Instead I begin in each part with a collection of stories that set a context within which these apparently simple ideas can be more fully understood. The two sections set up the core claim of this book: that while the Internet has indeed produced something fantastic and new, our government, pushed by big media to respond to this something new, is destroying something very old. Rather than understanding the changes the Internet might permit, and rather than taking time to let common sense resolve how best to respond, we are allowing those most threatened by the changes to use their power to change the law and, more importantly, to use their power to change something fundamental about who we have always been. We allow this, I believe, not because it is right, and not because most of us really believe in these changes. We allow it because the interests most threatened are among the most powerful players in our depressingly compromised process of making law. This book is the story of one more consequence of this form of corruption, a consequence to which most of us remain oblivious.
The Eclipse Rich Client Platform, as a container for component-oriented plugins, provides a framework for hosting plugins whose look and feel blends well into a client workstation. A J2EE client container provides a runtime environment for applications that are integrated into a multi-tier architecture and therefore have to access the services of the Java 2 Enterprise Edition (J2EE). Combining the two container approaches creates a new runtime environment for application clients that appear in the user-interface style of Eclipse and are able to use J2EE services. This diploma thesis discusses concepts for combining Eclipse and the client container.
This report offers a survey of the methods that are being deployed at leading digital libraries to assess the use and usability of their online collections and services. Focusing on 24 Digital Library Federation member libraries, the study's author, Distinguished DLF Fellow Denise Troll Covey, conducted numerous interviews with library professionals who are engaged in assessment. The report describes the application, strengths, and weaknesses of assessment techniques that include surveys, focus groups, user protocols, and transaction log analysis. Covey's work is also an essential methodological guidebook. For each method she covers, she is careful to supply a definition, explain why and how libraries use the method, describe what they do with the results, and note what problems they encounter. The report includes an extensive bibliography covering more detailed methodological information, as well as descriptions of assessment instruments that have proved particularly effective.
By now GPUs have become powerful general-purpose processors that have found their way not only into desktop systems but also into supercomputers. To use GPUs efficiently, one needs to understand their basic architecture and their limitations. We take a look at how GPUs evolved and how they differ from CPUs in order to gain a deeper understanding of the workloads that are well suited for GPUs.
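The kind of workload meant here can be illustrated with a small, hedged sketch: a large elementwise operation is data-parallel and maps well onto the many simple cores of a GPU, while branch-heavy serial code does not. The sketch assumes the optional CuPy library as a stand-in for direct GPU programming and is illustrative only, not part of the surveyed material.

```python
# Illustrative sketch of a data-parallel workload that suits GPUs well.
# Assumes NumPy and, optionally, CuPy; falls back to the CPU otherwise.
import numpy as np

try:
    import cupy as cp          # GPU array library with a NumPy-like API
    xp = cp                    # run on the GPU if CuPy is available
except ImportError:
    xp = np                    # otherwise run the same code on the CPU

# A large elementwise computation: every output element is independent,
# so thousands of GPU threads can work on it at the same time.
a = xp.arange(10_000_000, dtype=xp.float32)
b = xp.sqrt(a) * 2.0 + 1.0     # one massively parallel operation

print(float(b[:5].sum()))      # bring a small result back to the host
```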
This paper gives an overview of the advantages and weaknesses of distributed source-code review tools in software engineering. We cover this topic with a specific focus on Google's freely available software Gerrit. In chapter 1 we discuss how code reviews are generally useful for groups of programmers. We lay out how traditional approaches differ from distributed setups in which developers may be widely dispersed geographically or in which face-to-face meetings are otherwise impractical. In chapter 2 we discuss how users can interact with Gerrit, and chapter 3 covers some basic knowledge for those who have to administer one or more Gerrit installations. Finally, chapter 4 summarizes key points and gives an outlook on the future role of distributed code review.
Secure Search
(2011)
Nowadays it is easy to track web users among websites: cookies, web bugs or browser fingerprints are very useful techniques to achieve this. The data collected can be used to derive a specific user profile. This information can be used by third parties to present personalized advertisements while surfing the web. In addition a potential attacker could monitor all web traffic of an user e.g. its search queries. As a conclusion the attacker knows the intentions of the web user and of the company he is working for. As competitors maybe very interested in such information, this could lead to a new form of industrial espionage. In this paper I present some of the techniques commonly used. I illustrate some problems caused by the usage of insecure transmission lines and compromised search engines. Some camouflage techniques presented may help to protect the web users identity. This paper is a based on the lecture "Secure Systems" teached by Professor Walter Kriha at the Media University (HdM) Stuttgart.
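To make the fingerprinting technique mentioned above concrete, the following hedged Python sketch derives a naive "fingerprint" by hashing a few HTTP request headers. Real fingerprinting combines many more signals (fonts, plugins, canvas rendering), and the header values shown here are invented examples.

```python
# Naive illustration of browser fingerprinting: hashing a handful of
# request headers yields an identifier that is fairly stable per browser.
# The header values below are made-up examples.
import hashlib

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/4.0",
    "Accept-Language": "de-DE,de;q=0.8,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
}

def fingerprint(hdrs: dict) -> str:
    """Concatenate selected header values and hash them."""
    material = "|".join(f"{k}={v}" for k, v in sorted(hdrs.items()))
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

print(fingerprint(headers))  # the same browser tends to produce the same hash
```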
Websites or web applications, whether they represent shopping systems, on-demand services or social networks, have something in common: data must be stored somewhere and somehow. This job can be achieved by various solutions with very different performance characteristics, e.g. based on simple data files, databases or high-performance RAM storage solutions. For today's popular web applications it is important to handle database operations in a minimum amount of time, because they are struggling with a vast increase in visitors and user-generated data. Therefore, a major requirement for modern database applications is to handle huge data (also called big data) in a short amount of time and to provide high availability for that data. A very popular database application in the open-source community is MySQL, which was originally developed by a Swedish company called MySQL AB and is now maintained by Oracle. MySQL is often shipped in a bundle with the Apache web server and therefore has a wide distribution. This database is easily installed, maintained and administrated. By default MySQL ships with the MyISAM storage engine, which has good performance on read requests, but a poor one on massively parallel write requests. With appropriate tuning of various database settings, special architecture setups (replication, partitioning, etc.) or other storage engines, MySQL can be turned into a fast database application. For example, Wikipedia uses MySQL for their backend data storage. In the lecture "Ultra Large Scale Systems and System Engineering", taught by Walter Kriha at the Media University Stuttgart, the question "Can a MySQL database application handle more than 3000 database requests per second?" came up at some point. Inspired by this issue, I set out to find out whether MySQL is able to handle such an amount of requests per second. At that time I also read about the high-availability and scalability solution MySQL Cluster, and it was the right time to test the performance of that solution. In this paper I describe how to set up a MySQL database server with the additional MySQL Cluster storage engine ndbcluster and how to configure a database cluster. In addition, I execute some database tests on that cluster to prove that it is possible to get a throughput of >= 3000 read requests per second with a MySQL database.
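A minimal sketch of the kind of read benchmark described above, assuming a running SQL node reachable on localhost, a database called `test`, and a table `users` with an integer primary key `id`; the connection parameters and schema are illustrative assumptions, and a real measurement would use many parallel clients rather than a single loop.

```python
# Rough read-throughput check against a MySQL (Cluster) SQL node.
# Connection parameters, database and table names are assumptions.
import random
import time

import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="127.0.0.1", user="bench", password="secret", database="test"
)
cur = conn.cursor()

REQUESTS = 30_000
start = time.time()
for _ in range(REQUESTS):
    # Simple primary-key lookup, the cheapest kind of read request.
    cur.execute("SELECT name FROM users WHERE id = %s", (random.randint(1, 10_000),))
    cur.fetchone()
elapsed = time.time() - start

print(f"{REQUESTS / elapsed:.0f} read requests per second")
cur.close()
conn.close()
```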
With the increasing use of visual effects in feature films, TV series and commercials, flexibility becomes essential to create astonishing pictures while meeting tight production schedules. Deep image compositing introduces new possibilities that increase flexibility and solve old problems of depth based compositing. The following thesis gives an introduction to deep image compositing, illustrating its power and analyzing its use in a modern visual effects pipeline.
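The flexibility gain comes from keeping multiple depth samples per pixel instead of a single flattened value. As a hedged illustration (not the thesis' own code), the following sketch flattens one pixel's depth-sorted samples with the standard front-to-back "over" operation on premultiplied colours.

```python
# Illustrative flattening of one deep pixel: each sample carries a depth,
# a premultiplied colour and an alpha; samples are composited front to back
# with the "over" operator. Conceptual sketch, not production code.
from typing import List, Tuple

Sample = Tuple[float, Tuple[float, float, float], float]  # (depth, premultiplied RGB, alpha)

def flatten_deep_pixel(samples: List[Sample]) -> Tuple[Tuple[float, float, float], float]:
    out_rgb = [0.0, 0.0, 0.0]
    out_a = 0.0
    for _, rgb, a in sorted(samples, key=lambda s: s[0]):  # nearest sample first
        weight = 1.0 - out_a            # how much of this sample is still visible
        out_rgb = [o + weight * c for o, c in zip(out_rgb, rgb)]
        out_a += weight * a
    return ((out_rgb[0], out_rgb[1], out_rgb[2]), out_a)

# Two overlapping samples: a half-transparent red surface in front of an opaque green one.
pixel = [(1.0, (0.5, 0.0, 0.0), 0.5), (2.0, (0.0, 1.0, 0.0), 1.0)]
print(flatten_deep_pixel(pixel))        # -> ((0.5, 0.5, 0.0), 1.0)
```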
In order to publish Linked Open Data, the source data has to be prepared. This term paper introduces basic procedures of this publishing process. The focus is on the theoretical publishing process, on aspects of its technical realization through different approaches, and on the description of a first attempt to put the publishing process into practice with some sample data.
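As a hedged sketch of the final step, turning prepared sample data into publishable RDF, the following example uses the rdflib library; the namespace, sample resource, and property choices are illustrative assumptions, not the term paper's actual data.

```python
# Minimal Linked-Data publishing step: describe a sample resource as RDF
# triples and serialize them as Turtle. The example data is made up.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/people/")

g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

alice = EX["alice"]
g.add((alice, RDF.type, FOAF.Person))          # typed resource
g.add((alice, FOAF.name, Literal("Alice")))    # literal property

# Serialize the graph so it can be published, e.g. behind a dereferenceable URI.
print(g.serialize(format="turtle"))
```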