An Evaluation Platform for Semantic Web Technology

The Internet and the Web provide an environment for business-to-business and business-to-consumer exchanges in a virtual world where distance matters little: providers can advertise their products globally, and consumers from all over the world can gain access to these products.

The vision of the Semantic Web aims at enhancing today’s Web in order to provide a more efficient and reliable environment for both providers and consumers of Web resources (i.e. information and services). To deploy the Semantic Web, various technologies have been developed, such as machine-understandable description languages, language parsers, goal matchers, and resource composition algorithms. Since the Semantic Web is just emerging, each technology tends to make assumptions about different aspects of the Semantic Web’s architecture and use, such as the kind of applications that will be deployed, the resource descriptions, the consumers’ and providers’ requirements, and the existence and capabilities of other technologies.

In order to ensure the deployment of a robust and useful Semantic Web, and of the applications that will rely on it, several aspects of these technologies must be investigated: whether the assumptions they make are reasonable, whether the existing technologies allow the construction of a usable Semantic Web, and how to systematically identify which technology to use when designing new applications.

In this thesis we provide a means of investigating these aspects for service discovery, a critical task in the context of the Semantic Web. We propose a simulation and evaluation platform for evaluating current and future Semantic Web technology with different resource sets and consumer and provider requirements. For this purpose we provide a model to represent the Semantic Web, a model of the evaluation platform, an implementation of the evaluation platform as a multi-agent system…
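The service discovery mentioned above typically relies on a matchmaker that compares a consumer's goal against provider advertisements over a shared ontology (the thesis covers matching categories in Section 7.5.1). As a purely illustrative sketch, not the thesis's own algorithm, the following classifies a match by the subsumption relation between a requested and an advertised concept; the toy ontology and concept names are assumptions:

```python
# Illustrative degree-of-match classifier for semantic service discovery.
# The toy subclass ontology and the category names (exact/plugin/subsumes/fail)
# are assumptions for illustration, not taken from the thesis.

# Toy ontology as a child -> parent subclass map.
ONTOLOGY = {
    "BookSellingService": "SellingService",
    "CarSellingService": "SellingService",
    "SellingService": "Service",
}

def ancestors(concept):
    """Return all strict superclasses of a concept in the toy ontology."""
    result = set()
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        result.add(concept)
    return result

def degree_of_match(requested, advertised):
    """Classify how well an advertised concept matches a requested one."""
    if requested == advertised:
        return "exact"
    if advertised in ancestors(requested):
        return "plugin"    # advertisement is more general than the request
    if requested in ancestors(advertised):
        return "subsumes"  # request is more general than the advertisement
    return "fail"

print(degree_of_match("SellingService", "SellingService"))        # exact
print(degree_of_match("BookSellingService", "SellingService"))    # plugin
print(degree_of_match("SellingService", "BookSellingService"))    # subsumes
print(degree_of_match("BookSellingService", "CarSellingService")) # fail
```

A real matchmaker would reason over full description-logic class expressions rather than a flat subclass map, but the ranking idea is the same: prefer exact matches, then matches related by subsumption.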


1 Introduction
1.1 Research problem
1.2 Contributions
1.3 Outline of the thesis
1.4 Relation to previous published work of the author
2 The Semantic Web
2.1 Machine-understandable languages
2.2 Semantic annotation description languages
2.3 Semantic-aware tools
2.4 Semantic Web operations
2.5 Difficulties to overcome for deployment of the Semantic Web
3 Illustrative Scenario
4 Model for the Simulation and Evaluation Platform
4.1 Requirements for a simulation and evaluation platform
4.2 Platform model
4.2.1 Modeling assumptions about the Semantic Web
4.2.2 Modeling the Semantic Web
4.2.3 The platform
5 Implementation of the Simulation and Evaluation Platform
5.1 Support for the operation component
5.2 Evaluation support
5.3 Settings
5.4 A multi-agent system
5.5 Related work
6 sButler: a Requester Agent
6.1 Organizational workflows
6.2 A model for the integration of organizational workflows and the Semantic Web
6.3 sButler architecture
7.1 The problem of query generation for service retrieval
7.2 The DTP logical view
7.3 Definition of the DTP language extension
7.4 Ontologies for the DTP language extension
7.4.1 The MIT process handbook as a source of knowledge
7.4.2 A conceptual structure for the MIT process handbook
7.4.3 Specifying constraints on Activity concepts
7.4.4 Using the MIT process handbook as a knowledge resource on business processes
7.4.5 Using the DTP language extension to describe queries and Web services
7.5 Matchmaking with the DTP language extension
7.5.1 Matching categories
7.5.2 Different matchmaking approaches
7.5.3 The current matchmaking approaches are not satisfactory
7.5.4 Using the existing matchmaking algorithm
7.7 Comparison of OWL-DTP, OWL-S and WSMO
7.7.1 OWL-S
7.7.2 WSMO
7.7.3 Comparison method
7.7.4 Test suite
7.7.5 Expressing queries with OWL-S, WSMO, and OWL-DTP
7.7.6 Discussion
8 Prototype Implementation of sButler making use of OWL-DTP
9 Platform: Illustration of Use, Evaluation, and Lessons Learned
9.1 Illustration
9.1.1 Assumptions
9.1.2 Integrating the assumptions in the evaluation platform
9.1.3 Evaluation of the service discovery approach
9.1.4 Platform evaluation
9.2 Lessons learned
10 Conclusion and Future Work

Author: Åberg, Cécile

Source: Linköping University
