Goal-based testing of semantic web services

https://doi.org/10.1016/j.infsof.2016.11.011

Highlights

  • A framework for testing SWS from a user perspective based on WSMO goals.

  • Translation of WSMO goal specifications into B formal models.

  • Model checking for auto-generation of test cases from goal specifications.

  • Independent of the model-based generation, test case evaluation via mutation analysis.

  • Tool implementation to automate the proposed steps for the test case evaluation.

Abstract

Context: Recent years have witnessed growing interest in the semantic web and its related technologies. While various frameworks have been proposed for designing semantic web services (SWS), few of them aim at testing.

Objective: This paper investigates techniques for automatically deriving test cases from semantic web service descriptions based on the Web Service Modeling Ontology (WSMO) framework.

Method: WSMO goal specifications were translated into B abstract machines. Test cases were generated via model checking, using trap properties calculated from coverage criteria. Furthermore, we employed mutation analysis to evaluate the test suite. In this approach, the model-based test case generation and the code-based evaluation techniques are independent of each other, which yields more accurate measures of the testing results.

Results: We applied our approach to a real-world case study of the Amazon E-Commerce Service (ECS). The experimental results have validated the effectiveness of the proposed solution.

Conclusion: It is concluded that our approach is capable of automatically generating an effective set of test cases from the WSMO goal descriptions for SWS testing. The quality of test cases was measured in terms of their abilities to discover the injected faults at the code level. We implemented a tool to automate the steps for the mutation-based evaluation.

Introduction

The past decade has witnessed rapid progress in Web Services (WS), which provide an open platform with a set of XML-based standards. However, because XML is only a syntax-based notation, it cannot document complicated contextual relationships among web entities or be fully understood by software tools. To this end, semantic web services (SWS) add metadata and semantics to present WS technologies using ontology description languages. Since an ontology associates terms with precise meanings, software tools can understand and process SWS more easily and accurately.

While various frameworks have been proposed for the design and development of SWS, few of them aim at SWS testing. In this paper, we study the issue from an end user's perspective. Traditionally, a web service is written and provided by a web service provider, together with the corresponding specification, e.g., a WSDL or OWL-S description. The web service tester uses this specification to design test cases, which are then executed on the web service to produce a test result report. Observe that throughout this process the end user has no input at any phase. The reason is that the web service specification and the end user's goal are not distinguished from the very beginning in the design of the traditional web service architecture. As a consequence, the tester checks whether the web service meets the provider's own specification, rather than the user requirements. Since a provider's specification describes the behavior of the web service only from the provider's point of view, it may not coincide with the end user's understanding or expectation of that service. Thus, testing web services solely against the provider's specification cannot ensure their correctness from the end user's perspective, where correctness means that the service fulfils all user requests and thus completely satisfies the user requirements. The objective of any software testing approach is to ensure the correctness of the software for its end users; however, since present web service testing approaches are mainly based on the provider's specification, they may not ensure correctness for the end users.

In 2005, the Web Service Modeling Ontology (WSMO) was introduced as a conceptual framework for semantically describing all relevant aspects of SWS in order to facilitate the automation of services over the Web [4]. In particular, WSMO explicitly separates web service specifications from the user requirements with the help of the top level element, called the WSMO goal, in its conceptual framework. Thus, in the scenario of WSMO based SWS testing, we do have a chance to check if the web service really meets the user requirements.

In this paper, we focus on two parts of goal-based SWS testing: test case generation and evaluation. Specifically, we apply model checking to derive test cases for SWS from goal specifications. Since the quantity of test cases does not necessarily add value to the testing, it is important to measure their quality. Therefore, to make the subsequent test case evaluation independent of the model-based generation approach, we propose a mutation-based solution for this purpose. Furthermore, to make the approach scalable and practical, we implement a tool that automates all the evaluation steps. Finally, we applied our approach to the well-known Amazon E-Commerce Service, a web service that exposes Amazon's product data and E-Commerce functionality. The case study validated the effectiveness of the proposed solution and the usefulness of the implemented tool.

To realize goal-based testing of SWS, we use the WSMO framework. Fig. 1 gives an overview of the proposed solution. To understand how it works, we need to distinguish the three roles involved: the web service tester, the domain expert (or ontology engineer), and the web service user. The web service tester is responsible for web service testing and quality assurance, which is done by generating concrete test cases and executing them on the semantic web services. A domain expert or ontology engineer is a person who has knowledge of the domain related to the user's requirements and the expertise to model those requirements using a WSMO goal specification and the WSMO ontology language, i.e., WSML. Finally, a web service user uses the GUI-based tools that the WSMO framework provides to instantiate the generic goal specification and define his or her own concrete goal requests. Thus, users are only expected to specify their concrete requests with the help of tools; they are not expected to write a generic WSMO goal specification of their own.

In detail, the whole system proceeds as follows. An ontology engineer designs a goal specification, representing generic user requirements that can be fulfilled by a set of semantic web services, and publishes it on a public repository. For example, a goal Book A Flight represents the generic user requirements for booking a flight and can be fulfilled by several web services. Web service testers use such a goal specification to generate a set of test cases, which is then executed on the selected web services, and the test reports are recorded. A web service user finds the goal that meets his or her requirements and instantiates it using the GUI tools to define a concrete request, which is then submitted for achieving the goal. The semantic web service execution environment may select a particular web service that fulfills the user requirements; this selection can be based on the test result reports if the user specifies no other selection parameters. The user executes his or her request with the selected web service and gets the required response.

In the proposed solution, the SWS tester really is testing the selected web service from the user's perspective. On one hand, as shown in Fig. 2, all user requests are instantiated from a generic goal specification, and each instance represents a concrete user request; for instance, a user may, by mistake, instantiate a request with an invalid flight date. On the other hand, as shown in Fig. 3, since the SWS tester uses the same goal specification to generate the test suite, the tester can faithfully imitate the users and simulate their requests, including the critical ones. Thus, if a web service passes a sufficiently comprehensive test suite generated by the tester, we are confident that it will handle real-world user inputs well.
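The idea that tester and user instantiate the same goal template can be illustrated with a much-simplified Python sketch. The goal structure, field names, and precondition below are hypothetical stand-ins for a WSMO "Book A Flight" goal, not the paper's WSML artifacts: the generic goal fixes the request shape and its validity condition, and each instance is one concrete request, including deliberately invalid ones such as a past flight date.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical stand-in for a generic "Book A Flight" goal specification:
# the template fixes the request structure and its precondition.
@dataclass
class FlightRequest:
    origin: str
    destination: str
    travel_date: date

def precondition_holds(req: FlightRequest, today: date) -> bool:
    """Goal precondition: distinct airports and a travel date in the future."""
    return req.origin != req.destination and req.travel_date > today

# The tester instantiates the same template a user would, deliberately
# including invalid requests (past date, identical airports).
today = date(2016, 1, 1)
requests = [
    FlightRequest("AKL", "SYD", date(2016, 3, 10)),  # valid
    FlightRequest("AKL", "SYD", date(2015, 3, 10)),  # invalid: past date
    FlightRequest("AKL", "AKL", date(2016, 3, 10)),  # invalid: same airport
]
test_cases = [(r, precondition_holds(r, today)) for r in requests]
for req, expected_ok in test_cases:
    print(req.origin, req.destination, req.travel_date, expected_ok)
```

Because both roles draw on one template, every simulated request the tester generates is, by construction, a request some user could actually submit.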

The rest of the paper is organized as follows. The problem background is introduced in Section 2. The model-based test case generation is presented in Section 3. The mutation-based test case evaluation is presented in Section 4. The case study of the Amazon E-Commerce Service is reported in Section 5. The related work is reviewed in Section 6. Finally, Section 7 concludes this paper with a discussion of future work.


WSMO framework

The WSMO framework is composed of four top-level elements. An ontology is defined as a collection of terms, their definitions, and the relationships between these terms in a particular domain. The ontology is a basic building block of the WSMO framework and is used in the other elements, such as web services, goals and mediators. An ontology is described in a WSML variant, declared with the wsmlVariant keyword, and uses namespaces defined with the namespace keyword. In the

Model checking based test case generation

In this section, we present a framework for generating test cases from goal specifications using model checking in the B notation [17]. Fig. 4 shows the main steps of the proposed framework. First, the input to the framework, a goal specification (in WSML), is translated into a B specification, which is used to calculate the trap properties. Next, the trap properties, together with the B specification, are supplied to the model checker, which violates the trap properties and generates the
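The trap-property trick can be sketched in miniature. The toy below is plain Python, not B or the ProB toolchain the paper uses, and the booking-service states and events are invented for illustration: a trap property claims a coverage target is unreachable, and the counterexample path that refutes it is exactly the test case covering that target.

```python
from collections import deque

# Toy explicit-state model of a hypothetical booking service:
# state -> list of (event, next_state) transitions.
transitions = {
    "Init":     [("search", "Searched")],
    "Searched": [("select", "Selected"), ("search", "Searched")],
    "Selected": [("pay", "Booked"), ("cancel", "Init")],
    "Booked":   [],
}

def counterexample(start, target):
    """BFS 'model checker': return a shortest event sequence reaching
    `target`, i.e. the trace violating the trap property
    'target is never reached'; None if the trap property holds."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, trace = queue.popleft()
        if state == target:
            return trace
        for event, nxt in transitions[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [event]))
    return None

print(counterexample("Init", "Booked"))  # trace covering the "Booked" target
```

One trap property per coverage obligation yields one such trace per obligation; the collected traces form the generated test suite.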

Mutation-based test case evaluation

After successfully generating the test cases, the next step is to measure their quality, which reflects how good the test cases are at detecting potential errors in the program. Traditionally, two general approaches are used to evaluate the quality of test cases. The first is the coverage-based approach, which measures the coverage of different execution paths in the program by the test suite. The other is the mutation-based approach, which is based on fault
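The mutation-based measure can be sketched as follows. The function and mutants are hypothetical examples, not the paper's ECS-level mutation operators: each mutant injects one small fault into a correct implementation, and the test suite's quality is the fraction of mutants it kills.

```python
# Oracle implementation (toy example).
def original(price, qty):
    return price * qty

# Each mutant injects one small syntactic fault.
mutants = [
    lambda price, qty: price + qty,        # '*' replaced by '+'
    lambda price, qty: price * (qty + 1),  # off-by-one on qty
    lambda price, qty: price * qty,        # equivalent mutant (no behavioral change)
]

# A test case pairs inputs with the expected (oracle) output.
test_suite = [((3, 4), 12), ((5, 0), 0)]

def killed(mutant):
    """A mutant is killed if some test case distinguishes it from the oracle."""
    return any(mutant(*args) != expected for args, expected in test_suite)

kills = sum(1 for m in mutants if killed(m))
score = kills / len(mutants)
print(f"mutation score: {kills}/{len(mutants)} = {score:.2f}")
```

Note the third mutant is behaviorally equivalent to the original and can never be killed; in practice equivalent mutants must be identified and excluded before the score is interpreted.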

Case study of Amazon E-Commerce Service

We evaluated our approach on the well-known Amazon E-Commerce Service (ECS), a web service that exposes Amazon's product data and E-Commerce functionality. It contains all the functionality related to E-Commerce tasks, including searching the catalog by different parameters, searching for items, creating and managing shopping carts, looking up customers' wish lists and registries, and getting help with using an Amazon operation [1]. Fig. 7 shows the case study evaluation process. It starts with the

Testing of standard web services

A great deal of research has been conducted in the area of standard web service testing, i.e., testing web services based on standard non-semantic specifications such as WSDL, REST and BPEL, and the research is still evolving. We classify this research into three subcategories: initial work that extends WSDL, techniques that syntactically parse specifications, and those that change the specifications.

Wei Tek Tsai is the pioneer of the research in web

Conclusions and future work

In this paper, we proposed a novel approach for testing SWS from the user perspective. Unlike the traditional approaches that test web services based on the service provider’s own specifications, we utilized the user requirement specification, namely goal specification in WSMO, to generate the test cases for testing of SWS. We covered most of the testing steps, i.e., the generation, execution and evaluation of test cases derived from goal specifications. To generate test cases we proposed the

References (24)

  • A. Idani et al.

    Object-oriented concepts identification from formal B specifications

    Electron. Notes Theor. Comput. Sci.

    (2005)
  • Amazon Web Services, Amazon Associates Web Developer Guide API version 2009,...
  • P. Ammann et al.

    Coverage criteria for logical expressions

    Proceedings of the 14th International Symposium on Software Reliability Engineering. ISSRE 03

    (2003)
  • G. Dai et al.

    Contract-based testing for web services

    Proceedings of the 31st Annual International Computer Software and Applications Conference

    (2007)
  • J. Domingue et al.

    Web service modeling ontology (WSMO)-an ontology for semantic web services

    Proceedings of the W3C Workshop on Frameworks for Semantics in Web Services

    (2005)
  • W. Dong

    Testing WSDL based web service automatically

    Proceedings of WRI World Congress on Software Engineering

    (2009)
  • H. Huang et al.

    Automated model checking and testing for composite web services

    Proceedings of Eighth IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, 2005. ISORC 2005

    (2005)
  • M.S. Jokhio et al.

    Automated mutation-based test case evaluation for semantic web services

    Proceedings of the 2014 23rd Australian Software Engineering Conference

    (2014)
  • P.C. Jorgensen

    Software Testing: A Craftsman's Approach

    (2002)
  • M. Leuschel et al.

    ProB: a model checker for B

    Proceedings of International Symposium of Formal Methods Europe FME 2003

    (2003)
  • C. Mao

    A specification-based testing framework for web service-based software

    Proceedings of IEEE International Conference on Granular Computing, 2009

    (2009)
  • H. Mei et al.

    A framework for testing web services and its supporting tool

    Proceedings of the IEEE International Workshop on Service-Oriented System Engineering, SOSE 2005

    (2005)