A method for the automatic generation of test suites from object models
Introduction
Software systems are extremely complex; the amount of information contained in an implementation is hard to comprehend in its entirety. As we cannot test without first understanding what the implementation is supposed to do, we need a way to manage this complexity; one way of doing this is to create a suitable abstract model of the system.
The potential benefits of model-based testing are clear, but difficult to obtain: manually extracting behavioural information from an object-oriented model, and writing test suites to determine whether an implementation exhibits only acceptable behaviours, are time-consuming and error-prone activities. Fortunately, this test generation process can be automated.
This paper shows how test suites can be generated from a precise, abstract model, written in the Unified Modeling Language (UML) [9], while addressing particular combinations of test directives: test purposes, test constraints, and coverage criteria; the generation process is based in part upon a formal semantics for a subset of UML.
This paper is intended as an introduction to this approach. It explains the underlying principles, as well as the functionality of the prototypical toolset. The mathematical details of the behavioural semantics, and the translation from UML to the language of the toolset have been omitted for reasons of space and readability: suitable references are provided.
Adequate models
The UML provides a set of notations designed to meet the needs of typical software modeling projects. For the purposes of this paper, we are interested in three of these notations: class diagrams, state diagrams, and object diagrams. Using these, we can completely characterise the behaviour of a system at a particular level of abstraction: the class diagram identifies the entities in the system; the object diagram specifies an initial configuration; the state diagrams explain how these entities behave.
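To make the division of labour between the three diagram types concrete, the following sketch (ours, not from the paper) encodes the three views of a hypothetical one-place buffer directly in Python: the class fixes the entity and its operations, a transition table plays the role of the state diagram, and the constructor argument plays the role of the object diagram's initial configuration.

```python
# Illustrative example: the three UML views of a one-place buffer.
# All names here are ours, chosen for illustration only.

class Buffer:
    # State diagram: Empty --put--> Full --get--> Empty.
    TRANSITIONS = {
        ("Empty", "put"): "Full",
        ("Full", "get"): "Empty",
    }

    def __init__(self, state="Empty"):
        # Object diagram: the initial configuration of this instance.
        self.state = state

    def fire(self, event):
        # Only the transitions drawn in the state diagram are allowed.
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"{event!r} is not enabled in state {self.state!r}")
        self.state = self.TRANSITIONS[key]

b = Buffer()
b.fire("put")
assert b.state == "Full"
```

The point of the abstraction is visible here: the model says nothing about what is stored in the buffer, only which sequences of operations are acceptable.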
The IF language
The Intermediate Format (IF) was developed as a machine-readable interchange language: a staging point in the translation of models between higher-level specification languages and a variety of analysis tools. In IF, a system is described as a collection of finite state machines, or processes, communicating with each other by sending signals along signalroutes. Each process is described as a list of states and transitions. The arrival of a signal at a process triggers a transition, leading to a change of state and, possibly, the sending of further signals.
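The execution model just described can be sketched in a few lines of Python. This is our own hedged approximation of the IF semantics, not IF syntax: each process is a finite state machine with an input queue, and a signalroute is modelled as delivery into the destination process's queue.

```python
from collections import deque

class Process:
    """A finite state machine with an input queue (sketch of an IF process)."""
    def __init__(self, name, transitions, initial):
        self.name = name
        # (state, signal) -> (next_state, [(destination, output_signal), ...])
        self.transitions = transitions
        self.state = initial
        self.queue = deque()

def step(processes):
    """Fire one enabled transition; return True if any process fired."""
    for p in processes.values():
        if p.queue:
            sig = p.queue.popleft()
            nxt, outputs = p.transitions.get((p.state, sig), (p.state, []))
            p.state = nxt
            for dest, out_sig in outputs:
                # Sending along a signalroute: enqueue at the destination.
                processes[dest].queue.append(out_sig)
            return True
    return False

# A hypothetical two-process system: a server acknowledges a request.
client = Process("client", {("idle", "ack"): ("done", [])}, "idle")
server = Process("server", {("wait", "req"): ("wait", [("client", "ack")])}, "wait")
procs = {"client": client, "server": server}
server.queue.append("req")     # the environment injects a signal
while step(procs):
    pass
assert client.state == "done"
```

Real IF additionally supports features such as guards, timers, and dynamic process creation; the sketch above captures only the core signal-driven transition mechanism.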
Test generation
The toolset that we are working with is being developed by the academic and industrial partners in an EU-funded research project called AGEDIS (for Automated Generation and Execution of test suites for DIstributed component-based Software).
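To illustrate the kind of computation such a toolset performs, the following sketch generates test input sequences from a state-machine model under one common coverage criterion: cover every transition. This is a simplified greedy traversal of our own devising, shown for intuition only; the AGEDIS tools operate on IF models and support richer test directives (test purposes and constraints) than a bare coverage criterion.

```python
def generate_tests(transitions, initial):
    """Greedy transition-coverage test generation (illustrative sketch).

    transitions: dict mapping (state, input) -> next_state.
    Returns a list of input sequences, each applied from `initial`,
    that jointly cover every reachable transition.
    """
    uncovered = set(transitions)
    tests = []
    while uncovered:
        state, seq = initial, []
        progress = True
        while progress:
            progress = False
            for (s, inp), nxt in transitions.items():
                if s == state and (s, inp) in uncovered:
                    uncovered.discard((s, inp))
                    seq.append(inp)
                    state = nxt
                    progress = True
                    break
        if not seq:
            break  # remaining transitions not reachable by this greedy walk
        tests.append(seq)
    return tests

# The one-place buffer model again: two transitions, one test covers both.
model = {("Empty", "put"): "Full", ("Full", "get"): "Empty"}
print(generate_tests(model, "Empty"))   # e.g. [['put', 'get']]
```

Each generated sequence can then be executed against an implementation, checking after every input that the observed behaviour is one the model allows.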
Discussion
The methodology described in this paper has two features of particular interest: the use of precise UML models, with an associated formal, behavioural semantics; the way in which test directives, also defined as UML diagrams, can be used to guide the generation process. The former makes the methodology accessible to a wide range of potential users; the latter makes it possible to generate useful tests from a complex, realistic model.
The development of the methodology, and the associated
Acknowledgements
The authors gratefully acknowledge the contributions made by their partners in the AGEDIS project, and in particular the extensive feedback on language design and formalisation provided by Laurent Mounier (Verimag). They are grateful also to Jim Woodcock and Ian Craggs (IBM) for their role in initiating this research; and to IBM, for their support under the Faculty Partnership Program.
References (15)
- et al., Modeling the dynamics of UML state machines (2000)
- et al., A precise semantics of UML State Machines: making semantic variation points and ambiguities explicit, in: ETAPS 2002 (2002)
- et al., Refinement and concurrency in UML (2003)
- et al., Using state diagrams to describe concurrent behaviour (2003)
- et al., On-the-fly verification techniques for the generation of test suites (1996)
- Model Driven Architecture: Applying MDA to Enterprise Computing (2003)
- E. Gamma, K. Beck, JUnit: a regression testing framework, ...