1 Motivation

Apart from providing formal verification, model checking efficiently and automatically derives test sequences from transition system models. Automatic test generation exploits the ability of model checkers to generate counterexamples for properties that the model violates [3]. As demonstrated by Gadhari et al. [4], the model checking technique generates test cases from models more efficiently than random generation and guided simulation. Motivated by this study, we began developing SimAutoGen three years ago. We limit our scope to Simulink models because Simulink is the most popular graphical modeling language for embedded automotive software. Several model checking approaches for test case generation from MATLAB/Simulink models have already been proposed, including AutomotGen [4], SmartTestGen [9], SAL (which integrates the sal–atg tool for automatic test generation) [10], and the V&V Diversity platform [8]. In [5], we compared the performance of SimAutoGen, sal–atg, and the SLDV test case generator. Model checkers are recognized for their flexibility and ease of use [3]. However, we identified three main problems with model checkers:

  1. Test case generation with model checkers is feasible only when the available model can be handled by the model checker.

  2. Model checkers are severely limited by the state-space explosion problem.

  3. The properties given to a model checker are usually expressed in Linear Temporal Logic or Computation Tree Logic, which differ from the language of the model.

Our tool SimAutoGen addresses these problems in the context of test vector generation from Simulink models. First, SimAutoGen does not transform the Simulink model. Second, we implement a new slicing algorithm, inspired by the method described in [7], that avoids the state-space explosion problem on large-scale Simulink models. Third, the properties to be verified are expressed in the Simulink language and specified according to the structural model coverage criterion.

2 Structural Model Coverage Criteria

The structural coverage metric can be utilized in two ways: as a test adequacy criterion that decides whether a given test set completely or adequately satisfies that criterion, or as an explicit specification for test vector selection. In the second case, the structural coverage metric acts as a test selection criterion (a generator of white-box tests): because the model and the code generated from it are structurally similar, we can expect certain interrelations between the attained model coverage and the code coverage. Kirner [11] discussed the preservation of code coverage at the model level. In our work, the structural coverage metrics are employed as the test selection criterion. The test vectors generated from the Simulink models by our model checking technique must satisfy the structural coverage criterion. To accomplish this objective, we specify the Simulink properties for three control flow coverage criteria (Condition, Decision, and MC/DC) and for the boundary value analysis criterion. These four criteria are briefly described below.

  1. Condition coverage criterion: This criterion is satisfied by covering the Boolean inputs to the logical Simulink blocks.

  2. Branch/Decision coverage criterion: According to this criterion, a block with conditional behavior is covered provided that all conditional behavior has been exercised at least once. For this purpose, SimAutoGen supports the following blocks: Logical Operators, Switch, MultiportSwitch, Relational Operator, and Saturation.

  3. MC/DC coverage criterion: Chilenski [13] investigated three categories of MC/DC: Unique Cause MC/DC, Unique Cause + Masking MC/DC, and Masking MC/DC. Based on [13], we employ Masking MC/DC, in which a basic condition is masked if varying its value cannot affect the outcome of a decision because of the structure of the decision and the values of the other conditions (a small illustration follows this list). Masking MC/DC for logical operator blocks is described in [14]. Besides the properties, each block needs an assumption to ensure generation of the required test vector. In SimAutoGen, the Masking MC/DC coverage criterion is applied to the following blocks: Logical Operators, Switch, MultiportSwitch, Relational Operator, and Saturation.

  4. Boundary value analysis: This criterion ensures data coverage of the numeric-type inputs to the mathematical Simulink blocks (Sum, Product, Division, and Subtraction).
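The masking notion used in the MC/DC item above can be made concrete with a small, self-contained sketch. The Java program below is only an illustration of the coverage criterion itself, not of SimAutoGen's property weaving, and the decision (a || b) && c is a hypothetical example. It enumerates all input vectors of the decision, checks where each basic condition is masked, and prints, for each condition, one pair of vectors demonstrating its independent effect.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of the masking notion described above. The decision
// (a || b) && c and all names are hypothetical, not taken from SimAutoGen
// or from the automotive models cited in the paper.
public class MaskingMcdcSketch {

    // Hypothetical decision over three basic conditions c[0], c[1], c[2].
    static boolean decision(boolean[] c) {
        return (c[0] || c[1]) && c[2];
    }

    // A basic condition is masked in a test vector if flipping its value
    // cannot change the decision outcome, given the decision structure and
    // the values of the other conditions.
    static boolean isMasked(boolean[] v, int i) {
        boolean[] flipped = v.clone();
        flipped[i] = !flipped[i];
        return decision(v) == decision(flipped);
    }

    static String fmt(boolean[] v) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < v.length; i++) {
            sb.append(v[i] ? "T" : "F");
            if (i < v.length - 1) sb.append(",");
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        int n = 3;
        List<boolean[]> vectors = new ArrayList<>();
        for (int bits = 0; bits < (1 << n); bits++) {
            boolean[] v = new boolean[n];
            for (int i = 0; i < n; i++) v[i] = ((bits >> i) & 1) == 1;
            vectors.add(v);
        }

        // For each condition, report one pair of vectors in which only that
        // condition changes and the outcome changes; such a pair shows the
        // condition's independent effect. Masking MC/DC also accepts pairs
        // where other, masked conditions change, which this simple sketch
        // does not search for.
        for (int i = 0; i < n; i++) {
            for (boolean[] v : vectors) {
                if (!isMasked(v, i)) {
                    boolean[] partner = v.clone();
                    partner[i] = !partner[i];
                    System.out.printf("condition %d: %s -> %s flips the decision%n",
                            i, fmt(v), fmt(partner));
                    break;
                }
            }
        }
    }
}
```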

3 Software Description

We present SimAutoGen, a tool that automatically generates test vectors from MATLAB/Simulink models [2]. Our methodology is based on model checking [6]. The main highlights of the tool, which is designed for automotive controller testing, are listed below:

  1. Determines structural coverage metrics at the model level corresponding to the coverage metrics at the code level.

  2. Generates test inputs by model checking, thereby satisfying the model coverage criteria.

  3. Does not convert the Simulink model to an intermediate formal language.

  4. Specifies the test objectives (properties) directly in the Simulink language.

  5. Avoids the state-space explosion problem during model checking by enhancing an existing solution.

  6. Improves the reliability of testing, thus reducing the test-phase cost of large-scale Simulink models.

The current implementation of SimAutoGen uses the Prover Plug-In model checker [12] integrated into the Simulink Design Verifier (SLDV) tool [1]. SimAutoGen is implemented in Java (in the Eclipse environment) and extracts the relevant information from the Simulink models via a MATLAB script. This information is then used for test generation.
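To picture how the Java front end and the MATLAB extraction script interact, the sketch below launches MATLAB in batch mode and reads back an exported block list. This is only an assumed interface, not SimAutoGen's documented one: the script name extract_model_info, the output file blocks.csv, and the -batch option (available in recent MATLAB releases; older releases use -r) are all illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical sketch of a Java tool driving a MATLAB extraction script;
// the script and file names are illustrative, not SimAutoGen's.
public class ModelInfoExtractor {

    public static List<String> extract(String modelFile)
            throws IOException, InterruptedException {
        // Run the MATLAB script in batch mode; it is assumed to write one
        // line per block (name, type, inputs, outputs) into blocks.csv.
        Process matlab = new ProcessBuilder(
                "matlab", "-batch",
                String.format("extract_model_info('%s')", modelFile))
                .inheritIO()
                .start();
        if (matlab.waitFor() != 0) {
            throw new IOException("MATLAB extraction failed for " + modelFile);
        }
        return Files.readAllLines(Path.of("blocks.csv"));
    }
}
```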

4 Software Architecture

SimAutoGen is developed in the Eclipse and MATLAB environments. Its portability is ensured by the Java implementation. A structural overview of SimAutoGen is presented in Fig. 1.

Fig. 1. SimAutoGen overview

User Interface. This is a Java Swing-based application that displays the inputs and outputs of SimAutoGen. The three inputs to SimAutoGen are (1) a Simulink model (a .mdl file), (2) a user-selected structural coverage criterion, and (3) a user-selected process. The three processes, Atomic testing, Unit testing, and Slicing, are detailed in the appendix. The Atomic testing feature processes small Simulink models that require no slicing (i.e., single-output models); it is useful for preliminary implementation testing. The Unit testing feature slices large Simulink models with two or more outputs and is suitable for testing advanced implementations. The output of SimAutoGen is a set of test vectors or a set of slices. Slicing can be selected for purposes other than test vector generation.

Core Elements. SimAutoGen implements a new approach called MB–ATG [5], whose structure is described in Fig. 2. MB–ATG proceeds in three steps: the first handles large-scale Simulink models by slicing them, the second automatically generates test vectors from each slice according to the structural coverage criterion, and the third integrates the test vectors generated from the slices. The second step uses the Prover Plug-In model checker and expresses the properties in the Simulink language. The property \(\psi\) and the assumption H, like the model M, are implemented with Simulink operators called Proof Objective and Assumption, labeled P and A, respectively. Both operators are available in the SLDV library. In the third step, redundant test vectors are eliminated during integration.
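Viewed as a pipeline, the three MB–ATG steps can be outlined as in the following sketch. All type and method names here are illustrative placeholders rather than SimAutoGen's actual interfaces; in particular, the ModelChecker interface stands in for the Prover Plug-In invocation through SLDV, and the set-based duplicate removal stands in for the file comparison described later in this section.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Illustrative outline of the three MB-ATG steps; all types and method
// names are placeholders, not SimAutoGen's actual interfaces.
public class MbAtgPipeline {

    interface Slice {}        // one disjoint sub-model produced by slicing
    interface TestVector {}   // one counterexample, i.e., one test input

    interface Slicer { List<Slice> slice(String modelFile); }
    interface PropertyWeaver { Slice weave(Slice slice, String coverageCriterion); }
    interface ModelChecker { List<TestVector> check(Slice annotatedSlice); }

    static List<TestVector> generate(String modelFile, String criterion,
                                     Slicer slicer, PropertyWeaver weaver,
                                     ModelChecker checker) {
        // Step 1: cut the large-scale model into disjoint slices.
        List<Slice> slices = slicer.slice(modelFile);

        // Step 2: weave the coverage properties and assumptions into each
        // slice, then let the model checker produce counterexamples.
        List<TestVector> all = new ArrayList<>();
        for (Slice slice : slices) {
            all.addAll(checker.check(weaver.weave(slice, criterion)));
        }

        // Step 3: integrate the per-slice vectors, dropping redundant ones
        // (here simply via a set; the tool compares the exported files).
        return new ArrayList<>(new LinkedHashSet<>(all));
    }
}
```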

Fig. 2. Structure of MB–ATG

Fig. 3. Decision coverage for the Switch block

SimAutoGen implements two MB–ATG components: large-scale Simulink model slicing and test vector generation. Large-scale slicing is performed by a new slicing algorithm inspired by the static method described in [7], which constructs dependency graphs based on two dependence relations: data dependence and control dependence. The Data-store/Data-read and From/Goto block pairs were not treated in the dependence analysis of [7] because they are not connected through explicit links; instead, they communicate remotely through implicit connections. Our new algorithm models both types of links. The authors of [7] extracted the blocks corresponding to a specific slicing criterion, whereas our objective is to slice the whole model into disjoint components (slices). To this end, we trialed two methods: forward slicing and backward slicing. In forward slicing, the slicing criteria are the global inputs. This solution is problematic because most Simulink models contain Event input variables, which affect all blocks. Consequently, we adopted backward slicing, whose slicing criteria are the outputs. In particular, we compute the slices of the Simulink model by performing a backward reachability analysis and marking the relevant blocks for each output. We then remove the unmarked blocks and all empty subsystems from the model. (A subsystem is a set of blocks that is replaced by a single Subsystem block.)

The second MB–ATG component, test vector generation, has two elements: a model transformation protocol and test vector integration. The model transformation protocol parses each slice and weaves the properties and assumptions according to the block type and the user-selected structural coverage criterion. Before weaving the properties and assumptions, this protocol locates and calculates the insertion positions of \(\psi\) and H, then updates the locations of the neighboring blocks, and finally weaves the P and A operators into the Simulink model. The model transformation protocol is described in [5]. Figure 3 shows the Switch block covered according to the model decision coverage criterion, with the properties woven onto it. The transformed slice is processed by the Prover Plug-In model checker, which generates a counterexample (equivalent to a test vector). The test vectors generated from each slice are saved in an Excel (XL) file. All of these test vectors are then integrated, eliminating the repetitive and useless elements in the saved files. For this purpose, we implement a new algorithm that compares the different Excel files.
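The backward slicing step described above amounts to a backward reachability pass over a block dependency graph that contains the explicit signal lines, the control dependences, and the implicit Data-store/Data-read and From/Goto links. The sketch below assumes such a graph is already available as an adjacency map from each block to its predecessors; the representation and names are illustrative, not SimAutoGen's actual data structures. It marks the blocks reachable backwards from each output and returns one slice per output.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative backward-reachability slicer over a block dependency graph.
// The data types and names are placeholders; the real dependence analysis
// follows [7], extended with the implicit Data-store/Data-read and
// From/Goto links.
public class BackwardSlicer {

    // predecessors.get(b) = blocks that b depends on, through explicit
    // signal lines, control dependences, or implicit links.
    private final Map<String, Set<String>> predecessors;

    public BackwardSlicer(Map<String, Set<String>> predecessors) {
        this.predecessors = predecessors;
    }

    // One slice per output block: every block from which the output is
    // reachable, found by a backward traversal starting at the output.
    public Map<String, Set<String>> sliceByOutputs(List<String> outputBlocks) {
        Map<String, Set<String>> slices = new HashMap<>();
        for (String output : outputBlocks) {
            Set<String> marked = new HashSet<>();
            Deque<String> work = new ArrayDeque<>();
            work.push(output);
            while (!work.isEmpty()) {
                String block = work.pop();
                if (marked.add(block)) {
                    for (String pred : predecessors.getOrDefault(block, Set.of())) {
                        work.push(pred);
                    }
                }
            }
            slices.put(output, marked);
        }
        return slices;
    }
}
```

In the real tool, the blocks left unmarked for a given output, together with any subsystems emptied by their removal, are then deleted from that slice's copy of the model, as described above.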

5 Evaluation and Measures

5.1 Model Description

Our tool was evaluated on six automotive industrial models, classified as shown in Table 1. The FastCor and Detection models are large-scale models with 400–800 blocks. AirFlow and AirMPmp have two outputs and between 44 and 75 blocks. ThrAr and AirMnfld are smaller models with 40 blocks and a single output.

Table 1. Model descriptions
Table 2. Slice descriptions
Table 3. Measures related to the execution time of SimAutoGen

5.2 Output Description

Table 2 shows the slicing results for the four large-scale Simulink models described above. The two largest models, FastCor and Detection, are partitioned into three and five slices, respectively, whereas both medium-sized models are divided into two slices each. Splitting the models decreases the average number of inputs, blocks, and subsystems per slice, thereby avoiding the state-space explosion. The number of implicit connections is the number of hidden links between the blocks of a single slice.

Table 3 lists measures related to the execution time (in milliseconds) of the large-scale and atomic models. Here, WT, IT, and GT denote the execution times of weaving, integration, and generation over all slices, respectively. The variables TV and ITV denote the number of test vectors generated per slice and the number of integrated vectors for the entire model (after removing redundant input values), respectively. For the slicing step, we measured the parallel slicing time (PST) and the sequential slicing time (SST). Comparing the sequential and parallel execution times of the slicing algorithm shows the improvement obtained with the MATLAB Parallel Computing Toolbox; we therefore also use this toolbox in the weaving and test vector generation processes. GT, the execution time of counterexample generation, shows that the Prover Plug-In model checker consumes a large part of the total execution time.