Test Data Generation from Business Rules

URL

Change Log

When                   What
December 14th, 2015    Donated by Jianfeng Chen

Reference

Studies that use this data (in any form) are required to include the following reference:

@inproceedings{Yandrapally:2015:AMG:2818754.2818763,
 author = {Yandrapally, Rahulkrishna and Sridhara, Giriprasad and Sinha, Saurabh},
 title = {Automated Modularization of GUI Test Cases},
 booktitle = {Proceedings of the 37th International Conference on Software Engineering - Volume 1},
 series = {ICSE '15},
 year = {2015},
 isbn = {978-1-4799-1934-5},
 location = {Florence, Italy},
 pages = {44--54},
 numpages = {11},
 url = {https://dl.acm.org/citation.cfm?id=2818754.2818763},
 acmid = {2818763},
 publisher = {IEEE Press},
 address = {Piscataway, NJ, USA},
}

About the Data

Overview of Data

The site includes data for only two subjects: Ceu-pacific and JBilling. For each subject, the “.model” file contains the model created from the business rules obtained from the respective website, and the “_HighLevelTests.csv” file contains the generated tests. The CSV files include tests generated by both BUSTER and Exhaust.
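The column layout of the “_HighLevelTests.csv” files is not documented on this page. The following is a minimal sketch, assuming a plain comma-separated layout, of how a generated test file could be loaded for inspection; the file name used in the example is hypothetical and simply follows the naming convention described above.

import csv

def load_high_level_tests(path):
    # Load the generated high-level tests from a "_HighLevelTests.csv" file.
    # The exact column layout is an assumption and may differ per subject.
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        return [row for row in reader if row]

if __name__ == "__main__":
    tests = load_high_level_tests("JBilling_HighLevelTests.csv")  # hypothetical file name
    print(f"{len(tests)} rows loaded")
    for row in tests[:5]:
        print(row)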

Paper Abstract

Test cases that drive an application under test via its graphical user interface (GUI) consist of sequences of steps that perform actions on, or verify the state of, the application user interface. Such tests can be hard to maintain, especially if they are not properly modularized—that is, common steps occur in many test cases, which can make test maintenance cumbersome and expensive. Performing modularization manually can take up considerable human effort. To address this, we present an automated approach for modularizing GUI test cases. Our approach consists of multiple phases. In the first phase, it analyzes individual test cases to partition test steps into candidate subroutines, based on how user-interface elements are accessed in the steps. This phase can analyze the test cases only or also leverage execution traces of the tests, which involves a cost-accuracy tradeoff. In the second phase, the technique compares candidate subroutines across test cases, and refines them to compute the final set of subroutines. In the last phase, it creates callable subroutines, with parameterized data and control flow, and refactors the original tests to call the subroutines with context-specific data and control parameters. Our empirical results, collected using open-source applications, illustrate the effectiveness of the approach.
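The first phase described above partitions the steps of a test case into candidate subroutines based on how user-interface elements are accessed. The sketch below is not the authors' implementation; it is a minimal illustration of that idea, assuming each test step records the action, the UI element it acts on, and the page or window that element belongs to, and grouping consecutive steps into a new candidate whenever the page changes.

from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    action: str      # e.g. "click", "type", "verify"
    element: str     # identifier of the UI element acted on
    page: str        # page/window the element belongs to (assumed available)

def partition_into_candidates(steps: List[Step]) -> List[List[Step]]:
    # Group consecutive steps that stay on the same page into one candidate
    # subroutine; start a new candidate when the accessed page changes.
    candidates: List[List[Step]] = []
    current: List[Step] = []
    for step in steps:
        if current and step.page != current[-1].page:
            candidates.append(current)
            current = []
        current.append(step)
    if current:
        candidates.append(current)
    return candidates

if __name__ == "__main__":
    trace = [
        Step("type", "username", "LoginPage"),
        Step("type", "password", "LoginPage"),
        Step("click", "submit", "LoginPage"),
        Step("click", "newInvoice", "Dashboard"),
        Step("verify", "invoiceTotal", "Dashboard"),
    ]
    for i, sub in enumerate(partition_into_candidates(trace), 1):
        print(f"Candidate {i}: {[s.element for s in sub]}")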