Automated test generation for Microsoft DSL Tools

Title: Automated test generation for Microsoft DSL Tools
Author: Pat-El, B.B.
Contributors: Vermolen, S. (mentor); Bockting, S. (mentor); Van der Holst, J. (mentor); Van Deursen, A. (mentor)
Faculty: Electrical Engineering, Mathematics and Computer Science
Department: Software Engineering
Programme: Computer Science
Date: 2010-01-26

Abstract:
There are notable problems with the traditional approach to software engineering, perhaps the most significant being the cost and time to market of software systems. Microsoft's Software Factories (SFs) address these problems. One tool that supports creating SFs is Microsoft Tools for Domain-Specific Languages (DSL Tools), which allows developers to define a modeling language, generate visual designers, and transform models described in custom modeling languages into code. It provides no means of testing the transformation from model to code, however, so testing boils down to repeatedly executing the code generation with a variety of inputs. A proposed way to improve the testing process is to build a tool that supports testing the code generation process. In this report, we describe how we built a tool that automatically constructs input test models for SFs, optimizes generated test sets based on predicted test quality, supplies tests to an SF, and generates a log file from the results of the testing process. We then propose several methods, based on coverage criteria, for assessing the quality of generated tests. After that, we present the results of a case study evaluating the effectiveness of the automated test generation approach, as well as how well the coverage criteria predict test set quality. Twelve errors were exposed in the SF under test, most of them robustness errors that appear difficult to find using traditional testing approaches.
In addition, evaluating the performance of the coverage criteria indicated a relation between metamodel coverage and the number of errors found, and showed that, of all proposed coverage criteria, average metamodel coverage was the best predictor of test quality. Finally, we conclude that automated test generation is a promising approach for improving the testing process for SFs.

To reference this document use: http://resolver.tudelft.nl/uuid:4f9e5af1-bbda-4685-9045-2014171fb928
Embargo date: 2010-04-01
Part of collection: Student theses
Document type: master thesis
Rights: (c) 2010 Pat-El, B.B.
Files: Automated_Test_Generation ... _Tools.pdf (PDF, 2.29 MB)