Requirement-based Selection Model for the Expansion of Regression Test Selection

Adtha Lawanna

Abstract

A key issue in software maintenance is deciding which test cases should be kept for the next modification as the test suite grows larger, a growth that degrades the performance of software development. The objective of the proposed requirement-based test case selection model is to improve the ability of regression test selection, in particular to moderate the size of the test suite of the modified program, which grows after each modification driven by specific requirements, while providing a higher ability to remove faults. The model comprises five main algorithms: finding reused test cases, classifying, revising, deleting, and selecting the appropriate test cases. This paper uses six programs run on four comparative studies: the proposed model and the select-all, random, and regression test selection techniques. The proposed model yields a test suite about 49.78% smaller on average than the traditional techniques. Moreover, its percentage of fixed faults is about 0.06–1.32% higher than that of the select-all, random, and regression test selection algorithms.

Article Details

How to Cite
Lawanna, A. (2017). Requirement-based Selection Model for the Expansion of Regression Test Selection. Applied Science and Engineering Progress, 10(3). Retrieved from https://ph02.tci-thaijo.org/index.php/ijast/article/view/165626
Section
Research Articles

References

[1] P. Bhatt, G. Shroff, and A. K. Misra, “Dynamics of software maintenance,” ACM SIGSOFT Software Engineering Notes, vol. 29, no. 5, pp. 1–5, 2004.

[2] D. I. K. Sjoberg, B. Anda, and A. Mockus, “Questioning software maintenance metrics: A comparative case study,” in Proceedings of the ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 2012, pp. 107–110.

[3] O. Denninger, “Recommending relevant code artifacts for change requests using multiple predictors,” in Proceedings of the 3rd International Workshop on Recommendation Systems for Software Engineering, 2012, pp. 78–79.

[4] A. Lawanna, “Filtering test case selection for increasing the performance of regression testing,” KMUTNB Int J Appl Sci Technol, vol. 9, no. 1, pp. 19–25, 2016.

[5] A. Magalhaes, F. Barros, A. Mota, and E. Maia, “Automatic selection of test cases for regression testing,” in Proceedings of the 1st Brazilian Symposium on Systematic and Automated Software Testing, 2016, pp. 19–20.

[6] L. Mariani, O. Riganelli, M. Santoro, and M. Ali, “G-RankTest: Regression testing of controller applications,” in Proceedings of the 7th International Workshop on Automation of Software Test, 2012, pp. 131–137.

[7] L. Gong, D. Lo, L. Jiang, and H. Zhang, “Diversity maximization speedup for fault localization,” in Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering, 2012, pp. 30–39.

[8] A. Lawanna, “Test case based selection for the process of software maintenance,” Silapakorn University of Science and Technology Journal, vol. 7, no. 2, pp. 36–45, 2013.

[9] A. Lawanna, “Technique for test case selection in software maintenance,” Walailak Journal of Science and Technology, vol. 11, no. 2, pp. 69–77, 2014.

[10] H. Do, S. G. Elbaum, and G. Rothermel, “Supporting controlled experimentation with testing techniques: An infrastructure and its potential impact,” Empirical Software Engineering, vol. 10, no. 4, pp. 405–435, 2005.

[11] H. Do and G. Rothermel, “An empirical study of regression testing techniques incorporating context and lifetime factors and improved cost-benefit models,” in Proceedings of the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering, 2006, pp. 141–151.

[12] G. Rothermel and M. J. Harrold, “Analyzing regression test selection techniques,” IEEE Transactions on Software Engineering, vol. 22, no. 8, pp. 529–551, 1996.

[13] G. Rothermel and M. J. Harrold, “A safe, efficient regression test selection technique,” ACM Transactions on Software Engineering and Methodology, vol. 6, no. 2, pp. 173–210, 1997.

[14] M. Grindal, B. Lindstrom, J. Offutt, and S. F. Andler, “An evaluation of combination strategies for test case selection,” Empirical Software Engineering, vol. 11, no. 4, pp. 1–31, 2006.

[15] P. Hsia, X. Li, D. Kung, C.-T. Hsu, L. Li, Y. Toyoshima, and C. Chen, “A technique for the selective revalidation of OO software,” Software Maintenance: Research and Practice, vol. 9, pp. 217–233, 1997.

[16] D. L. Bird and C. U. Munoz, “Automatic generation of random self-checking test cases,” IBM Systems Journal, vol. 22, no. 3, pp. 229–245, 1983.

[17] T. Y. Chen, F.-C. Kuo, R. G. Merkel, and T. H. Tse, “Adaptive random testing: The ART of test case diversity,” Journal of Systems and Software, vol. 83, no. 1, pp. 60–66, 2010.

[18] Z. Zhou, “Using coverage information to guide test case selection in adaptive random testing,” in Proceedings of the Computer Software and Applications Conference Workshops, 2010, pp. 1–8.

[19] E. Engström, P. Runeson, and M. Skoglund, “A systematic review on regression test selection techniques,” Information and Software Technology, vol. 52, no. 1, pp. 14–30, 2010.

[20] A. Hooda and S. Panwar, “A roadmap for effective regression testing,” International Journal of Scientific and Engineering Research, vol. 7, no. 5, pp. 214–220, 2016.

[21] A. Ansari, A. Khan, A. Khan, and K. Mukadam, “Optimized regression test using test case prioritization,” in Proceedings of the 7th International Conference on Communication, Computing and Virtualization, 2016, pp. 152–160.