Investigating the Prioritization of Unit Testing Effort using Software Metrics

Fadel Toure, Mourad Badri, Luc Lamontagne

2017

Abstract

In object-oriented software, unit testing is a level of software testing where each individual class is tested by a dedicated unit test class. Unfortunately, due to time and resource constraints, this phase does not cover all classes, and testing efforts are often focused on particular classes. In this paper, we investigate an approach based on software information history to support the prioritization of classes to be tested. To achieve this goal, we first analyzed different attributes of ten open-source Java software systems for which JUnit test cases have been developed for several classes. We used mean comparison and logistic regression analysis to characterize the classes for which JUnit test classes have been developed by testers. Second, we used two classifiers trained on metrics values and unit test information collected from the selected systems. The classifiers provide, for each system, a set of classes on which unit testing efforts should be focused. The obtained sets have been compared to the sets of classes for which JUnit test classes were actually developed by testers. Results show that: (1) the metrics average values of tested classes are significantly different from those of other classes, (2) there is a significant relationship between a class's attributes and the fact that a JUnit test class has been developed for it, and (3) the sets of classes suggested by the classifiers properly reflect the testers' selection.
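The approach described above can be illustrated with a minimal sketch: fit a logistic model on object-oriented metrics (CK-style LOC, WMC, CBO, RFC) labeled by whether a JUnit test class exists, then rank candidate classes by predicted probability. This is not the paper's implementation; the class names, metric values, and hyperparameters below are invented for illustration.

```python
import math

def sigmoid(z):
    # numerically safe logistic function
    if z < -30:
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training data: [LOC, WMC, CBO, RFC] per class;
# label 1 means a dedicated JUnit test class was written for it.
classes = {
    "OrderProcessor": [600, 35, 18, 80],
    "ReportBuilder":  [450, 25, 12, 60],
    "ConfigLoader":   [120, 15,  8, 30],
    "StringUtil":     [ 25,  2,  1,  5],
    "MathHelper":     [ 20,  2,  1,  4],
    "NullChecker":    [ 15,  1,  1,  2],
}
labels = {"OrderProcessor": 1, "ReportBuilder": 1, "ConfigLoader": 1,
          "StringUtil": 0, "MathHelper": 0, "NullChecker": 0}

# Normalize each metric column to [0, 1] so gradient descent behaves.
names = list(classes)
maxima = [max(classes[n][j] for n in names) for j in range(4)]
X = [[classes[n][j] / maxima[j] for j in range(4)] for n in names]
y = [labels[n] for n in names]

# Plain stochastic gradient descent on the logistic log-loss.
w, b = [0.0] * 4, 0.0
for _ in range(5000):
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
        err = p - yi
        w = [wj - 0.5 * err * xj for wj, xj in zip(w, xi)]
        b -= 0.5 * err

# Rank classes by predicted probability of deserving a unit test class,
# focusing a limited testing budget on the top of the list.
score = {n: sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
         for n, xi in zip(names, X)}
ranking = sorted(score, key=score.get, reverse=True)
print(ranking)
```

In practice one would train on the labeled classes of historical systems and score the unlabeled classes of the system under test; the paper additionally compares the classifier-suggested sets against the testers' actual selections.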



Paper Citation


in Harvard Style

Toure F., Badri M. and Lamontagne L. (2017). Investigating the Prioritization of Unit Testing Effort using Software Metrics. In Proceedings of the 12th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE, ISBN 978-989-758-250-9, pages 69-80. DOI: 10.5220/0006319300690080


in Bibtex Style

@conference{enase17,
author={Fadel Toure and Mourad Badri and Luc Lamontagne},
title={Investigating the Prioritization of Unit Testing Effort using Software Metrics},
booktitle={Proceedings of the 12th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE},
year={2017},
pages={69-80},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006319300690080},
isbn={978-989-758-250-9},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE
TI - Investigating the Prioritization of Unit Testing Effort using Software Metrics
SN - 978-989-758-250-9
AU - Toure F.
AU - Badri M.
AU - Lamontagne L.
PY - 2017
SP - 69
EP - 80
DO - 10.5220/0006319300690080
ER -