Evaluating the Accuracy of Machine Learning Algorithms on Detecting Code Smells for Different Developers

Mário Hozano, Nuno Antunes, Baldoino Fonseca, Evandro Costa

2017

Abstract

Code smells indicate poor implementation choices that may hinder system maintenance. Detecting them is important for improving software quality, but studies suggest that detection should be tailored to the perception of each developer. Therefore, detection techniques must adapt their strategies to the developer’s perception. Machine Learning (ML) algorithms are a promising way to customize smell detection, but there is a lack of studies on their accuracy in detecting smells for different developers. This paper evaluates the use of ML algorithms in detecting code smells for different developers, considering their individual perception of code smells. We experimentally compared the accuracy of 6 algorithms in detecting 4 code smell types for 40 different developers, using a detailed dataset containing instances of these 4 smell types manually validated by the 40 developers. The results show that the ML algorithms achieved low accuracy for the developers who participated in our study and that their performance is highly sensitive to both the smell type and the developer. These algorithms are not able to learn from a limited training set, an important limitation when dealing with diverse perceptions of code smells.
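
To make the evaluation protocol described above concrete, the sketch below shows how a single learner could be trained and cross-validated on one developer's manually labelled smell instances using the WEKA toolkit, a common choice for this kind of comparison. This is only an illustrative assumption, not the authors' exact setup: the ARFF file name, the class labels, and the choice of the J48 decision tree are hypothetical.

import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class PerDeveloperSmellEvaluation {

    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF file: one row per code element, metric attributes plus
        // a nominal class {smell, not_smell} reflecting one developer's judgement.
        Instances data = DataSource.read("god_class_dev01.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // One candidate learner; the study compared several algorithms of this kind.
        J48 learner = new J48();

        // 10-fold cross-validation restricted to this developer's labels, so the
        // reported accuracy reflects how well the model fits that individual perception.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(learner, data, 10, new Random(1));

        System.out.printf("Accuracy for this developer: %.2f%%%n", eval.pctCorrect());
    }
}

Repeating this procedure per developer, per smell type, and per algorithm, and comparing the resulting accuracies, reproduces the kind of comparison the abstract describes.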



Paper Citation


in Harvard Style

Hozano M., Antunes N., Fonseca B. and Costa E. (2017). Evaluating the Accuracy of Machine Learning Algorithms on Detecting Code Smells for Different Developers. In Proceedings of the 19th International Conference on Enterprise Information Systems - Volume 2: ICEIS, ISBN 978-989-758-248-6, pages 474-482. DOI: 10.5220/0006338804740482


in BibTeX Style

@conference{iceis17,
author={Mário Hozano and Nuno Antunes and Baldoino Fonseca and Evandro Costa},
title={Evaluating the Accuracy of Machine Learning Algorithms on Detecting Code Smells for Different Developers},
booktitle={Proceedings of the 19th International Conference on Enterprise Information Systems - Volume 2: ICEIS},
year={2017},
pages={474-482},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006338804740482},
isbn={978-989-758-248-6},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 19th International Conference on Enterprise Information Systems - Volume 2: ICEIS
TI - Evaluating the Accuracy of Machine Learning Algorithms on Detecting Code Smells for Different Developers
SN - 978-989-758-248-6
AU - Hozano M.
AU - Antunes N.
AU - Fonseca B.
AU - Costa E.
PY - 2017
SP - 474
EP - 482
DO - 10.5220/0006338804740482