
the developed approach, which uses a service for tailoring QMs and exporting quality profiles for SonarQube, can be considered suitable.
Regarding RQ 2.2, which asks whether the automatic association of rules with QMs is suitable from a software quality expert's perspective, the questionnaire results and the verbal feedback from the interviewees indicate that a fully automatic rule association would not be acceptable. This is because some interviewees work in highly regulated environments where misclassifications must be avoided at all costs. However, the interviewees agreed that the ML service would be appropriate for generating an initial draft, which would then require explicit review by a software quality engineer.
To summarise the overall results of the evaluation, the survey participants rated almost all use cases of the QM tailoring and quality profile export service as useful, as expected, with only a few exceptions. In addition, the participants showed great interest in the concept of dynamic aspects, where dynamic project-related data from SonarQube is used to tailor QMs and to generate issue-specific quality profiles for SonarQube that are then used for further analysis.
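As an illustration of this data flow, the following minimal Python sketch retrieves the open issues of a project via the SonarQube Web API endpoint /api/issues/search and counts them per rule. SONAR_URL, TOKEN and PROJECT_KEY are hypothetical placeholders, and the exact parameter names may vary between SonarQube versions, so this is a sketch of the idea rather than the implementation evaluated here.

from collections import Counter

import requests

# Placeholders (not from this work): instance URL, user token, and project key.
SONAR_URL = "https://sonarqube.example.org"
TOKEN = "squ_example_token"
PROJECT_KEY = "example-project"


def open_issue_counts_per_rule(project_key: str) -> Counter:
    """Count unresolved SonarQube issues per rule key for one project."""
    counts: Counter = Counter()
    page = 1
    page_size = 500
    while True:
        response = requests.get(
            f"{SONAR_URL}/api/issues/search",
            params={
                "componentKeys": project_key,  # parameter name can differ between SonarQube versions
                "resolved": "false",           # only open (unresolved) issues
                "ps": page_size,
                "p": page,
            },
            auth=(TOKEN, ""),                  # token as user name, empty password
            timeout=30,
        )
        response.raise_for_status()
        data = response.json()
        for issue in data.get("issues", []):
            counts[issue["rule"]] += 1
        total = data.get("paging", {}).get("total", 0)
        if page * page_size >= total:
            break
        page += 1
    return counts


if __name__ == "__main__":
    per_rule = open_issue_counts_per_rule(PROJECT_KEY)
    # Rules that actually produce open issues would be kept in an issue-specific
    # quality profile; a tailoring service could prune QM nodes whose rules never fire.
    for rule, count in per_rule.most_common(10):
        print(f"{rule}: {count} open issues")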
Special interest was shown in the "fixing factor". It was suggested to use this metric to build a traffic-light system into a QM viewer or editor, where the traffic lights indicate for each sub-hierarchy of the corresponding QM whether the number of open issues is currently rising, declining, or steady.
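One possible reading of this suggestion is sketched below in Python: given a history of open-issue counts for one sub-hierarchy, the recent trend is mapped onto a traffic-light colour. The tolerance and the comparison of first and last observation are illustrative choices, not values defined in this work.

from typing import Sequence


def traffic_light(open_issue_counts: Sequence[int], tolerance: int = 0) -> str:
    """Map the trend of open issues onto a traffic-light colour:
    'red' if rising, 'green' if declining, 'yellow' if roughly steady."""
    if len(open_issue_counts) < 2:
        return "yellow"  # not enough history to judge a trend
    delta = open_issue_counts[-1] - open_issue_counts[0]
    if delta > tolerance:
        return "red"
    if delta < -tolerance:
        return "green"
    return "yellow"


if __name__ == "__main__":
    # Example: weekly open-issue counts for one sub-hierarchy of the QM.
    print(traffic_light([42, 40, 37, 35]))  # -> "green"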
Regarding the automatic rule classification using the ML model adapted and fine-tuned based on the literature research and the experiments, the participants of this evaluation were sceptical about its use in a fully automated real-world scenario, as one participant remarked with reference to their restricted environment. However, the participants overall agreed that the ML service would benefit the software quality management process if it is used not fully automatically but rather in a semi-automatic way, i.e., the classification results of the service are not treated as a final classification. Instead, they serve as suggestions for the software quality expert, who then confirms the classification manually.
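The sketch below illustrates this semi-automatic workflow in Python by separating the model's proposal from the expert's confirmation; the classify() stub, the QM node name, the example rule key and the confidence value are hypothetical placeholders rather than the actual ML service.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Suggestion:
    rule_key: str      # e.g. a SonarQube rule key (example value below)
    qm_node: str       # QM node proposed by the ML service
    confidence: float  # model confidence for the proposal


def classify(rule_key: str, rule_description: str) -> Suggestion:
    """Placeholder for the ML service; a real implementation would call the
    fine-tuned model. Here a fixed dummy proposal is returned."""
    return Suggestion(rule_key=rule_key, qm_node="Maintainability", confidence=0.72)


def finalise(suggestion: Suggestion, expert_override: Optional[str]) -> str:
    """Only the expert's decision becomes the final classification: the proposal
    is accepted as-is (override is None) or replaced by the expert's choice."""
    return expert_override if expert_override is not None else suggestion.qm_node


if __name__ == "__main__":
    proposal = classify("java:S1481", "Unused local variables should be removed")
    # The expert reviews the proposal, e.g. in a QM editor; here acceptance is simulated.
    final_node = finalise(proposal, expert_override=None)
    print(f"{proposal.rule_key} -> {final_node} (confidence {proposal.confidence:.2f})")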
Future work based on both developed services could include further practical evaluation and use on real-world projects in the context of larger companies. Especially for the ML service, the classification in a real-world setting needs to be evaluated further.