Authors:
Diogo Oliveira Santos¹; Vinicius H. S. Durelli²; Andre Takeshi Endo³ and Marcelo Medeiros Eler¹
Affiliations:
¹ University of São Paulo (EACH-USP), São Paulo, SP, Brazil
² Federal University of São João Del Rei, MG, Brazil
³ Federal University of Technology - Paraná, PR, Brazil
Keyword(s):
Accessibility, Automated, Testing, Tool, Evaluation, Random, Mobile.
Abstract:
Mobile accessibility testing is the process of checking whether a mobile app can be perceived, understood, and operated by a wide range of users. Accessibility testing tools can support this activity by automatically generating user inputs to navigate through the app under evaluation and running accessibility checks on each newly discovered screen. The algorithm that determines which user input to generate to simulate user interaction plays a pivotal role in this approach. State-of-the-art approaches usually employ a uniform random algorithm. In this paper, we compared the results of the default algorithm implemented by a state-of-the-art tool against four biased random strategies, taking into account the number of activities executed, screen states traversed, and accessibility violations revealed. Our results show that the default algorithm had the worst performance, while the algorithm biased toward different weights assigned to specific actions and widgets had the best performance.
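The biased random strategies described above can be sketched as weighted sampling over the candidate interactions available on the current screen. The following Python sketch is illustrative only: the weight values, action names, and widget classes are assumptions, not the paper's actual configuration.

```python
import random

# Illustrative weights biasing selection toward interactions more likely
# to reach new screens (these values are assumptions, not from the paper).
ACTION_WEIGHTS = {"click": 5.0, "long_click": 2.0, "scroll": 3.0, "back": 1.0}
WIDGET_WEIGHTS = {"Button": 4.0, "EditText": 2.0, "ImageView": 1.0}

def pick_action_biased(candidates, rng=random):
    """Pick one (action, widget_class) pair, biased by combined weights.

    `candidates` is a list of (action, widget_class) pairs available on the
    current screen; unknown actions or widgets fall back to weight 1.0.
    """
    weights = [
        ACTION_WEIGHTS.get(action, 1.0) * WIDGET_WEIGHTS.get(widget, 1.0)
        for action, widget in candidates
    ]
    return rng.choices(candidates, weights=weights, k=1)[0]

def pick_action_uniform(candidates, rng=random):
    """Uniform random baseline: every candidate is equally likely."""
    return rng.choice(candidates)
```

With this structure, a uniform strategy and a biased strategy differ only in the sampling step, which makes it straightforward to compare them on the same set of candidate actions per screen.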