A Human-centred Framework for Combinatorial Test Design
Maria Spichkova¹ and Anna Zamansky²
¹ School of Science, RMIT University, 414-418 Swanston Street, 3001, Melbourne, Australia
² Information Systems Department, University of Haifa, Carmel Mountain, 31905, Haifa, Israel
Keywords:
Software Quality, Testing, Formal Methods, Combinatorial Test Design.
Abstract:
This paper presents AHR, a formal framework for combinatorial test design that is Agile, Human-centred and
Refinement-oriented. The framework (i) allows us to reuse test plans developed for an abstract level at more
concrete levels; (ii) has a human-centric interface providing queries and alerts whenever the specified test plan
is incomplete or invalid; (iii) involves analysis of the testing constraints within combinatorial testing.
1 INTRODUCTION
Combinatorial Test Design (CTD) is an effective methodology for test design of complex software systems. In CTD, systems are modelled via a set of parameters, their respective values and restrictions on the value combinations, cf. (Nie and Leung, 2011; Zhang et al., 2014). The main challenge of CTD is to optimise the number of test cases while ensuring the coverage of given conditions. One of the most standard coverage requirements is pairwise testing (Nie and Leung, 2011), where every (executable) pair of possible values of system parameters is considered. Experimental work shows that using test sets that exhaustively cover a small number of parameters (such as pairwise testing) can typically detect more than 50-75% of the bugs in a program, cf. (Tai and Lei, 2002; Kuhn et al., 2004). This testing approach can be applied at different phases and scopes of testing, including end-to-end and system-level testing as well as feature-, service- and application programming interface-level testing.
The CTD approach is model-based, i.e., test plans are derived (manually or automatically) from a model of the system under test (SUT) and its environment. Therefore, when using this approach, considerable time has to be spent on generating the infrastructure for testing (including the model of the SUT) instead of hand-crafting individual tests. This also implies that only behaviour encoded in the model can be tested.
Moreover, in many cases different behaviours need to be tested at different stages of the development cycle. This leads to the need for handling multiple abstraction levels and a systematic way of bridging between them. However, providing an adequate model at a sufficient level of abstraction remains a strictly human activity, which heavily relies on the human factor. One barrier to the adoption of model-based testing (MBT) in industry is the steep learning curve of modelling notations, cf. (Grieskamp, 2006). Another barrier is the lack of state-of-the-art authoring environments. In this work, our aim is to provide the corresponding semi-automatic support for the tester and to help minimise the number of human errors as well as their impact on the system under test.
We propose AHR, a formal framework for the construction of combinatorial models across multiple levels of abstraction. The main idea is to define explicit refinement relations between elements of the model at different abstraction levels. The core features of our framework are (i) the reuse of test plans developed for an abstract level at more concrete levels; (ii) a human-centric interface providing queries and alerts to help testers whenever the specified test plans, model and/or constraints are incomplete or invalid. One of the AHR goals is to provide sufficient support to the tester by semi-automatic analysis of the model and the test plans.
Outline: The rest of the paper is organised as fol-
lows. In Section 2 we discuss the related work and the
corresponding motivation for the AHR development.
Section 3 presents the background on CTD. Section 4
provides the formal definitions that build the core of
AHR to support CTD within multiple abstraction lev-
els. In Section 5 we discuss a use case for the frame-
work application. In Section 6 we summarise the pa-
per and propose directions for future research.
2 RELATED WORK
The advantage of MBT is that testers can concentrate on the system model and constraints instead of the manual specification of individual tests. There are many approaches to model-based testing, e.g., (Dalal et al., 1999; Grieskamp, 2006). Utting et al. presented a taxonomy of MBT approaches in (Utting et al., 2012). There are also many approaches to CTD, cf. (Zhang et al., 2014; Segall et al., 2012; Farchi et al., 2014; Farchi et al., 2013; Kuhn et al., 2011). However, most of them focus on the question of how to generate test cases from a model in the most efficient way while achieving full coverage of the required system properties by the generated test cases. In our approach, we combine the ideas of CTD with the idea of a step-wise refinement of the system through the development process, also following agile modelling practices and guidelines (Hellmann et al., 2012; Talby et al., 2006). Agile software development focuses on facilitating early and fast production of working code (Turk et al., 2005; Rumpe, 2006; Hazzan and Dubinsky, 2014) by supporting iterative, incremental development, where the system is refined step by step with each iteration.
As pointed out in (Pretschner, 2005), model-based testing makes sense only if the model is more abstract than the SUT. Testing methodologies for complex systems often integrate different abstraction levels of the system representation (Broy, 2005; Spichkova, 2008). Thus, abstraction plays a key role in the process of system modelling. An important domain in which modelling with different levels of abstraction is particularly beneficial is cyber-physical systems (CPSs). Several works proposed to use a platform-independent architectural design in the early stages of system development, while pushing hardware- and software-dependent design as late as possible (Sapienza et al., 2012; Spichkova and Campetelli, 2012; Blech et al., 2014). In our previous work (Spichkova et al., 2015a) we suggested using three main meta-levels of abstraction for CPS development: abstract, virtual, and cyber-physical. The AHR framework can be applied at any of these meta-levels.
In (Segall and Tzoref-Brill, 2012) a tool for supporting interactive refinement of combinatorial test plans by the tester was presented. This tool is meant for manual modifications of existing test plans and aligns with the idea of Human-Centred Agile Test Design (Zamansky and Farchi, 2015; Spichkova et al., 2015b), where it is explicitly acknowledged that the tester's activity is not error-proof. This tool can be a good support for the tester, but it does not cover the following point, which we consider crucial for the development of complex systems: refinement-based development, where the tester works at multiple abstraction levels. We aim to cover this point in the proposed AHR framework: if we trace the refinement relations not only between the properties but also between test plans, this might help to correct possible mistakes more efficiently, as well as provide additional support if the system model is modified.
3 CTD: FORMAL BACKGROUND
In CTD a system is modelled using a finite set of system parameters A = {A_1, ..., A_n}. To each of the parameters is associated a set of corresponding values, V = {V(A_1), ..., V(A_n)}.
In what follows we use the notions of interactions between the different values of the parameters and of test coverage:
Definition 1. An interaction for a set of system parameters A is an element I ⊆ ⋃_{i=1}^{n} V(A_i), where at most one value of each parameter A_i may appear.
Definition 2. A test (or scenario) is an interaction of
size n, where n is the number of system parameters.
Definition 3. A set of tests T covers a set of interactions C (denoted by T ⊒ C) if for every c ∈ C there is some t ∈ T such that c ⊆ t.
Definition 4. A combinatorial model E of a system
with the corresponding set of parameters A is a set
of tests, which defines all tests over A that are exe-
cutable in the system.
Definition 5. A test plan is a triple Plan = (E, C, T), where E is a combinatorial model, C is a set of interactions called coverage requirements, T is a set of tests, and the relation T ⊒ C holds.
In the above terms, a pairwise test plan can be specified as any triple of the form

Plan = (E, C_pair(E), T)

where C_pair(E) is the set of all interactions of size 2 which can be extended to scenarios from E.
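These definitions translate directly into executable checks. The following minimal Python sketch is our own illustration (the paper prescribes no implementation); it encodes interactions and tests as frozensets of (parameter, value) pairs, with the hypothetical argument `values` mapping each parameter to its value set.

```python
from itertools import combinations

def is_interaction(candidate, values):
    """An interaction assigns at most one value to each parameter (Def. 1)."""
    params = [p for (p, _) in candidate]
    return (len(params) == len(set(params))
            and all(v in values[p] for (p, v) in candidate))

def covers(tests, interactions):
    """T covers C: every c in C is contained in some t in T (Def. 3)."""
    return all(any(c <= t for t in tests) for c in interactions)

def pairwise_requirements(executable_tests):
    """C_pair(E): all size-2 interactions extendable to a scenario of E."""
    return {frozenset(pair)
            for test in executable_tests
            for pair in combinations(sorted(test), 2)}
```

By construction, a triple (E, pairwise_requirements(E), E) then satisfies the coverage relation of Definition 5.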
Example 1. For a running example scenario, let us consider a cyber-physical system with two robots R_1 and R_2 that interact with each other. At some level of abstraction (let us call it Level_1), a robot can be modelled by two parameters, GM and P. Thus,

A = {GM_1, GM_2, P_1, P_2}

The system parameters GM_1 and GM_2 specify the gripper modes (which can be either closed to hold an object or open) of robots R_1 and R_2, respectively. Let us consider that at this level of abstraction the grippers have only two modes:

V(GM_1) = V(GM_2) = {open, closed}

P_1 and P_2 represent the robots' positions. We assume at this level of abstraction that the grippers of each robot have only three possible positions:

V(P_1) = V(P_2) = {pos_1, pos_2, pos_3}
In what follows, let us assume pairwise coverage requirements. We now specify a meta-operation Give(A, B) to model the scenario where the robot A hands an object to the robot B. Give(A, B) can only be performed when the grippers of both robots are in the same position, the gripper of A is closed and the gripper of B is open (where A, B ∈ {R_1, R_2} and A ≠ B). Thus, the operation Give(R_1, R_2) can be captured on Level_1 in the following constraint model M^1_Give(R_1,R_2):

P_1 = P_2 ∧ GM_1 = closed ∧ GM_2 = open    (1)

Without any constraints, we would require 36 tests to cover all possible combinations of the values, but considering the full coverage of M^1_Give(R_1,R_2), we require three tests only, cf. Table 1.
At the next level of abstraction, Level_2, we might refine both A and V to obtain a more realistic model of the system. In the next section, we introduce the notions of parameter and value refinement, which provide an explicit specification of the relations between abstraction levels to allow traceability of the model modifications and the corresponding test sets.
Table 1: Test set providing pairwise coverage for Give(R_1, R_2) on Level_1.

testID   P_1    P_2    GM_1    GM_2
test_1   pos_1  pos_1  closed  open
test_2   pos_2  pos_2  closed  open
test_3   pos_3  pos_3  closed  open
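To illustrate the numbers quoted above, the following sketch (our illustration; identifiers such as P1 and GM1 stand for the example's P_1 and GM_1) enumerates all 36 value combinations, filters them by constraint (1), and greedily selects a pairwise-covering subset, reproducing the three tests of Table 1.

```python
from itertools import product, combinations

values = {"P1": ["pos1", "pos2", "pos3"], "P2": ["pos1", "pos2", "pos3"],
          "GM1": ["open", "closed"], "GM2": ["open", "closed"]}

def constraint1(t):
    # P_1 = P_2 and GM_1 = closed and GM_2 = open, cf. (1)
    return t["P1"] == t["P2"] and t["GM1"] == "closed" and t["GM2"] == "open"

all_tests = [dict(zip(values, combo)) for combo in product(*values.values())]
executable = [t for t in all_tests if constraint1(t)]  # combinatorial model E

chosen, covered = [], set()
for t in executable:                        # greedy pairwise selection
    pairs = {frozenset(p) for p in combinations(t.items(), 2)}
    if pairs - covered:                     # test contributes new pairs
        chosen.append(t)
        covered |= pairs

print(len(all_tests), len(executable), len(chosen))   # -> 36 3 3
```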
4 REFINEMENT-BASED
DEVELOPMENT
Our framework is based on the idea of refinement:
a more concrete model can be substituted for an ab-
stract one as long as its behaviour is consistent with
that defined in the abstract model.
Definition 6. Let us consider two sets of system parameters A = {A_1, ..., A_n} and B = {B_1, ..., B_k}, with k ≥ n. We define a parameter refinement from A to B (also denoted by A ⇝ B) as a function R that maps each parameter A_i to a set of parameters from B, so that for two distinct parameters A_i and A_j, 1 ≤ i, j ≤ n, i ≠ j, the sets R(A_i) and R(A_j) are disjoint.
Definition 7. For a parameter refinement R : A ⇝ B, a value refinement V_R : V(A) → V(B) maps each value v ∈ V(A_i) to the corresponding set of values V_R(v), where

V_R(v) ⊆ ⋃_{B ∈ R(A_i)} V(B)

such that if B_j ∈ R(A_i), then for every v ∈ V(A_i), V(B_j) ∩ V_R(v) ≠ ∅.
The above definitions do not exclude the case where both R and V_R are singleton functions, i.e., functions that map each element a to the singleton {a}. For this reason we introduce the notion of concretisation.
Definition 8. If there exist a parameter refinement R and a value refinement V_R from a set of system parameters A to a set of system parameters B (where at least one of the functions R and V_R is not a singleton function), we say that B is a concretisation (strict refinement) of A with respect to R and V_R. We denote this by A ⊏_V B.
Example 2. Let us continue with the running example of two interacting robots. At Level_1, we have the set of system parameters A_Level_1 = {P_1, P_2, GM_1, GM_2}. At Level_2, we refine V(GM_1) and V(GM_2) to include an additional gripper mode mid, representing an intermediate position between open and closed (i.e., the position when the grippers are opening or closing, but not yet completely open or closed). We do not need to change the parameters GM_1 and GM_2, but we have to extend the sets V(GM_1) and V(GM_2). We also refine the abstract positions to their two-dimensional coordinates: for i ∈ {1, 2}, P_i is refined to the tuple of two new parameters X_i and Y_i, and the elements of V(P_i) are mapped to the tuples of the corresponding coordinates. Thus, at Level_2 we have

A_Level_2 = {X_1, Y_1, X_2, Y_2, GM_1, GM_2}
V(X_1) = V(X_2) = {x_1, x_2}
V(Y_1) = V(Y_2) = {y_1, y_2, y_3}
V(GM_1) = V(GM_2) = {open, closed, mid}

To represent the concretisation from Level_1 to Level_2, we specify the following relations for i ∈ {1, 2} (cf. also Figures 1 and 2):

(1) The parameter refinement GM_i^Level_1 ⇝ GM_i^Level_2 is a singleton function. The corresponding value refinements are

V_R(open) = {open} and V_R(closed) = {closed},

where mid ∈ V(GM_i^Level_2) does not have any corresponding element on Level_1.

(2) The parameter refinement P_i ⇝ (X_i, Y_i) maps an abstract position to a tuple of two-dimensional coordinates, where the corresponding value refinements are

V_R(pos_1) = {(x_1, y_1), (x_2, y_1)},
V_R(pos_2) = {(x_1, y_2), (x_2, y_2)},
V_R(pos_3) = {(x_1, y_3), (x_2, y_3)}.

Figure 1: Parameter Refinement.
Figure 2: Value Refinement.
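To make the refinement relations concrete, here is one possible encoding of the Level_1 → Level_2 concretisation of Example 2 (our sketch; the paper prescribes no concrete syntax), together with executable checks of the disjointness condition of Definition 6 and the shape of the value refinement of Definition 7.

```python
# R maps each abstract parameter to its set of concrete parameters (Def. 6).
R = {"GM1": ["GM1"], "GM2": ["GM2"],
     "P1": ["X1", "Y1"], "P2": ["X2", "Y2"]}

# V_R maps each abstract (parameter, value) to the set of concrete partial
# assignments refining it (Def. 7); mid has no abstract preimage, so it does
# not occur as a key.
V_R = {
    ("GM1", "open"):   [{"GM1": "open"}],
    ("GM1", "closed"): [{"GM1": "closed"}],
    ("GM2", "open"):   [{"GM2": "open"}],
    ("GM2", "closed"): [{"GM2": "closed"}],
    # pos_k refines to the coordinate pairs (x_1, y_k) and (x_2, y_k)
    **{(p, f"pos{k}"): [{x: "x1", y: f"y{k}"}, {x: "x2", y: f"y{k}"}]
       for (p, x, y) in (("P1", "X1", "Y1"), ("P2", "X2", "Y2"))
       for k in (1, 2, 3)},
}

# Def. 6: images of distinct abstract parameters must be pairwise disjoint.
images = list(R.values())
assert all(set(a).isdisjoint(set(b))
           for i, a in enumerate(images) for b in images[i + 1:])

# Def. 7: each refined assignment ranges exactly over the parameters R(A_i),
# so every concrete parameter in R(A_i) contributes a value to V_R(v).
assert all(set(d) == set(R[p]) for (p, _v), ds in V_R.items() for d in ds)
```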
Definition 9. A model refinement M_R is a mapping from the elements (conjuncts) of the constraint model M_i, specified on the abstraction level i over the set of parameters A, to the constraint model M_{i+1}, specified on the next abstraction level over the set of parameters B, where A ⊏_V B.
Definition 10. A test refinement T_R is a mapping from the set of tests over A to the set of tests over B, where A ⊏_V B and R : A ⇝ B is a parameter refinement with the corresponding value refinement V_R : V(A) → V(B).
The above provides a theoretical basis for tester support: given a system model based on a set of parameters A, the tester can specify explicit parameter and value refinements, which in turn induces a system model for B and the refinement relations between the sets of tests on different abstraction levels.
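Such an induced test refinement can be computed mechanically. The sketch below is one possible reading of Definition 10 (ours; the definition itself only requires a mapping to exist): for each abstract value, choose one of its refinements under V_R and merge the resulting partial concrete assignments.

```python
from itertools import product

def refine_test(abstract_test, V_R):
    """Map an abstract test to the set of concrete tests it induces:
    one concrete test per combination of value-refinement choices."""
    options = [V_R[(p, v)] for p, v in abstract_test.items()]
    refined = []
    for choice in product(*options):   # one partial assignment per parameter
        merged = {}
        for partial in choice:
            merged.update(partial)     # disjointness (Def. 6) avoids clashes
        refined.append(merged)
    return refined

# E.g., with the V_R encoding of Example 2 above, test_1 of Table 1,
# {"P1": "pos1", "P2": "pos1", "GM1": "closed", "GM2": "open"},
# yields four Level_2 candidates; the Level_2 constraint model then
# filters out the non-executable ones.
```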
5 USE CASE FOR TESTER
SUPPORT
Suppose the modeller has already constructed a model at Level_1, using the parameters from our running example on the Give meta-operation and providing the constraint model

GM_1 = closed ∧ GM_2 = open    (2)

where the information on the position is erroneously omitted because of a human error. If we generate tests automatically, we obtain 9 tests to cover the model, cf. Table 2. Let us consider that the tester decided to limit the test set to two tests only, e.g.,

{P_1: pos_1, P_2: pos_1, GM_1: closed, GM_2: open} and
{P_1: pos_2, P_2: pos_2, GM_1: closed, GM_2: open}.
The proposed framework would analyse these tests to come up with the corresponding logical constraint:

GM_1 = closed ∧ GM_2 = open ∧ (P_1 = P_2 = pos_1 ∨ P_1 = P_2 = pos_2)    (3)
Table 2: Test set providing pairwise coverage for Give(R_1, R_2) under constraint (2).

testID   P_1    P_2    GM_1    GM_2
test_1   pos_1  pos_1  closed  open
test_2   pos_1  pos_2  closed  open
test_3   pos_1  pos_3  closed  open
test_4   pos_2  pos_1  closed  open
test_5   pos_2  pos_2  closed  open
test_6   pos_2  pos_3  closed  open
test_7   pos_3  pos_1  closed  open
test_8   pos_3  pos_2  closed  open
test_9   pos_3  pos_3  closed  open
The AHR framework checks whether the coverage is achieved by the above two tests and provides the corresponding alert to the tester, along with the message that the constraint models (2) and (3) are not semantically equivalent: (3) is a stronger constraint than (2). Let us consider that the tester changes the constraint model to (3) and selects an additional test {P_1: pos_3, P_2: pos_3, GM_1: closed, GM_2: open}.
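One simple way such a semantic comparison could be realised (our sketch; the paper does not fix the analysis technique) is a brute-force check of mutual implication over all value assignments:

```python
from itertools import product

values = {"P1": ["pos1", "pos2", "pos3"], "P2": ["pos1", "pos2", "pos3"],
          "GM1": ["open", "closed"], "GM2": ["open", "closed"]}

def c2(t):  # constraint (2)
    return t["GM1"] == "closed" and t["GM2"] == "open"

def c3(t):  # constraint (3)
    return c2(t) and (t["P1"] == t["P2"] == "pos1" or
                      t["P1"] == t["P2"] == "pos2")

scenarios = [dict(zip(values, combo)) for combo in product(*values.values())]
implies_2 = all(c2(t) for t in scenarios if c3(t))   # (3) => (2): True
implies_3 = all(c3(t) for t in scenarios if c2(t))   # (2) => (3): False
print(implies_2, implies_3)  # True False: (3) is strictly stronger than (2)
```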
Next, the system model is refined as presented in Example 2. Based on the specification of the concretisation of the system parameters, the framework provides the following suggestion for the refinement of the constraint model M_Give(R_1,R_2). To increase the readability and the traceability of the refinement steps, the suggestion is provided in two forms: as the constructed constraint model, cf. (4), and as a mapping between the models on the previous and the current abstraction levels, cf. Table 3.

X_1 = X_2 ∧ Y_1 = Y_2 ∧ GM_1 = closed ∧ GM_2 = open    (4)
Table 3: Model refinement for Give(R_1, R_2).

Level_1          Level_2
P_1 = P_2        X_1 = X_2 ∧ Y_1 = Y_2
GM_1 = closed    GM_1 = closed
GM_2 = open      GM_2 = open
Depending on the semantics we give to the spatial constraints in our model, we accept this suggestion or adapt it. If we assume that the robot R_1 can give an object to the robot R_2 when their grippers have the same abstract coordinates, we accept this suggestion and the framework proceeds with the refinement of the tests. However, we might also assume at Level_2 that the robots' grippers cannot have the same coordinates (except in a collision situation), and that the robot R_1 can give an object to the robot R_2 when their grippers are at the same level, but their x-coordinates have to be different. In this case the constraint model has to be specified on Level_2 as presented by (5) and Table 4.

X_1 ≠ X_2 ∧ Y_1 = Y_2 ∧ GM_1 = closed ∧ GM_2 = open    (5)
Table 4: Corrected model refinement for Give(R_1, R_2).

Level_1          Level_2
P_1 = P_2        X_1 ≠ X_2 ∧ Y_1 = Y_2
GM_1 = closed    GM_1 = closed
GM_2 = open      GM_2 = open
For the corrected model (5), AHR generates 6 tests to achieve the coverage (cf. Table 5), and suggests the following mapping between the sets of tests:

test_1^Level_1 ⇝ {test_1^Level_2, test_2^Level_2}
test_2^Level_1 ⇝ {test_3^Level_2, test_4^Level_2}
test_3^Level_1 ⇝ {test_5^Level_2, test_6^Level_2}
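This mapping can be recomputed mechanically. The following sketch (our illustration) refines each Level_1 test of Table 1 via the value refinement of Example 2 and keeps only the Level_2 candidates satisfying constraint (5), recovering the two concrete tests per abstract test shown above.

```python
from itertools import product

level1_tests = [{"P1": p, "P2": p, "GM1": "closed", "GM2": "open"}
                for p in ("pos1", "pos2", "pos3")]        # Table 1

def refinements(t):
    """All coordinate choices induced by V_R for the two abstract positions."""
    k1, k2 = t["P1"][-1], t["P2"][-1]                     # "pos1" -> "1"
    for x1, x2 in product(("x1", "x2"), repeat=2):
        yield {"X1": x1, "Y1": f"y{k1}", "X2": x2, "Y2": f"y{k2}",
               "GM1": t["GM1"], "GM2": t["GM2"]}

def c5(t):  # constraint (5)
    return (t["X1"] != t["X2"] and t["Y1"] == t["Y2"] and
            t["GM1"] == "closed" and t["GM2"] == "open")

for i, t in enumerate(level1_tests, 1):
    survivors = [r for r in refinements(t) if c5(r)]
    print(f"test_{i} -> {len(survivors)} Level_2 tests")  # 2 each, cf. Table 5
```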
Traceability not only of the modifications in the system parameters, but also between constraint models and between test plans, helps to correct possible mistakes more efficiently, as well as provides additional support if the system model is modified.
Table 5: Test set providing coverage for Give(R_1, R_2) on Level_2 under the constraint (5).

testID   X_1  X_2  Y_1  Y_2  GM_1    GM_2
test_1   x_1  x_2  y_1  y_1  closed  open
test_2   x_2  x_1  y_1  y_1  closed  open
test_3   x_1  x_2  y_2  y_2  closed  open
test_4   x_2  x_1  y_2  y_2  closed  open
test_5   x_1  x_2  y_3  y_3  closed  open
test_6   x_2  x_1  y_3  y_3  closed  open
For example, if at some stage a new constraint is identified, stating that the meta-operation Give(R_1, R_2) is not possible when the robots' grippers are in the position pos_2, the required changes in the models and the corresponding test plans for all concretisations of the model can easily be identified. Moreover, the AHR framework also allows the analysis of several branches of the refinement.
6 CONCLUSIONS
This paper presents our ongoing work on human-centred testing. We propose a formal framework for combinatorial test design that is Agile, Human-centred and Refinement-oriented.¹ The framework

- allows us to reuse test plans developed for an abstract level at more concrete levels;
- has a human-centric interface providing queries and alerts whenever the specified test plan is incomplete or invalid;
- involves analysis of the testing constraints.

We integrate the ideas of refinement-based development and agile CTD, aiming at increasing the readability and understandability of tests, to conform with the ideas of human-oriented software development, cf. (Spichkova et al., 2013; Spichkova, 2013).
A further direction of future work is the implementation of a tool prototype for the proposed framework. To this end we plan to connect the prototype with the environment of IBM Functional Coverage Unified Solution, cf. (Segall and Tzoref-Brill, 2012; Wojciak and Tzoref-Brill, 2014), which is a tool for test-oriented system modelling, focused on model-based test planning and functional coverage analysis.
¹ The second author was supported by The Israel Science Foundation under grant agreement no. 817/15.
REFERENCES
Blech, J. O., Spichkova, M., Peake, I., and Schmidt, H. (2014). Cyber-virtual systems: Simulation, validation & visualization. In Proc. of the 9th International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE 2014).

Broy, M. (2005). Service-oriented systems engineering: Specification and design of services and layered architectures. The JANUS approach. Engineering Theories of Software Intensive Systems, pages 47–81.

Dalal, S. R., Jain, A., Karunanithi, N., Leaton, J., Lott, C. M., Patton, G. C., and Horowitz, B. M. (1999). Model-based testing in practice. In Proc. of the 21st International Conference on Software Engineering, pages 285–294. ACM.

Farchi, E., Segall, I., and Tzoref-Brill, R. (2013). Using projections to debug large combinatorial models. In Proc. of the International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pages 311–320. IEEE.

Farchi, E., Segall, I., Tzoref-Brill, R., and Zlotnick, A. (2014). Combinatorial testing with order requirements. In Proc. of the International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pages 118–127. IEEE.

Grieskamp, W. (2006). Multi-paradigmatic model-based testing. In Formal Approaches to Software Testing and Runtime Verification, pages 1–19. Springer.

Hazzan, O. and Dubinsky, Y. (2014). The agile manifesto. In Agile Anywhere, pages 9–14. Springer International Publishing.

Hellmann, T. D., Sharma, A., Ferreira, J., and Maurer, F. (2012). Agile testing: Past, present, and future – charting a systematic map of testing in agile software development. In Proc. of the Agile Conference (AGILE), 2012, pages 55–63. IEEE.

Kuhn, D. R., Wallace, D. R., and Gallo, Jr., A. M. (2004). Software fault interactions and implications for software testing. IEEE Transactions on Software Engineering, 30(6):418–421.

Kuhn, R., Kacker, R., Lei, Y., and Hunter, J. (2011). Combinatorial software testing. IEEE Computer, 42(8):94–96.

Nie, C. and Leung, H. (2011). A survey of combinatorial testing. ACM Comput. Surv., 43(2):11:1–11:29.

Pretschner, A. (2005). Model-based testing in practice. In FM 2005: Formal Methods, pages 537–541. Springer.

Rumpe, B. (2006). Agile test-based modeling. In Proc. of the 2006 International Conference on Software Engineering Research & Practice (SERP). CSREA Press.

Sapienza, G., Crnkovic, I., and Seceleanu, T. (2012). Towards a methodology for hardware and software design separation in embedded systems. In Proc. of the ICSEA, pages 557–562. IARIA.

Segall, I. and Tzoref-Brill, R. (2012). Interactive refinement of combinatorial test plans. In Proc. of the 34th International Conference on Software Engineering, pages 1371–1374. IEEE Press.

Segall, I., Tzoref-Brill, R., and Zlotnick, A. (2012). Common patterns in combinatorial models. In Proc. of the International Conference on Software Testing, Verification and Validation (ICST), pages 624–629. IEEE.

Spichkova, M. (2008). Refinement-based verification of interactive real-time systems. Electronic Notes in Theoretical Computer Science, 214:131–157.

Spichkova, M. (2013). Design of formal languages and interfaces: formal does not mean unreadable. In Emerging Research and Trends in Interactivity and the Human-Computer Interface. IGI Global.

Spichkova, M. and Campetelli, A. (2012). Towards system development methodologies: From software to cyber-physical domain. In Proc. of the International Workshop on Formal Techniques for Safety-Critical Systems.

Spichkova, M., Liu, H., and Schmidt, H. (2015a). Towards quality-oriented architecture: Integration in a global context. In Proc. of the European Conference on Software Architecture Workshops, page 64. ACM.

Spichkova, M., Zamansky, A., and Farchi, E. (2015b). Towards a human-centred approach in modelling and testing of cyber-physical systems. In Proc. of the International Workshop on Automated Testing for Cyber-Physical Systems in the Cloud.

Spichkova, M., Zhu, X., and Mou, D. (2013). Do we really need to write documentation for a system? In Proc. of the International Conference on Model-Driven Engineering and Software Development (MODELSWARD'13).

Tai, K.-C. and Lei, Y. (2002). A test generation strategy for pairwise testing. IEEE Transactions on Software Engineering, 28(1):109–111.

Talby, D., Keren, A., Hazzan, O., and Dubinsky, Y. (2006). Agile software testing in a large-scale project. IEEE Software, 23(4):30–37.

Turk, D., France, R. B., and Rumpe, B. (2005). Assumptions underlying agile software development processes. Journal of Database Management, 16:62–87.

Utting, M., Pretschner, A., and Legeard, B. (2012). A taxonomy of model-based testing approaches. Software Testing, Verification and Reliability, 22(5):297–312.

Wojciak, P. and Tzoref-Brill, R. (2014). System level combinatorial testing in practice – the concurrent maintenance case study. In Proc. of the International Conference on Software Testing, Verification, and Validation, ICST '14, pages 103–112. IEEE Computer Society.

Zamansky, A. and Farchi, E. (2015). Helping the tester get it right: Towards supporting agile combinatorial test design. In Proc. of the Human-Oriented Formal Methods workshop (HOFM 2015).

Zhang, J., Zhang, Z., and Ma, F. (2014). Introduction to combinatorial testing. In Automatic Generation of Combinatorial Test Data, pages 1–16. Springer.