chronicle, events are ordered and temporal orders of events are quantified with numerical bounds (Sellami et al., 2019).
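A chronicle pairs a set of event types with numerical interval bounds on the delays between them. The following minimal sketch illustrates such a structure; the class and field names are illustrative assumptions, not the representation used by Sellami et al.

```python
from dataclasses import dataclass, field

@dataclass
class Chronicle:
    """A chronicle: ordered event types plus interval constraints on the
    delay between pairs of events (names are illustrative)."""
    events: list = field(default_factory=list)       # ordered event types
    constraints: dict = field(default_factory=dict)  # (i, j) -> (t_min, t_max)

    def add_constraint(self, i, j, t_min, t_max):
        # events[j] must occur between t_min and t_max seconds after events[i]
        self.constraints[(i, j)] = (t_min, t_max)

# Toy chronicle: a failure follows a 'temperature_high' event
# after between 30 and 120 seconds.
c = Chronicle(events=["temperature_high", "failure"])
c.add_constraint(0, 1, 30, 120)
```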
5.1.1 Classification of Failure Criticality using ECM
After obtaining the chronicles, we then generate synthetic data for the estimated maintenance cost. To do this, maintenance costs are generated as uniformly distributed random numbers in [0, 100]. In the generated data, each maintenance cost value is associated with a failure, indicating the estimated maintenance cost caused by that failure. In addition to the temporal constraints of failures, the maintenance cost is considered as the second descriptor of failure criticality. The third step is to apply ECM to the synthetic data set to determine the criticality of failures based on their temporal constraints and estimated maintenance costs. Following the evidential clustering approach introduced in Section 4.1, we obtained the final level of criticality of the failures described in the chronicles. Finally, the extracted frequent chronicles are transformed into SWRL predictive rules (using Algorithm 1), and the ECM classification results are also formalized by these rules. The following subsections present the different steps of our experimentation in detail.
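The synthetic-cost step above can be sketched as follows; the failure identifiers and the seed are illustrative assumptions.

```python
import random

random.seed(42)  # fixed seed for reproducibility (an assumption)

# Hypothetical identifiers for extracted failure chronicles.
failures = ["FC1", "FC2", "FC3"]

# Draw one estimated maintenance cost per failure, uniformly
# distributed in [0, 100], mirroring the step described above.
maintenance_cost = {f: random.uniform(0, 100) for f in failures}
```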
Table 1 shows the 10 failure chronicles (FC) with the highest chronicle support (CS) among all extracted ones. In this table, the numeric values of the minimum time duration ($Min_{TD}$, time unit: second) between the last normal events and the failures, the estimated maintenance cost (EMC) for each chronicle, and the pignistic probability of the final criticality (PPFC) are presented. For the classification results, the final level of a failure's criticality is shown inside the brackets in the last column of the table.
5.1.2 The Generation of SWRL Rules based on
Chronicles and ECM Results
To formalize the failure classification results and to predict the criticality of future failures, we generated SWRL rules based on the obtained chronicles and ECM classification results. To do this, Algorithm 1 was used to transform the failure chronicles and ECM classification results into predictive SWRL rules. Fig. 2 presents an example SWRL rule generated following our approach.
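A rough sketch of how a chronicle and its ECM-assigned criticality might be serialized into an SWRL-style rule string. The predicate and variable names here are hypothetical placeholders; the actual ontology vocabulary is defined by the paper's Algorithm 1, not reproduced here.

```python
def chronicle_to_swrl(event_type, t_min, criticality):
    """Build an SWRL-style predictive rule as a string.

    All predicate names (Event, hasType, occursBefore, Failure,
    hasCriticality) are illustrative assumptions.
    """
    antecedent = (f'Event(?e) ^ hasType(?e, "{event_type}") ^ '
                  f'occursBefore(?e, ?f, {t_min}) ^ Failure(?f)')
    consequent = f'hasCriticality(?f, "{criticality}")'
    return f"{antecedent} -> {consequent}"

rule = chronicle_to_swrl("temperature_high", 30, "high")
```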
To evaluate the quality of the SWRL rules, two measures are computed. The first measure is Accuracy, computed by Equation 11, where $n_{rc}$ is the number of training examples that are covered by a rule R and belong to the class C, and $n_{r\bar{c}}$ is the number of training examples that are covered by a rule R but do not belong to the class C. The second measure is Coverage, computed by Equation 12, where $n_{\bar{r}c}$ is the number of training examples that are not covered by a rule R but belong to the class C.
\[
\mathrm{Accuracy}(R) = \frac{n_{rc}}{n_{rc} + n_{r\bar{c}}}. \tag{11}
\]

\[
\mathrm{Coverage}(R) = \frac{n_{rc}}{n_{rc} + n_{\bar{r}c}}. \tag{12}
\]
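These two measures reduce to simple ratios over three counts. A sketch with illustrative toy counts:

```python
def rule_accuracy(n_rc, n_r_cbar):
    """Accuracy (Eq. 11): fraction of examples covered by rule R
    that actually belong to class C."""
    return n_rc / (n_rc + n_r_cbar)

def rule_coverage(n_rc, n_rbar_c):
    """Coverage (Eq. 12): fraction of class-C examples that are
    covered by rule R."""
    return n_rc / (n_rc + n_rbar_c)

# Toy counts: 40 covered and correct, 10 covered but wrong,
# 20 in class C but not covered by the rule.
acc = rule_accuracy(40, 10)   # 40 / 50 = 0.8
cov = rule_coverage(40, 20)   # 40 / 60 = 0.666...
```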
We use the above two equations to obtain the average values of Accuracy and Coverage for the SWRL rules. Table 2 presents the two measures under different chronicle support thresholds. We can observe from the table that as the chronicle support increases, the accuracy of the rules also increases. This is reasonable: as the minimum support threshold for extracted chronicles increases, we obtain more relevant chronicles. On the other hand, as the number of extracted rules decreases, fewer sequences are covered by the rules. This is why the average value of Coverage shows a downward trend.
5.2 Experimentation on a Real-world
Data Set
To evaluate the performance of the prediction and failure classification, we apply ECM on a real-world data set. The real-world data set is called SECOM (Dua and Graff, 2017), which contains measurements of features of semiconductor production within a semiconductor manufacturing process.
We first compute the hard credal partition on the SECOM data set. In total, at most $2^{|\Theta|}$ focal sets can be obtained through the credal partition, where $\Theta$ is the frame of discernment. In our experimentation, $\Theta$ represents the three levels of failure criticality. For the SECOM data set, we only use the temporal constraints of failures as the descriptor for criticality. The data points whose highest mass lies on the empty set are removed as outliers before the remaining points are assigned to clusters.
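The hard credal partition and the empty-set outlier filter can be sketched as follows: each point is assigned to its maximum-mass focal set, and points whose maximum falls on the empty set are discarded. The mass values and focal-set labels below are illustrative toy data, not the SECOM results.

```python
import numpy as np

# Toy mass matrix: one row per data point, one column per focal set.
# Column 0 is the empty set; the remaining labels are illustrative.
focal_sets = ["empty", "low", "low+medium", "medium", "medium+high", "high"]
masses = np.array([
    [0.05, 0.70, 0.10, 0.05, 0.05, 0.05],   # clearly 'low'
    [0.80, 0.05, 0.05, 0.05, 0.03, 0.02],   # outlier: empty set dominates
    [0.05, 0.05, 0.10, 0.60, 0.15, 0.05],   # clearly 'medium'
])

# Hard credal partition: assign each point to its maximum-mass focal set.
hard = masses.argmax(axis=1)

# Points whose highest mass falls on the empty set are treated as
# outliers and removed before cluster assignment.
keep = hard != 0
labels = [focal_sets[i] for i in hard[keep]]
# labels == ["low", "medium"]
```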
Fig. 3 shows the hard credal partition computed on the SECOM data set with the following parameters: $\alpha = 1$, $\beta = 2$, $\delta = 10$, and $\varepsilon = 10^{-3}$. As a result, 6 focal elements are obtained, including the universal set $\Theta$ and $\{\omega_l, \omega_{lm}, \omega_m, \omega_{mh}, \omega_h\}$. Each of these subsets of $\Theta$ is represented by its convex hull. Among them, $\omega_l$ is the focal set representing the low criticality class, $\omega_m$ is the focal set representing the medium criticality class, and $\omega_h$ is the focal set representing the high criticality class. $\omega_{lm}$ is the hesitation between the $\omega_l$ and $\omega_m$ classes, which is $\{\omega_l, \omega_m\}$. $\omega_{mh}$ is the hesitation between the $\omega_m$ and $\omega_h$ classes, which means