Network (RNN), and Autoencoder (AE) with a Softmax function. For simplicity, we define 1-MLP and
2-MLP for MLP with one and two hidden layers, re-
spectively. MLP is a class of feedforward neural networks; in addition to the input and output layers, an MLP contains one or more hidden layers, and neurons in adjacent layers are fully interconnected.
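As an illustration of the 1-MLP and 2-MLP naming, the sketch below builds both variants with Keras; the hidden-layer width, input dimension, and number of fault classes are assumed values rather than AFDM's configuration.

```python
# Sketch of 1-MLP and 2-MLP (one vs. two hidden layers). Sizes are assumed.
from tensorflow.keras import layers, Sequential

input_dim, hidden, n_classes = 1024, 128, 4       # assumed dimensions

mlp_1 = Sequential([                              # 1-MLP: one hidden layer
    layers.Input(shape=(input_dim,)),
    layers.Dense(hidden, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])

mlp_2 = Sequential([                              # 2-MLP: two hidden layers
    layers.Input(shape=(input_dim,)),
    layers.Dense(hidden, activation="relu"),
    layers.Dense(hidden, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
```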
RNN leverages sequential information by feeding the output of the previous step back as input to the next step, which is beneficial for recognizing patterns in time-series data, as in text and speech recognition. AE learns a compact representation of the input data and is therefore suitable for dimension reduction: it extracts features from the input data and generates reduced representations from which the original data can be reconstructed. An AE model contains an encoder that extracts features and a decoder that reconstructs the input data. By attaching a Softmax function to the output of the encoder, an AE model can perform data classification.
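To make the AE (Softmax) variant concrete, the following is a minimal Keras sketch of an autoencoder whose encoder output feeds both a decoder and a Softmax classification head; the layer sizes, input dimension, and number of fault classes are illustrative assumptions, not values prescribed by AFDM.

```python
# Minimal sketch of AE (Softmax): an autoencoder whose encoder output is
# also fed to a Softmax head for classification. Sizes are assumed values.
from tensorflow.keras import layers, Model

input_dim, latent_dim, n_classes = 1024, 32, 4            # assumed dimensions

x_in = layers.Input(shape=(input_dim,))
h = layers.Dense(256, activation="relu")(x_in)            # encoder: extract features
z = layers.Dense(latent_dim, activation="relu")(h)        # reduced representation
h_dec = layers.Dense(256, activation="relu")(z)           # decoder: rebuild the signal
x_rec = layers.Dense(input_dim, activation="linear")(h_dec)
y_out = layers.Dense(n_classes, activation="softmax")(z)  # Softmax classification head

autoencoder = Model(x_in, x_rec)   # trained with a reconstruction loss
classifier = Model(x_in, y_out)    # trained (or fine-tuned) to predict fault classes
autoencoder.compile(optimizer="adam", loss="mse")
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
```

In practice, the autoencoder can first be trained on reconstruction alone, and the classifier then trained or fine-tuned on labeled fault data.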
The upper rectangle in Figure 1 shows the imple-
mentation of the Training phase. As illustrated in
the figure, the default methods for Data Preprocessing are 'Untransformed,' 'FFT,' and 'DWT,' where 'Untransformed' means no preprocessing is applied and the data are forwarded to the next phase unchanged. The
default classification models for Model Training in
Phase I include 1-MLP, 2-MLP, RNN, and AE (Soft-
max). An administrator can extend the preprocessing
methods and classification models listed in Phase I as
needed.
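For illustration, a minimal sketch of how the three default preprocessing options could be implemented is shown below; the helper function `preprocess`, the use of NumPy and PyWavelets, and the 'db4' wavelet with a level-3 decomposition are assumptions, not part of AFDM's specification.

```python
# Hypothetical preprocessing helper covering the three default options:
# 'untransformed', 'FFT', and 'DWT'. Wavelet family and level are assumed.
import numpy as np
import pywt

def preprocess(signal: np.ndarray, method: str = "untransformed") -> np.ndarray:
    if method == "untransformed":
        return signal                                  # forwarded as-is
    if method == "FFT":
        return np.abs(np.fft.rfft(signal))             # magnitude spectrum
    if method == "DWT":
        coeffs = pywt.wavedec(signal, "db4", level=3)  # multi-level wavelet decomposition
        return np.concatenate(coeffs)                  # flatten coefficients into features
    raise ValueError(f"Unknown preprocessing method: {method}")
```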
3.3 Phase II: Evaluation
AFDM mainly targets ranking and recommending the best-fit classification model to an administrator for analyzing and predicting faults of machine tools. Based on
the results of analyzing the raw signals collected from
the machine tools in the factory, AFDM makes rec-
ommendations to the administrator. Thus, AFDM
needs to be able to handle different types of signals
provided by different types of machine tools. For this
purpose, AFDM has to evaluate different classifica-
tion models’ performance (e.g., prediction accuracy)
and find the best-fit model for the specified machine
tool(s).
In this phase, AFDM evaluates the classification models trained in the previous phase. The Evaluation phase contains two significant operations: Data Preprocessing and Model Evaluation, as illustrated in Figure 1. The Data Preprocessing operation is the same as in the Training phase. Signals are forwarded to Model Evaluation as they are when 'Untransformed' is selected; they are processed and then forwarded when 'FFT,' 'DWT,' or another data preprocessing method is selected. The Model Evaluation operation works similarly to the model testing operation in other research. After testing the classification models
trained in Phase I with the preprocessed data, AFDM
calculates the performance of those trained models in terms of different metrics, including accuracy (Acc), precision (Pre), recall (Rec), F1-score (F1), training time ($Time_{tr}$), and testing time ($Time_{tst}$) (Ali et al., 2017; Mehdiyev et al., 2016).
The first four metrics, Acc, Pre, Rec, and F1, are
defined by the confusion matrix for a two-class clas-
sification problem. The training time $Time_{tr}$ is the computation time required for training and tuning a classification model, and the testing time $Time_{tst}$ is the computation time required for making a single prediction. AFDM uses these metrics to evaluate and rank the candidate classification models trained in Phase I.
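A minimal sketch of how these six metrics could be collected for one candidate model is given below, using scikit-learn for the confusion-matrix-based metrics and wall-clock timing for $Time_{tr}$ and $Time_{tst}$; the `evaluate` helper, its scikit-learn-style fit/predict model interface, and the macro averaging are assumptions.

```python
# Sketch of evaluating one trained candidate: Acc, Pre, Rec, F1, plus
# training time and per-prediction testing time. Averaging scheme is assumed.
import time
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(model, X_train, y_train, X_test, y_test):
    t0 = time.perf_counter()
    model.fit(X_train, y_train)                            # Time_tr: training (and tuning) time
    time_tr = time.perf_counter() - t0

    t0 = time.perf_counter()
    y_pred = model.predict(X_test)
    time_tst = (time.perf_counter() - t0) / len(X_test)    # Time_tst: time per single prediction

    return {
        "Acc": accuracy_score(y_test, y_pred),
        "Pre": precision_score(y_test, y_pred, average="macro"),
        "Rec": recall_score(y_test, y_pred, average="macro"),
        "F1": f1_score(y_test, y_pred, average="macro"),
        "Time_tr": time_tr,
        "Time_tst": time_tst,
    }
```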
3.4 Phase III: Selection
According to the evaluation results obtained in Phase
II, AFDM can rank the classification models trained
in Phase I. The Selection phase defines two opera-
tions: Model Ranking and Model Selection. Model
Ranking ranks the classification models by the met-
rics defined in the Evaluation phase and the pref-
erences specified by a factory administrator. Since
AFDM ranks models with multiple metrics, Phase III
deals with an MCDM problem, so we cannot simply
apply a sorting algorithm to rank these models. Several algorithms address MCDM problems, such as the Analytic Hierarchy Process (AHP), the Adjusted Ratio of Ratios (ARR), and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). This research adopts TOPSIS in AFDM. Conceptually, TOP-
SIS selects a positive ideal (best) solution and a neg-
ative ideal (worst) solution for each criterion (metric)
and then ranks each candidate solution with its Rela-
tive Closeness (RC). The definition of RC is:
\[
RC = \frac{S^{-}}{S^{*} + S^{-}}, \qquad 0 \le RC \le 1. \tag{1}
\]
Equation (1) is the ratio of the candidate's distance to the negative ideal solution ($S^-$) to the sum of its distances to the positive ideal solution ($S^*$) and the negative ideal solution; a higher RC therefore represents a better solution, which should receive a higher ranking.
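A minimal sketch of this ranking step is given below; the `topsis_rank` helper, the vector normalization, the weighting scheme, and the treatment of the time metrics as costs (lower is better) are assumptions about details that Equation (1) itself does not fix.

```python
# Hypothetical TOPSIS ranking of evaluation results. Each row of `scores`
# is one model, each column one metric; `benefit` marks metrics where
# larger is better (e.g., Acc) as opposed to costs (e.g., Time_tr, Time_tst).
import numpy as np

def topsis_rank(scores: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    norm = scores / np.linalg.norm(scores, axis=0)            # vector normalization per metric
    v = norm * weights                                        # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))   # positive ideal (best) solution
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative ideal (worst) solution
    s_pos = np.linalg.norm(v - ideal, axis=1)                 # S*: distance to positive ideal
    s_neg = np.linalg.norm(v - worst, axis=1)                 # S-: distance to negative ideal
    rc = s_neg / (s_pos + s_neg)                              # Relative Closeness, Eq. (1)
    return np.argsort(-rc)                                    # model indices, best first
```

Rows of `scores` would hold one trained model each, with columns such as Acc, Pre, Rec, F1, $Time_{tr}$, and $Time_{tst}$ (the two time metrics marked as costs); the resulting order is the kind of ranking that Model Selection then presents to the administrator.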
With TOPSIS, AFDM can rank the classification models and generate an ordered list of models (ranking). After obtaining the list, the admin-
istrators can select the best-fit classification model for
their factory according to the ranking, experience, or
other considerations. The rectangle of Phase III in