
 
 
 
Figure 1: Experimental bench for the cutting tests.
The cutting tests were conducted on a Mikron UCP800 Duro five-axis machining centre. The thrust force was measured by a Kistler 9253823 dynamometer. The resulting signals were converted into output voltages, and these voltage signals were then amplified by a Kistler 5070 multichannel charge amplifier. The force signals were simultaneously recorded by an NI PXIe-1802 data recorder. A 300M steel workpiece material was adopted. The spindle speed was kept constant at 1000 rpm and the feed rate was 400 mm/min. A cutting depth of 2 mm and a cutting width of 2 mm were used. A real-time cutting signal recorded with a dull tool is shown in Figure 2.
 
Figure 2: The cutting signal in the time domain.
The time-domain signals are treated as interval-valued observations with a general error of ±5%, and wavelet packet decomposition is then applied. The root-mean-square values of the wavelet coefficients at the different scales are taken as the feature observation vector. The training procedure for finding the optimal model is then carried out. The convergence curve of the log-likelihood is shown in Figure 3, which confirms the convergence of the GHMM training process.
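As an illustration of this feature-extraction step, the following Python sketch decomposes a force signal with a wavelet packet transform and takes the root-mean-square value of the coefficients in each terminal node as one entry of the observation vector. The wavelet family ("db4"), the decomposition level, and the synthetic test signal are illustrative assumptions, not values reported above.

import numpy as np
import pywt

def wavelet_rms_features(signal, wavelet="db4", level=3):
    # Wavelet packet decomposition of a 1-D signal; each terminal node at
    # the chosen level corresponds to one frequency band (scale).
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    # The RMS of the coefficients in each node forms one component of the
    # feature observation vector.
    return np.array([np.sqrt(np.mean(node.data ** 2)) for node in nodes])

if __name__ == "__main__":
    # Synthetic stand-in for the measured thrust-force signal.
    t = np.linspace(0.0, 1.0, 4096)
    force = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
    print(wavelet_rms_features(force))   # 2**level RMS values, one per band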
 
Figure 3: The training convergence curve. 
5  CONCLUSIONS 
Two kinds of uncertainty can be encountered in engineering applications. In the GHMM, aleatory uncertainty is described by the probability measure, while epistemic uncertainty is modelled by the generalized interval. In this paper, generalized convex and concave functions based on the generalized interval are proposed in order to derive the generalized Jensen inequality. An optimization method for training the GHMM, as a generalization of the Baum-Welch algorithm, is then proposed. The interval-valued observation sequence is viewed separately as a lower-bound and an upper-bound observation sequence, and the generalized Baum-Welch auxiliary function together with the generalized Jensen inequality is used. Similar to HMM training, a set of training equations has been derived by optimizing the objective function, and the lower- and upper-bound re-estimation formulas have been obtained from the unique maximum of that function. With the multiple-observation concept, a group of GHMM re-estimation formulas has been derived. According to the multiple-observation EM algorithm, the method guarantees a local maximum of the lower and upper bounds and hence the convergence of the GHMM training process.
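A minimal sketch of this training scheme, under simplifying assumptions, is given below: the interval-valued observation sequence is split into its lower-bound and upper-bound sequences, and a standard Baum-Welch (EM) pass is run on each to obtain the two bounding parameter sets. The hmmlearn GaussianHMM is used here only as a stand-in for the re-estimation formulas derived in the paper; the coupling of the two bounds through generalized interval arithmetic is not reproduced, and the ±5% interval radius, number of states, and feature dimension are assumptions.

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_interval_hmm(lower_obs, upper_obs, n_states=3, n_iter=50):
    # Fit one HMM to the lower-bound features and one to the upper-bound
    # features; the resulting pair of parameter sets approximates the
    # interval-valued GHMM parameters.
    models = {}
    for name, obs in (("lower", lower_obs), ("upper", upper_obs)):
        model = GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=n_iter)
        model.fit(obs)                     # Baum-Welch / EM re-estimation
        models[name] = model
    return models

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centre = rng.normal(size=(500, 8))     # e.g. wavelet RMS feature vectors
    radius = 0.05 * np.abs(centre)         # assumed +/-5% epistemic interval
    models = train_interval_hmm(centre - radius, centre + radius)
    for name, model in models.items():
        print(name, "log-likelihood:", model.score(centre))

Because each EM pass monotonically increases the log-likelihood of its own bound sequence, the two fitted models converge to local maxima, mirroring the convergence argument for the GHMM training process.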
ACKNOWLEDGEMENTS 
This research is supported by the National Key Basic Research Program of China (973 Program, Grant No. 2011CB706803), the Natural Science Foundation of China (Grant No. 51175208), and the Natural Science Foundation of China (Grant No. 51075161).
REFERENCES 
Jensen, J. L. W. V., 1906. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Mathematica, 30: 175–193.
Rabiner, L. R., 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2): 257–286.
Kaucher, E., 1980. Interval analysis in the extended interval space IR. Computing Supplement, 2: 33–49.
Wang, Y., 2011. Multiscale uncertainty quantification based on a generalized hidden Markov model. ASME Journal of Mechanical Design, 3: 1–10.
Baum, L. E., Petrie, T., Soules, G., Weiss, N., 1970. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Annals of Mathematical Statistics, 41(1): 164–171.
Li, X. L., Parizeau, M., Plamondon, R., 2000. Training hidden Markov models with multiple observations – a combinatorial method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(4): 371–377.