and the values of S and F are found by solving
$$\frac{\partial}{\partial S}J(S, F) = 0 \quad \text{and} \quad \frac{\partial}{\partial F}J(S, F) = 0 \qquad (25)$$
at each frequency and ensuring that the result is a global maximum. If F is fixed at 1, then any solution $S^*$ which solves (25), so that
$$\frac{dJ(S^*, 1)}{dS} = 0 \qquad (26)$$
will also solve the optimisation using
$$J\!\left(\frac{S}{F}\right) = \frac{F}{SG_0}\int_A \left|\frac{p-1}{\alpha}\right| \left|p - \frac{F}{SG_0}\right| \delta p \qquad (27)$$
since only a substitution of variables has been applied.
S and F can assume any values as long as $\frac{S}{F} = S^*$.
If F is now fixed at $\hat{F}$, the corresponding value of $\hat{S}$ is $\hat{F}S^*$. This must also solve the optimisation using (24), as this is simply (27) multiplied by a constant. Therefore it is enough to solve the optimisation using (24) with F = 1 and simply substitute $\hat{S} = \hat{F}S^*$ to obtain the solution for every other value of F. The resulting cost is $\hat{J} = \hat{F}J^*$, where $J^*$ is the cost using F = 1. In order to incorporate other considerations in
the cost, let us consider the change in input from one
trial to the next. Since this is given by
$$u_{k+1} - u_k = (F - 1)u_k + Se_k \qquad (28)$$
which, with the repeated application of (3), becomes
$$u_{k+1} - u_k = S(F - SG_0U)^k e_0 + S(1-F)\sum_{i=0}^{k-1}(F - SG_0U)^i y_d + (F - 1)u_k \qquad (29)$$
and the residual error is given by
$$e_k = (F - SG_0U)^k e_0 + (1-F)\sum_{i=0}^{k-1}(F - SG_0U)^i y_d \qquad (30)$$
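The closed form (30) follows from repeatedly applying the trial-to-trial error recursion $e_{k+1} = (F - SG_0U)e_k + (1-F)y_d$. At a single frequency every quantity reduces to a complex scalar, so the two can be compared numerically; the values below are illustrative and not taken from the plant.

```python
# Numerical check of the residual-error expression (30) at one frequency.
# All symbols are complex scalars here; the numbers are illustrative only.
F = 0.9            # learning gain F (assumed value)
SG0U = 0.3 + 0.1j  # combined term S*G0*U at this frequency (assumed value)
e0 = 1.0 + 0.5j    # initial error component
yd = 0.8 - 0.2j    # reference component at this frequency

beta = F - SG0U    # contraction factor appearing in (29)-(30)

# Trial-to-trial recursion: e_{k+1} = (F - S G0 U) e_k + (1 - F) y_d
e = e0
K = 20
for _ in range(K):
    e = beta * e + (1 - F) * yd

# Closed form (30): e_K = beta^K e_0 + (1 - F) sum_{i=0}^{K-1} beta^i y_d
closed = beta**K * e0 + (1 - F) * sum(beta**i for i in range(K)) * yd

print(abs(e - closed))  # agreement to numerical precision
```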
It can be seen that reducing F from 1 therefore has the effect of reducing the cost (24), with the compromise of a likely increase in residual error and input change. To tackle these effects directly for an arbitrary $e_0$ and $y_d$, it is required that each term in (29) and (30) has a small modulus at each frequency considered. Assuming that (4) is satisfied, it remains to reduce S and also the bound, λ, on the remaining term, which is given by
$$\left|(1-F)\sum_{i=0}^{k-1}(F - SG_0U)^i y_d\right| < \frac{1-F}{1-(F - SG_0U)} = \lambda \qquad (31)$$
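Since (4) ensures $|F - SG_0U| < 1$, the partial geometric sums underlying (31) remain below their limiting value. A quick scalar check (values illustrative, with moduli taken explicitly):

```python
# Check that |(1 - F) * sum_{i=0}^{k-1} beta^i| stays below the geometric
# series limit (1 - F)/(1 - |beta|) for |beta| < 1. Values are illustrative.
F = 0.9
beta = 0.3 + 0.1j                 # stands for F - S*G0*U at one frequency
lam = (1 - F) / (1 - abs(beta))   # limiting bound on the summed term

partial = 0.0 + 0.0j
for i in range(50):
    partial += beta**i
    assert abs((1 - F) * partial) < lam  # holds at every trial k

print(abs((1 - F) * partial), lam)
```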
This can be achieved using the mapping technique that has already been described. Using this, we find that at a point in the uncertainty space, p, the bound λ equals
$$\frac{\frac{1-F}{SG_0}}{p + \frac{1-F}{SG_0}} = \frac{1}{\frac{SG_0}{1-F}p + 1} \qquad (32)$$
Therefore the functions $Q(S^{-1})$ and $R\!\left(\frac{SG_0}{1-F}p + 1\right)$ can be incorporated into the cost to limit the upper bound of the residual error and the change in successive inputs. Since these are dependent on the plant and the choice of $y_d$, they will be neglected in order to maintain focus on the general case.
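The rearrangement in (32) is a one-line algebraic identity; a minimal numerical confirmation with illustrative complex values for $SG_0$ and $p$:

```python
# Verify the identity in (32): (1-F)/(S*G0*p + 1 - F) equals
# 1/((S*G0/(1-F))*p + 1). SG0 and p below are illustrative scalars only.
F = 0.9
SG0 = 0.004 + 0.002j   # S*G0 at one frequency (assumed value)
p = 50.0 + 20.0j       # a point in the uncertainty region A

lhs = (1 - F) / (SG0 * p + 1 - F)
rhs = 1 / ((SG0 / (1 - F)) * p + 1)
print(abs(lhs - rhs))  # agreement to numerical precision
```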
3.1 Experimental Test Facility
The experimental nonminimum phase testbed has previously been used to evaluate a number of Repetitive Control and ILC schemes (see (Freeman et al., 2005) for details) and consists of a rotary mechanical system of inertias, dampers, torsional springs, a timing belt, pulleys and gears. An encoder records the output shaft position and a standard squirrel cage induction motor drives the load. The system has been modelled using an LMS algorithm to fit a linear model to a large number of frequency response test results. The resulting continuous time plant transfer function has thus been established as
$$G_0(s) = \frac{1.202(4 - s)}{s(s + 9)(s^2 + 12s + 56.25)} \qquad (33)$$
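The model (33) can be evaluated directly on the imaginary axis; the following sketch also makes the nonminimum phase character explicit through the right-half-plane zero at $s = 4$:

```python
import math

# Frequency response of the identified plant model (33).
def G0(s: complex) -> complex:
    return 1.202 * (4 - s) / (s * (s + 9) * (s**2 + 12 * s + 56.25))

# The numerator 1.202*(4 - s) vanishes at s = 4: a right-half-plane
# (nonminimum phase) zero.
print(G0(4 + 0j))  # 0

# Gain and phase at 1 Hz
g = G0(1j * 2 * math.pi)
print(abs(g), math.degrees(math.atan2(g.imag, g.real)))
```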
A PID loop around the plant acts as a prestabiliser. The PID gains used are $K_p = 137$, $K_i = 5$ and $K_d = 3$. The resulting closed-loop system constitutes the system to be controlled.
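Assuming a standard unity-feedback loop with an ideal PID controller $C(s) = K_p + K_i/s + K_d s$ (the exact loop configuration is not spelled out above), the prestabilised closed-loop response can be sketched as:

```python
import math

# Closed-loop frequency response with the PID prestabiliser around (33).
# A unity-feedback loop with an ideal PID is assumed here.
Kp, Ki, Kd = 137.0, 5.0, 3.0

def G0(s: complex) -> complex:
    return 1.202 * (4 - s) / (s * (s + 9) * (s**2 + 12 * s + 56.25))

def C(s: complex) -> complex:
    return Kp + Ki / s + Kd * s

def closed_loop(s: complex) -> complex:
    L = C(s) * G0(s)          # loop gain
    return L / (1 + L)        # complementary sensitivity

# Low-frequency behaviour: the plant's free integrator and the PID's
# integral action drive the closed-loop gain to unity.
print(abs(closed_loop(1j * 1e-6)))  # approximately 1
```

The closed-loop response computed this way is what the ILC design above treats as the plant $G_0$ under uncertainty.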
4 EXPERIMENTAL RESULTS
In polar coordinates let us define the region
$$A = \{re^{j\theta} \mid \theta \in [\theta_m, \theta_M],\ r \in [r_m, r_M]\} \qquad (34)$$
in the uncertainty space over which the probability function is valid. The parameters T = 6 and n = 1024 are chosen for convenience to give $f_s = 1024/6$. Let us solve the optimisation using (24) with
F = 1 and α = 0.2. Figure 4 shows a Bode plot of the
resulting S vectors when using $\theta_m = -\frac{\pi}{6}$, $\theta_M = \frac{\pi}{6}$, $r_m = 0$ and
$$r_M = \left|1 + \frac{T\omega}{2\pi\lambda G_0}\right|^2$$
with λ = 4, 6, 8, 10.
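Reading the bound as $r_M = |1 + T\omega/(2\pi\lambda G_0)|^2$ (a reconstruction from the text), the shrinkage of $r_M$ with increasing λ can be illustrated at a sample frequency:

```python
import math

# Illustrate how the radius bound r_M shrinks as lambda grows, using
# r_M = |1 + T*w/(2*pi*lam*G0(jw))|**2 with the plant model (33)
# evaluated at a sample frequency of 1 Hz (illustrative choice).
T = 6.0

def G0(s: complex) -> complex:
    return 1.202 * (4 - s) / (s * (s + 9) * (s**2 + 12 * s + 56.25))

w = 2 * math.pi * 1.0          # 1 Hz
g = G0(1j * w)

r_M = [abs(1 + T * w / (2 * math.pi * lam * g))**2 for lam in (4, 6, 8, 10)]
print(r_M)
assert all(a > b for a, b in zip(r_M, r_M[1:]))  # r_M decreases with lambda
```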
These have been chosen using previous experience of the plant uncertainty. As λ increases, $r_M$ decreases, which, in turn, increases the magnitude of S at each frequency, as the plant effectively becomes less uncertain as A diminishes in size. Since $\theta_m = -\theta_M$,
ICINCO 2005  INTELLIGENT CONTROL SYSTEMS AND OPTIMIZATION