FILTERING AND COMPRESSION OF STOCHASTIC SIGNALS
UNDER CONSTRAINT OF VARIABLE FINITE MEMORY
Anatoli Torokhti and Stan Miklavcic
University of South Australia, Adelaide, Australia
Keywords:
Optimal filtering, Wiener filter.
Abstract:
We study a new technique for optimal data compression subject to conditions of causality and different types
of memory. The technique is based on the assumption that certain covariance matrices formed from observed
data, reference signal and compressed signal are known or can be estimated. In particular, such information
can be obtained from the known solution of the associated problem with no constraints related to causality and
memory. This allows us to consider two separate problems related to compression and de-compression subject
to those constraints. Their solutions are given and the analysis of the associated errors is provided.
1 INTRODUCTION
A study of data compression methods is motivated
by the necessity to reduce expenditures incurred with
the transmission, processing and storage of large data
arrays. While these topics have been intensively stud-
ied (see e.g. (S. Friedland, 2006), (Jolliffe, 1986),
(Hua and Nikpour, 1999), (Hua and Liu, 1998),
(A. Torokhti, 2001), (Torokhti and Howlett, 2007),
(T. Zhang, 2001)), a number of related fundamen-
tal questions are still open. One of them concerns
specific restrictions associated with different types of
causality and memory.
First Motivation: Causality and Memory. Data compression techniques mainly consist of three operations: compression itself, de-noising and de-compression (or reconstruction) of the compressed data. Each operation is implemented by a special filter. In reality, the value of the output of such a filter at time $t_k$ is determined from a 'fragment' of its input defined at times $t_k, t_{k-1}, \ldots, t_{k-q}$. In other words, in practice these operations are subject to the conditions of causality and memory.
Our first motivation comes from real-time signal processing. This implies that the filters we propose should be causal with variable finite memory.
Second Motivation: Reformulation of the Problem. Let $(\Omega, \Sigma, \mu)$ be a probability space, where $\Omega = \{\omega\}$ is the set of outcomes, $\Sigma$ a $\sigma$-field of measurable subsets of $\Omega$ and $\mu : \Sigma \to [0, 1]$ an associated probability measure on $\Sigma$ with $\mu(\Omega) = 1$.
In an informal way, the data compression problem we consider can be expressed as follows. Let $y \in L^2(\Omega, \mathbb{R}^n)$ be observable data and $x \in L^2(\Omega, \mathbb{R}^m)$ be a reference signal that is to be estimated from $y$ in such a way that (a) the data $y$ should be compressed to a 'shorter' vector $z \in L^2(\Omega, \mathbb{R}^r)$¹ with $r < \min\{m, n\}$, and (b) $z$ should be de-compressed (reconstructed) to a signal $\tilde{x} \in L^2(\Omega, \mathbb{R}^m)$ that is 'close' to $x$ in some appropriate sense. Both operations should be causal and have variable finite memory. In this paper, the term 'close' is used with respect to the minimum of the norm (2) of the difference between $x$ and $\tilde{x}$.
¹ Components of $z$ are often called principal components (Jolliffe, 1986).
The problem can be formulated in several alter-
nate ways.
The first way is as follows. Let $B : L^2(\Omega, \mathbb{R}^n) \to L^2(\Omega, \mathbb{R}^r)$ signify compression, so that $z = B(y)$, and let $A : L^2(\Omega, \mathbb{R}^r) \to L^2(\Omega, \mathbb{R}^m)$ designate data de-compression, i.e., $\tilde{x} = A(z)$. We suppose that $B$ and $A$ are linear operators defined by the relationships

$[B(y)](\omega) = B[y(\omega)]$ and $[A(z)](\omega) = A[z(\omega)]$,   (1)

where $B \in \mathbb{R}^{r \times n}$ and $A \in \mathbb{R}^{m \times r}$. In the remainder of
this paper we shall use the same symbol to represent
both the linear operator acting on a random vector and
its associated matrix.
We define the norm to be

$\|x\|^2 = \int_\Omega \|x(\omega)\|_2^2 \, d\mu(\omega),$   (2)

where $\|x(\omega)\|_2$ is the Euclidean norm of $x(\omega)$. Let us denote by $J(A, B)$ the norm of the difference between $x$ and $\tilde{x}$ constructed by $A$ and $B$:

$J(A, B) = \|x - (AB)(y)\|^2.$   (3)
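As a purely numerical illustration (not part of the original development), the quantities (2) and (3) can be approximated from samples by treating the columns of an array as realizations $x(\omega_k)$; the function names below are our own.

```python
import numpy as np

def mean_sq_norm(e):
    """Empirical counterpart of (2): average of ||e(omega)||_2^2 over realizations.
    Columns of e are realizations e(omega_k)."""
    return float(np.mean(np.sum(e**2, axis=0)))

def error_J(A, B, x, y):
    """Empirical counterpart of (3): ||x - (A B)(y)||^2 with A, B given as matrices.
    Columns of x and y are paired realizations."""
    return mean_sq_norm(x - A @ (B @ y))
```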
The problem is to find $B^0 : L^2(\Omega, \mathbb{R}^n) \to L^2(\Omega, \mathbb{R}^r)$ and $A^0 : L^2(\Omega, \mathbb{R}^r) \to L^2(\Omega, \mathbb{R}^m)$ such that

$J(A^0, B^0) = \min_{A, B} J(A, B)$   (4)

subject to conditions of causality and variable finite memory for $A$ and $B$. The problem involves two unknowns, $A$ and $B$.
A second way to formulate the problem, which avoids the difficulty associated with the two unknowns, is as follows. Let $F : L^2(\Omega, \mathbb{R}^n) \to L^2(\Omega, \mathbb{R}^m)$ be a linear operator defined by

$[F(y)](\omega) = F[y(\omega)],$   (5)

where $F \in \mathbb{R}^{m \times n}$. Let $\operatorname{rank} F = r$ and $J(F) = \|x - F(y)\|^2$. Find $F^0 : L^2(\Omega, \mathbb{R}^n) \to L^2(\Omega, \mathbb{R}^m)$ such that

$J(F^0) = \min_F J(F)$   (6)

subject to

$\operatorname{rank} F \leq \min\{m, n\}$   (7)

and conditions of causality and variable finite memory for $F$. Unlike (4), the problem (6)–(7) has only one unknown.
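For orientation, a simple way to obtain a rank-$r$ compressor/de-compressor pair without causality or memory constraints is to truncate the SVD of a full Wiener-type matrix estimated from samples. The sketch below is only a simplified stand-in for the exact solution of (6)–(7) given in (Torokhti and Howlett, 2007); the plain SVD truncation and the function name are our assumptions, but the compressed signal $z = B(y)$ it produces can be used to estimate the covariances assumed known in Section 2.

```python
import numpy as np

def reduced_rank_estimate(x, y, r):
    """Rank-r linear estimate of x from y (columns of x, y are paired realizations).
    Simplified, unconstrained stand-in: truncate the SVD of the full Wiener-type
    matrix F = E_xy E_yy^dagger; not the exact minimizer of (6)-(7)."""
    N = x.shape[1]
    Exy = x @ y.T / N
    Eyy = y @ y.T / N
    F = Exy @ np.linalg.pinv(Eyy)            # full (unconstrained) matrix
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    A = U[:, :r] * s[:r]                     # m x r de-compressor
    B = Vt[:r, :]                            # r x n compressor
    z = B @ y                                # compressed signal (principal components)
    return A, B, z
```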
2 STATEMENT OF THE
PROBLEM
The basic idea of our approach is as follows.
Let $x \in L^2(\Omega, \mathbb{R}^m)$, $y \in L^2(\Omega, \mathbb{R}^n)$ and $z \in L^2(\Omega, \mathbb{R}^r)$, and let $A$ and $B$ be defined as in (1). Here, $z$ is a compressed version of $x$. We assume that information about the vector $z$ in the form of associated covariance matrices can be obtained, in particular, from the known solution (Torokhti and Howlett, 2007) of problem (6)–(7) with no constraints associated with causality and memory.
In this paper, the data compression problem subject to conditions of causality and memory is stated in the form of two separate problems, (8) and (10), formulated below.
We use the following notation: $\mathcal{M}(r, n, \eta^B)$ is the set of causal $r \times n$ matrices $B$ with a so-called complete variable finite memory $\eta^B$. The notation $\mathcal{M}(m, r, \eta^A)$ is similar.
Consider $J_1(B) = \|z - B(y)\|^2$. Let $B^0$ be such that

$J_1(B^0) = \min_B J_1(B)$ subject to $B \in \mathcal{M}(r, n, \eta^B)$.   (8)

We write $z^0 = B^0(y)$. Next, let

$J_2(A) = \|x - A(z^0)\|^2$   (9)

and let $A^0$ be such that

$J_2(A^0) = \min_A J_2(A)$ subject to $A \in \mathcal{M}(m, r, \eta^A)$.   (10)

We denote $x^0 = A^0(z^0)$.
The problem considered in this paper is to find operators $B^0$ and $A^0$ that satisfy the minimization criteria (8) and (10), respectively.
The major differences between the above statement of the problem and the statements considered in the papers cited above are as follows.
First, $A$ and $B$ should be causal with variable finite memory.
Second, it is assumed that certain covariance matrices formed from $x$, $y$ and $z$ are known or can be estimated. In particular, such information can be obtained from the known solution (Torokhti and Howlett, 2007) of problem (6)–(7) with no constraints associated with causality and memory. We note that this assumption is not overly restrictive in comparison with the assumptions used in the related methods (Hua and Nikpour, 1999)–(Torokhti and Howlett, 2007).
Third, and as a consequence, we represent the initial problem as a concatenation of two new separate problems, (8) and (10).
3 MAIN RESULTS
Let $\tau_1 < \tau_2 < \cdots < \tau_n$ be time instants and $\alpha, \beta, \vartheta : \mathbb{R} \to L^2(\Omega, \mathbb{R})$ be continuous functions. Suppose $\alpha_k = \alpha(\tau_k)$, $\beta_k = \beta(\tau_k)$ and $\vartheta_k = \vartheta(\tau_k)$ are real-valued random variables having finite second moments. We write $x = [\alpha_1, \alpha_2, \ldots, \alpha_m]^T$, $y = [\beta_1, \beta_2, \ldots, \beta_n]^T$ and $z = [\vartheta_1, \ldots, \vartheta_r]^T$.
Let $\tilde{z}$ be a compressed form of the data $y$ defined by $\tilde{z} = B(y)$ with $\tilde{z} = [\tilde{\vartheta}_1, \ldots, \tilde{\vartheta}_r]^T$, and let $\tilde{x}$ be a de-compression of $\tilde{z}$ defined by $\tilde{x} = A(\tilde{z})$ with $\tilde{x} = [\tilde{\alpha}_1, \ldots, \tilde{\alpha}_m]^T$.
In many applications², to obtain $\tilde{\vartheta}_k$ for $k = 1, \ldots, r$, it is necessary for $B$ to use only a limited number of input components, $\eta^B_k$, $k = 1, \ldots, r$. The number $\eta^B_k$ of such input components is here called the $k$th local memory of $B$.
To define the notion of memory for the compressor $B$, we use parameters $p$ and $g$, which are positive integers such that $1 \leq p \leq n$ and $n - r + 2 \leq g \leq n$.
² Examples include computer medical diagnostics (Gimeno, 1987) and problems of bio-informatics (H. Kim, 2005).
Definition 1. The vector $\eta^B = [\eta^B_1, \ldots, \eta^B_r]^T \in \mathbb{R}^r$ is called a variable memory of the compressor $B$. In particular, $\eta^B$ is called a complete variable memory if $\eta^B_1 = g$ and $\eta^B_k = n$ when $k = n - g + 1, \ldots, n$. Here, $p$ relates to the last possible nonzero entry in the bottom row of $B$ and $g$ relates to the last possible nonzero entry in the first row.
The notation $\eta^A = [\eta^A_1, \ldots, \eta^A_m]^T \in \mathbb{R}^m$ has a similar meaning for the de-compressor $A$, i.e., $\eta^A$ is a variable memory of the de-compressor $A$. Here, $\eta^A_j$ is the $j$th local memory of $A$.
The parameters $q$ and $s$, which are positive integers such that $1 \leq q \leq r$ and $2 \leq s \leq m$, are used below to define two types of memory for $A$.
Definition 2. The vector $\eta^A$ is called a complete variable memory of the de-compressor $A$ if $\eta^A_1 = q$ and $\eta^A_j = r$ when $j = s + r - 1, \ldots, m$. Here, $q$ relates to the first possible nonzero entry in the last column of $A$ and $s$ relates to the first possible nonzero entry in the first column.
The memory constraints described above imply that certain elements of the matrices $B = \{b_{ij}\}_{i,j=1}^{r,n}$ and $A = \{a_{ij}\}_{i,j=1}^{m,r}$ must be set equal to zero. In this regard, for matrix $B$ with $r \leq p \leq n$, we require that

$b_{ij} = 0$ if $j = p - r + i + 1, \ldots, n$, for $p = r, \ldots, n - 1$, $i = 1, \ldots, r$, and for $p = n$, $i = 1, \ldots, r - 1$,

and, for $1 \leq p \leq r - 1$, it is required that

$b_{ij} = 0$ if $i = 1, \ldots, r - p$, $j = 1, \ldots, n$, and if $i = r - p + 1, \ldots, r$, $j = i - r + p + 1, \ldots, n$.

For matrix $A$ with $1 \leq q \leq r$, we require that

$a_{ij} = 0$   (11)

if $j = q + i, \ldots, r$ for $q = 1, \ldots, r - 1$, $i = 1, \ldots, r - q$, and, for $2 \leq s \leq m$, it is required that

$a_{ij} = 0$ if $j = s + i, \ldots, r$ for $s = 1, \ldots, m$, $i = 1, \ldots, s + r - 1$.

The above conditions imply the following definitions.
Definition 3. A matrix $B$ satisfying the above constraints is said to be a causal operator with the complete variable memory $\eta^B = [g, g+1, \ldots, n]^T$. Here, $\eta^B_k = n$ when $k = n - g + 1, \ldots, n$. The set of such matrices is denoted by $\mathcal{M}_C(r, n, \eta^B)$.

Definition 4. A matrix $A$ satisfying the constraint (11) is said to be a causal operator with the complete variable memory $\eta^A = [r - q + 1, \ldots, r]^T$. Here, $\eta^A_j = r$ when $j = q, \ldots, m$. The set of such matrices is denoted by $\mathcal{M}_C(m, r, \eta^A)$.
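To make the zero patterns above concrete, the sketch below builds the support (non-zero) mask of a compressor $B$ with complete variable memory under our reading of Definitions 1 and 3: row $k$, for $k = 1, \ldots, n_b$ with $n_b = n - g + 1$, may use only the first $g + k - 1$ input components, and the remaining $r - n_b$ rows may use all $n$. This reading matches the block partition of $B$ with lower triangular $L_B$ used in Section 3, but the helper itself is our own illustration, not a construction taken from the paper.

```python
import numpy as np

def complete_memory_mask(r, n, g):
    """Support mask of a causal compressor with complete variable memory
    eta^B = [g, g+1, ..., n]^T (our reading): row k <= n_b may use inputs
    1..g+k-1, and the remaining rows may use all n inputs."""
    n_b = n - g + 1
    mask = np.zeros((r, n), dtype=bool)
    for k in range(1, r + 1):
        last = g + k - 1 if k <= n_b else n
        mask[k - 1, :last] = True
    return mask

# With r = 8, n = 16, g = 12 (the values used in Section 4), rows 1..5 of B may
# use 12, 13, 14, 15, 16 inputs respectively, and rows 6..8 may use all 16.
```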
3.1 Solution of Problems (8) and (10)
To proceed any further we shall require some more notation. Let

$\langle \alpha_i, \beta_j \rangle = \int_\Omega \alpha_i(\omega) \beta_j(\omega) \, d\mu(\omega) < \infty,$   (12)

$E_{xy} = \{\langle \alpha_i, \beta_j \rangle\}_{i,j=1}^{m,n} \in \mathbb{R}^{m \times n}$,

$y_1 = [\beta_1, \ldots, \beta_{g-1}]^T$, $y_2 = [\beta_g, \ldots, \beta_n]^T$,   (13)

$z_1 = [\vartheta_1, \ldots, \vartheta_{g-1}]^T$ and $z_2 = [\vartheta_g, \ldots, \vartheta_n]^T$.   (14)

The pseudo-inverse (Golub and Loan, 1996) of any matrix $M$ is denoted by $M^\dagger$. The symbol $O$ designates the zero matrix.
Lemma 1. (Torokhti and Howlett, 2007) If we define $w_1 = y_1$ and $w_2 = y_2 - P_y y_1$, where

$P_y = E_{y_2 y_1} E_{y_1 y_1}^\dagger + D_y (I - E_{y_1 y_1} E_{y_1 y_1}^\dagger)$   (15)

with $D_y$ an arbitrary matrix, then $w_1$ and $w_2$ are mutually orthogonal random vectors.
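A direct numerical check of Lemma 1 is straightforward once the covariances are replaced by sample estimates. The sketch below (our own illustration, with the arbitrary matrix $D_y$ taken as zero and function names of our choosing) forms $w_1$ and $w_2$ and can be used to verify that the sample cross-covariance $E_{w_2 w_1}$ is negligible.

```python
import numpy as np

def orthogonalize(y1, y2):
    """Sketch of Lemma 1 with D_y = 0: w1 = y1, w2 = y2 - P_y y1, where
    P_y = E_{y2 y1} E_{y1 y1}^dagger and the covariances are sample estimates
    (columns of y1, y2 are paired realizations)."""
    N = y1.shape[1]
    E11 = y1 @ y1.T / N
    E21 = y2 @ y1.T / N
    Py = E21 @ np.linalg.pinv(E11)
    return y1, y2 - Py @ y1, Py

# Quick orthogonality check:
# rng = np.random.default_rng(0)
# y = rng.standard_normal((6, 10000))
# w1, w2, _ = orthogonalize(y[:3], y[3:])
# print(np.abs(w2 @ w1.T / 10000).max())   # close to 0
```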
Let us first consider problem (8) when $B$ has the complete variable memory $\eta^B = [g, g+1, \ldots, n]^T$ (see Definition 3).
Let us partition $B$ into four blocks $K_B$, $L_B$, $S_{B1}$ and $S_{B2}$ so that

$B = \begin{bmatrix} K_B & L_B \\ S_{B1} & S_{B2} \end{bmatrix},$

where $K_B = \{k_{ij}\} \in \mathbb{R}^{n_b \times (g-1)}$ is a rectangular matrix, $L_B = \{\ell_{ij}\} \in \mathbb{R}^{n_b \times n_b}$ is a lower triangular matrix, $S_{B1} = \{s^{(1)}_{ij}\} \in \mathbb{R}^{(r - n_b) \times (g-1)}$ and $S_{B2} = \{s^{(2)}_{kl}\} \in \mathbb{R}^{(r - n_b) \times n_b}$ are rectangular matrices, and $n_b = n - g + 1$.
We have

$B(y) = \begin{bmatrix} T_B(w_1) + L_B(w_2) \\ S_B(w_1) + S_{B2}(w_2) \end{bmatrix},$

where $T_B = K_B + L_B P_y$ and $S_B = S_{B1} + S_{B2} P_y$. Then

$J_1(B) = J^{(1)}(T_B, L_B) + J^{(2)}(S_B, S_{B2}),$   (16)

where $J^{(1)}(T_B, L_B) = \|z_1 - [T_B(w_1) + L_B(w_2)]\|^2$ and $J^{(2)}(S_B, S_{B2}) = \|z_2 - [S_B(w_1) + S_{B2}(w_2)]\|^2$. By analogy with Lemma 37 in (Torokhti and Howlett, 2007),

$\min_{B \in \mathcal{M}(r, n, \eta^B)} J_1(B) = \min_{T_B, L_B} J^{(1)}(T_B, L_B) + \min_{S_B, S_{B2}} J^{(2)}(S_B, S_{B2}).$
Therefore, problem (8) is reduced to finding matrices $T_B^0$, $L_B^0$, $S_B^0$ and $S_{B2}^0$ such that

$J^{(1)}(T_B^0, L_B^0) = \min_{T_B, L_B} J^{(1)}(T_B, L_B)$   (17)

and

$J^{(2)}(S_B^0, S_{B2}^0) = \min_{S_B, S_{B2}} J^{(2)}(S_B, S_{B2}).$   (18)
Taking into account the orthogonality of the vectors $w_1$ and $w_2$ and working in analogy with the argument on pp. 348–352 in (Torokhti and Howlett, 2007), it follows that the matrices $S_B^0$ and $S_{B2}^0$ are given by

$S_B^0 = E_{z_2 w_1} E_{w_1 w_1}^\dagger + H_B (I - E_{w_1 w_1} E_{w_1 w_1}^\dagger)$   (19)

and

$S_{B2}^0 = E_{z_2 w_2} E_{w_2 w_2}^\dagger + H_{B2} (I - E_{w_2 w_2} E_{w_2 w_2}^\dagger),$   (20)

where $H_B$ and $H_{B2}$ are arbitrary matrices.
Next, to find $T_B^0$ and $L_B^0$ we use the following notation.
For $r = 1, 2, \ldots$, let $\rho$ be the rank of the matrix $E_{w_2 w_2} \in \mathbb{R}^{n_2 \times n_2}$ with $n_b = n - g + 1$, and let

$E_{w_2 w_2}^{1/2} = Q_{w,\rho} R_{w,\rho}$   (21)

be the QR-decomposition of $E_{w_2 w_2}^{1/2}$, where $Q_{w,\rho} \in \mathbb{R}^{n_2 \times \rho}$ with $Q_{w,\rho}^T Q_{w,\rho} = I$, and $R_{w,\rho} \in \mathbb{R}^{\rho \times n_2}$ is upper trapezoidal with rank $\rho$. We write $G_{w,\rho} = R_{w,\rho}^T$ and use the notation $G_{w,\rho} = [g_1, \ldots, g_\rho] \in \mathbb{R}^{n_2 \times \rho}$, where $g_j \in \mathbb{R}^{n_2}$ denotes the $j$th column of $G_{w,\rho}$. We also write $G_{w,s} = [g_1, \ldots, g_s] \in \mathbb{R}^{n_2 \times s}$ for $s \leq \rho$ to denote the matrix consisting of the first $s$ columns of $G_{w,\rho}$. For simplicity, let us denote $G_s := G_{w,s}$. Next, let $e_1^T = [1, 0, 0, 0, \ldots]$, $e_2^T = [0, 1, 0, 0, \ldots]$, $e_3^T = [0, 0, 1, 0, \ldots]$, etc. denote the unit row vectors, irrespective of the dimension of the space.
Finally, any square matrix $M$ can be written as $M = M_\triangle + M_\nabla$, where $M_\triangle$ is lower triangular and $M_\nabla$ is strictly upper triangular. We write $\|\cdot\|_F$ for the Frobenius norm.
Theorem 1. Let $B \in \mathcal{M}_C(r, n, \eta^B)$, i.e., the compressor $B$ is causal and has the complete variable memory $\eta^B = [g, g+1, \ldots, n]^T$. Then the solution to problem (8) is provided by the matrix

$B^0 = \begin{bmatrix} K_B^0 & L_B^0 \\ S_{B1}^0 & S_{B2}^0 \end{bmatrix},$

where the blocks $K_B^0 \in \mathbb{R}^{n_b \times (g-1)}$, $S_{B1}^0 \in \mathbb{R}^{(r - n_b) \times (g-1)}$ and $S_{B2}^0 \in \mathbb{R}^{(r - n_b) \times n_b}$ are rectangular, and the block $L_B^0 \in \mathbb{R}^{n_b \times n_b}$ is lower triangular. These blocks are given as follows. The block $K_B^0$ is given by

$K_B^0 = T_B^0 - L_B^0 P_y$   (22)

with

$T_B^0 = E_{z_1 w_1} E_{w_1 w_1}^\dagger + N_{B1} (I - E_{w_1 w_1} E_{w_1 w_1}^\dagger),$   (23)

where $N_{B1}$ is an arbitrary matrix. The block $L_B^0 = \begin{bmatrix} \lambda_1^0 \\ \vdots \\ \lambda_{n_b}^0 \end{bmatrix}$ is defined, for each $s = 1, 2, \ldots, n_2$, by its rows

$\lambda_s^0 = e_s^T E_{z_1 w_2} E_{w_2 w_2}^\dagger G_s G_s^\dagger + f_s^T (I - G_s G_s^\dagger)$   (24)

with $f_s^T \in \mathbb{R}^{1 \times n_2}$ arbitrary. The blocks $S_{B1}^0$ and $S_{B2}^0$ are given by

$S_{B1}^0 = S_B^0 - S_{B2}^0 P_y$   (25)

and (20), respectively. In (25), $S_B^0$ is given by (19). The error associated with the compressor $B^0$ is given by

$\|z - B^0 y\|^2 = \sum_{s=1}^{\rho} \sum_{j=s+1}^{n_2} |e_s^T E_{z_1 w_2} E_{w_2 w_2}^\dagger g_j|^2 + \sum_{j=1}^{2} \|E_{z_j z_j}^{1/2}\|_F^2 - \sum_{i=1}^{2} \sum_{j=1}^{2} \|E_{z_i w_j} E_{w_j w_j}^{\dagger 1/2}\|_F^2.$   (26)
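The blocks of $B^0$ can be assembled directly from sample covariances. The sketch below follows (19)–(25) with every arbitrary term ($H_B$, $H_{B2}$, $N_{B1}$, $f_s$) set to zero and the matrix square root computed by an eigendecomposition; the array shapes (columns as realizations, $z_1$ with $n_b$ components matching the top block) and the helper names are our assumptions, so treat it as an illustration of the formulas rather than a verified implementation.

```python
import numpy as np

def cov(a, b):
    """Sample covariance E_{ab} = E[a b^T] (columns are paired realizations)."""
    return a @ b.T / a.shape[1]

def compressor_B0(z1, z2, w1, w2, Py):
    """Blocks of B^0 in Theorem 1 with the arbitrary terms set to zero."""
    pinv = np.linalg.pinv
    n_b = w2.shape[0]
    # (19), (20): S_B^0 and S_B2^0
    S_B  = cov(z2, w1) @ pinv(cov(w1, w1))
    S_B2 = cov(z2, w2) @ pinv(cov(w2, w2))
    # (25): S_B1^0
    S_B1 = S_B - S_B2 @ Py
    # (23): T_B^0
    T_B = cov(z1, w1) @ pinv(cov(w1, w1))
    # (21): G = R^T from the QR factors of E_{w2 w2}^{1/2}
    Eww = cov(w2, w2)
    vals, vecs = np.linalg.eigh(Eww)
    Eww_half = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    G = np.linalg.qr(Eww_half)[1].T
    # (24): rows of L_B^0 (with f_s = 0)
    L_B = np.zeros((n_b, n_b))
    Ezw = cov(z1, w2) @ pinv(Eww)
    for s in range(1, n_b + 1):
        Gs = G[:, :s]
        L_B[s - 1, :] = Ezw[s - 1, :] @ Gs @ pinv(Gs)
    # (22): K_B^0
    K_B = T_B - L_B @ Py
    return np.block([[K_B, L_B], [S_B1, S_B2]])
```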
Let us now consider problem (10) when the de-compressor $A$ has the complete variable memory $\eta^A = [r - q + 1, \ldots, r]^T$ (see Definition 4).
In analogy with our partitioning of the matrix $B$, we partition the matrix $A$ into four blocks $K_A$, $L_A$, $S_{A1}$ and $S_{A2}$ so that

$A = \begin{bmatrix} K_A & L_A \\ S_{A1} & S_{A2} \end{bmatrix},$

where $K_A = \{k_{ij}\} \in \mathbb{R}^{q \times (r-q)}$ is a rectangular matrix, $L_A = \{\ell_{ij}\} \in \mathbb{R}^{q \times q}$ is a lower triangular matrix, and $S_{A1} = \{s^{(1)}_{ij}\} \in \mathbb{R}^{(m-q) \times (r-q)}$ and $S_{A2} = \{s^{(2)}_{kl}\} \in \mathbb{R}^{(m-q) \times q}$ are rectangular matrices.
Let us partition $z^0$ so that $z^0 = \begin{bmatrix} z_1^0 \\ z_2^0 \end{bmatrix}$ with $z_1^0 \in L^2(\Omega, \mathbb{R}^{r-q})$ and $z_2^0 \in L^2(\Omega, \mathbb{R}^q)$. We also write $x_1 = [\alpha_1, \ldots, \alpha_{r-q}]^T$ and $x_2 = [\alpha_{r-q+1}, \ldots, \alpha_m]^T$, and denote by $v_1 \in L^2(\Omega, \mathbb{R}^{r-q})$ and $v_2 \in L^2(\Omega, \mathbb{R}^q)$ the orthogonal vectors constructed according to Lemma 1 as

$v_1 = z_1^0$ and $v_2 = z_2^0 - P_z z_1^0$,

where $P_z = E_{z_2^0 z_1^0} E_{z_1^0 z_1^0}^\dagger + D_z (I - E_{z_1^0 z_1^0} E_{z_1^0 z_1^0}^\dagger)$ with $D_z$ an arbitrary matrix.
We write $G_{v,s} = [g_1, \ldots, g_s] \in \mathbb{R}^{q \times s}$, where $G_{v,s}$ is constructed from a QR-decomposition of $E_{v_2 v_2}^{1/2}$ in a manner similar to the construction of the matrix $G_{w,s}$. Furthermore, we shall define $G_s := G_{v,s}$.
Theorem 2. Let $A \in \mathcal{M}_C(m, r, \eta^A)$, i.e., the de-compressor $A$ is causal and has the complete variable memory $\eta^A = [r - q + 1, \ldots, r]^T$. Then the solution to problem (10) is provided by the matrix

$A^0 = \begin{bmatrix} K_A^0 & L_A^0 \\ S_{A1}^0 & S_{A2}^0 \end{bmatrix},$

where the blocks $K_A^0 \in \mathbb{R}^{q \times (r-q)}$, $S_{A1}^0 \in \mathbb{R}^{(m-q) \times (r-q)}$ and $S_{A2}^0 \in \mathbb{R}^{(m-q) \times q}$ are rectangular, and the block $L_A^0 \in \mathbb{R}^{q \times q}$ is lower triangular. These blocks are given as follows. The block $K_A^0$ is given by

$K_A^0 = T_A^0 - L_A^0 P_z$   (27)

with

$T_A^0 = E_{x_1 v_1} E_{v_1 v_1}^\dagger + N_{A1} (I - E_{v_1 v_1} E_{v_1 v_1}^\dagger),$   (28)

where $N_{A1}$ is an arbitrary matrix. The block $L_A^0 = \begin{bmatrix} \lambda_1^0 \\ \vdots \\ \lambda_q^0 \end{bmatrix}$ is defined, for each $s = 1, 2, \ldots, q$, by its rows

$\lambda_s^0 = e_s^T E_{x_1 v_2} E_{v_2 v_2}^\dagger G_s G_s^\dagger + f_s^T (I - G_s G_s^\dagger)$   (29)

with $f_s^T \in \mathbb{R}^{1 \times q}$ arbitrary. The blocks $S_{A1}^0$ and $S_{A2}^0$ are given by

$S_{A1}^0 = S_A^0 - S_{A2}^0 P_z, \quad S_{A2}^0 = E_{x_2 v_2} E_{v_2 v_2}^\dagger + H_{A2} (I - E_{v_2 v_2} E_{v_2 v_2}^\dagger),$   (30)

where

$S_A^0 = E_{x_2 v_1} E_{v_1 v_1}^\dagger + H_A (I - E_{v_1 v_1} E_{v_1 v_1}^\dagger)$   (31)

and $H_{A2}$ and $H_A$ are arbitrary matrices.
The error associated with the de-compressor $A^0$ is given by

$\|x - A^0 z^0\|^2 = \sum_{s=1}^{\rho} \sum_{j=s+1}^{q} |e_s^T E_{x_1 v_2} E_{v_2 v_2}^\dagger g_j|^2$   (32)

$+ \sum_{j=1}^{2} \|E_{x_j x_j}^{1/2}\|_F^2 - \sum_{i=1}^{2} \sum_{j=1}^{2} \|E_{x_i v_j} E_{v_j v_j}^{\dagger 1/2}\|_F^2.$   (33)
4 SIMULATIONS
The following simulations and numerical results illus-
trate the performance of the proposed approach.
Our filter $F^0 = A^0 B^0$ has been applied to compression, filtering and subsequent restoration of the reference signals given by the matrix $X \in \mathbb{R}^{256 \times 256}$. The matrix $X$ represents the data obtained from an aerial digital photograph of a plant³ presented in Fig. 1.
We divide $X$ into 128 sub-matrices $X_{ij} \in \mathbb{R}^{m \times q}$ with $i = 1, \ldots, 16$, $j = 1, \ldots, 8$, $m = 16$ and $q = 32$, so that $X = \{X_{ij}\}$. By assumption, the sub-matrix $X_{ij}$ is interpreted as $q$ realizations of a random vector $x \in L^2(\Omega, \mathbb{R}^m)$, with each column representing a realization. For each $i = 1, \ldots, 16$ and $j = 1, \ldots, 8$, observed data $Y_{ij}$ were modelled from $X_{ij}$ in the form

$Y_{ij} = X_{ij} \circ \mathrm{rand}^{(16,32)}_{(ij)}.$

Here, $\circ$ denotes the Hadamard product and $\mathrm{rand}^{(16,32)}_{(ij)}$ is a $16 \times 32$ matrix whose randomly-chosen elements are uniformly distributed in the interval $(0, 1)$.
³ The database is available at http://sipi.usc.edu/services/database/Database.html.
The proposed filter $F^0$ has been applied to each pair $\{X_{ij}, Y_{ij}\}$. Each pair $\{X_{ij}, Y_{ij}\}$ was processed by compressors and de-compressors with the complete variable memory. We denote by $B_C^0 = B^0$ and $A_C^0 = A^0$ such a compressor and de-compressor determined by Theorems 1 and 2, respectively, so that $B_C^0 \in \mathcal{M}_C(r, n, \eta^B)$ and $A_C^0 \in \mathcal{M}_C(m, r, \eta^A)$.
Figure 1: Illustration of simulation results. (a) Given reference signals. (b) Observed data. (c) Estimates of the reference signals by the filter $F_C^0$ with the complete variable memory.
Here $n = m = 16$, $r = 8$, and $\eta^B = \{\eta^B_k\}_{k=1}^{16}$ with

$\eta^B_k = \begin{cases} 12 + k - 1, & \text{if } k = 1, \ldots, 4, \\ 16, & \text{if } k = 5, \ldots, 16, \end{cases}$

and $\eta^A = \{\eta^A_j\}_{j=1}^{16}$ with

$\eta^A_j = \begin{cases} 6 + j - 1, & \text{if } j = 1, 2, \\ 8, & \text{if } j = 3, \ldots, 16. \end{cases}$

In this case, the optimal filter $F^0$ is denoted by $F_C^0$, so that $F_C^0 = A_C^0 B_C^0$. We write

$J_C^0 = \max_{ij} \|X_{ij} - F_C^0 Y_{ij}\|^2$

for the maximal error associated with the filter $F_C^0$ over all $i = 1, \ldots, 16$ and $j = 1, \ldots, 8$. The compression ratio was $c = 1/2$. We obtained $J_C^0 = 3.3123 \times 10^5$.
The results of the simulations are presented in Fig. 1 (a)–(c).
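A compact re-creation of the experimental loop (data model, block-wise filtering and the maximal error $J_C^0$) might look as follows; the image loading and the construction of the constrained filter are abstracted into a user-supplied `build_filter`, and the error is taken as a squared Frobenius norm, so the snippet only illustrates the bookkeeping described above, not the exact figures reported.

```python
import numpy as np

def simulation_error(X, build_filter, m=16, q=32, rng=None):
    """Split a 256 x 256 reference image X into 16 x 8 blocks X_ij of size m x q,
    form observations Y_ij = X_ij o rand (Hadamard product with uniform noise),
    apply the filter returned by build_filter(X_ij, Y_ij) and return the maximal
    squared Frobenius error max_ij ||X_ij - F Y_ij||^2."""
    rng = np.random.default_rng() if rng is None else rng
    worst = 0.0
    for i in range(X.shape[0] // m):          # 16 block rows
        for j in range(X.shape[1] // q):      # 8 block columns
            Xij = X[i*m:(i+1)*m, j*q:(j+1)*q]
            Yij = Xij * rng.uniform(0.0, 1.0, size=Xij.shape)   # Hadamard product
            F = build_filter(Xij, Yij)        # e.g. F = A0_C @ B0_C from Theorems 1 and 2
            worst = max(worst, float(np.sum((Xij - F @ Yij)**2)))
    return worst
```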
REFERENCES
A. Torokhti, P. H. (2001). Optimal fixed rank transform of the second degree. IEEE Trans. Circuits & Syst. II, Analog & Digit. Signal Processing, 48(3):309–315.
Gimeno, V. (1987). Obtaining the EEG envelope in real time: a practical method based on homomorphic filtering. Neuropsychobiology, 18:110–112.
Golub, G. and Loan, C. V. (1996). Matrix Computations. Johns Hopkins Univ. Press.
H. Kim, G. H. Golub, H. P. (2005). Missing value estimation for DNA microarray gene expression data: local least squares imputation. Bioinformatics, 21:211–218.
Hua, Y. and Liu, W. Q. (1998). Generalized Karhunen-Loève transform. IEEE Signal Processing Letters, 5:141–143.
Hua, Y. and Nikpour, M. (1999). Computing the reduced rank Wiener filter by IQMD. IEEE Signal Processing Letters, 6(9):240–242.
Jolliffe, I. (1986). Principal Component Analysis. Springer Verlag.
S. Friedland, A. Niknejad, M. K. H. Z. (2006). Fast Monte-Carlo low rank approximations for matrices. Proc. IEEE Conference SoSE, pages 218–223.
T. Zhang, G. G. (2001). Rank-one approximation to high order tensors. SIAM J. Matrix Anal. Appl., 23.
Torokhti, A. and Howlett, P. (2007). Computational Methods for Modelling of Nonlinear Systems. Elsevier.