tion, which means that the internal camera parameters are assumed known. Therefore, in the sequel, any vector in the camera rf. will be identified by its normalized metric coordinates.
The set of sensors within the AHRS provides the parameters of the 3D rotation representing its orientation with respect to the local East-North-Up (ENU) rf. In aeronautical notation, 3D rotations are usually parameterized using the YXZ Euler-angle representation, which factorizes the attitude rotation into a sequence of three rotations about the coordinate axes:

R_ENU→AHRS(α, β, γ) = R_Y(α) R_X(β) R_Z(γ),

where (γ, β, α) are denoted Heading, Pitch and Roll, respectively (Fig. 1).
Figure 1: A plane tangent to the hypothetical spheroid surface contains the local East and North directions, which set these axes for the East-North-Up reference frame.
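The YXZ factorization above can be written down directly with the three elementary rotation matrices; a minimal numpy sketch (the function names are ours, and the elementary rotations follow the usual right-handed conventions):

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def enu_to_ahrs(roll, pitch, heading):
    # R_ENU->AHRS = R_Y(alpha) R_X(beta) R_Z(gamma),
    # with alpha = Roll, beta = Pitch, gamma = Heading.
    return rot_y(roll) @ rot_x(pitch) @ rot_z(heading)
```

Each factor is orthogonal with unit determinant, so their product is a proper rotation for any choice of the three angles.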
Let us assume we are given a camera rigidly attached to an AHRS, so that we can express the camera attitude as the composition of two rotations:

R_ENU→Cam = R_AHRS→Cam R_ENU→AHRS . (1)
Our objective is to estimate R_AHRS→Cam, which represents the unknown in the previous relation. Such a rotation could be easily estimated from a set of N pairs of corresponding vectors V = {(v_i^AHRS, v_i^Cam)}_{i=1..N}, where each pair represents the same direction in 3D space, expressed as a vector in the AHRS and in the camera rf. respectively. Once the set V is known, the rotation R_AHRS→Cam is simply given by the solution to the minimization problem:
R_AHRS→Cam = arg min_{R ∈ SO(3)} Σ_i ‖v_i^Cam − R v_i^AHRS‖² . (2)
This is a classical problem of rotation fitting, widely studied in the literature, which can be solved using several different methods. In this work we used the approach based on the Singular Value Decomposition (SVD), described in (Kanatani, 1994). What is needed is a procedure to build the set V, identifying those 3D directions which can be estimated reliably both in the camera and in the AHRS rf. The solution proposed in (Lobo and Dias, 2003; Alves et al., 2003; Lobo and Dias, 2007), and revisited in this paper, exploits the vertical direction, denoted for clarity by the vector UP^ENU = [0, 0, 1]^T.
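The SVD-based solution of Eq. (2) can be sketched as follows (a minimal numpy version in the spirit of (Kanatani, 1994); `fit_rotation` is our name, not from the paper):

```python
import numpy as np

def fit_rotation(v_ahrs, v_cam):
    """Least-squares R minimizing sum_i ||v_cam_i - R v_ahrs_i||^2.

    v_ahrs, v_cam: (N, 3) arrays of corresponding unit vectors.
    """
    # Correlation matrix between the two vector sets.
    M = v_cam.T @ v_ahrs                      # 3x3
    U, _, Vt = np.linalg.svd(M)
    # Force det(R) = +1 so the result is a proper rotation,
    # not a reflection.
    d = np.sign(np.linalg.det(U @ Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

Note that each pose of the calibration setup contributes a single direction pair, so at least two non-parallel directions must be collected for the rotation to be uniquely determined.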
In the original papers the authors used the projection of scene vertical lines, retrieved from a calibration checkerboard placed vertically. We notice that this approach may result in a poor estimation, due to the numerical instability of vanishing-point measurement and to the difficulty of positioning the calibration object with high accuracy. Instead, we observe that the vertical direction can be materialized by any object hanging from a thread and subject only to gravity. Therefore, as an alternative solution, we propose to use different geometrical entities inferred from the projective geometry of single-axis motions, which can be robustly estimated from the tracks left on the image plane during the revolving motion of the object.
The full procedure is performed in two main steps. In the first one we build the set V by iterating a basic processing unit, which is performed with a fixed acquisition system and a rotating calibration object. Each iteration is performed with a different orientation of the acquisition system and provides a pair of corresponding vectors (UP^Cam, UP^AHRS). The vector UP^Cam is computed from the different geometrical entities inferred under the assumption of single-axis motion (two different techniques will be presented), while UP^AHRS is trivially given by the third column of the AHRS orientation matrix R_ENU→AHRS, directly computed from the attitude angles Heading, Pitch and Roll. The second step is the SVD-based estimation of the rotation that best describes the transformation between the two vector sets.
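For the AHRS side of each pair, UP^AHRS is just the image of UP^ENU under the attitude matrix, i.e. its third column. A small self-contained check (the YXZ composition from the text is repeated here so the snippet runs on its own):

```python
import numpy as np

def enu_to_ahrs(roll, pitch, heading):
    # YXZ composition R_Y(roll) R_X(pitch) R_Z(heading), as in the text.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ch, sh = np.cos(heading), np.sin(heading)
    Ry = np.array([[cr, 0.0, sr], [0.0, 1.0, 0.0], [-sr, 0.0, cr]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    Rz = np.array([[ch, -sh, 0.0], [sh, ch, 0.0], [0.0, 0.0, 1.0]])
    return Ry @ Rx @ Rz

def up_ahrs(roll, pitch, heading):
    # UP^AHRS = R_ENU->AHRS @ UP^ENU with UP^ENU = [0, 0, 1]^T,
    # which is exactly the third column of the attitude matrix.
    return enu_to_ahrs(roll, pitch, heading)[:, 2]
```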
It is worth underlining that the pose of the camera-AHRS system is not changed during one iteration of the calibration procedure, so the orientation measurement provided by the AHRS is constant up to an additive noise, assumed to be approximately zero-mean white Gaussian. Under this assumption, the Maximum Likelihood estimate of R_ENU→AHRS is simply obtained by averaging the set of collected orientation matrices. The averaging operation over the group SO(3) is performed by means of the QR decomposition, as described in (Moakher, 2002).
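The averaging step can be sketched as follows: a minimal illustration, with our own function name, of projecting the arithmetic mean of the collected matrices back onto SO(3) via a sign-corrected QR factorization, in the spirit of (Moakher, 2002):

```python
import numpy as np

def average_rotations(Rs):
    """Project the arithmetic mean of rotation matrices onto SO(3).

    Rs: iterable of 3x3 rotation matrices (noisy measurements
    of one and the same rotation).
    """
    M = np.mean(Rs, axis=0)          # arithmetic mean, generally not in SO(3)
    Q, R = np.linalg.qr(M)
    # Make the factorization unique: force a positive diagonal in R.
    S = np.diag(np.sign(np.diag(R)))
    Q = Q @ S
    # Safeguard: ensure a proper rotation (det = +1), not a reflection.
    if np.linalg.det(Q) < 0:
        Q[:, 2] *= -1
    return Q
```

When the measurements are all close to a common rotation, the mean is close to that rotation and the orthogonalized factor Q recovers it.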
VISAPP 2009 - International Conference on Computer Vision Theory and Applications