A Concept for Fast Indoor Mapping and Positioning in Post-Disaster
Scenarios
Eduard Angelats and José A. Navarro
Centre Tecnològic de Telecomunicacions de Catalunya (CTTC/CERCA),
Av. Carl Friedrich Gauss, 7. Building B4, 08860 Castelldefels, Spain
Keywords: RPAS Photogrammetry, Rapid Indoor Mapping, Indoor Positioning, 3D Modelling, Emergency Response,
Orientation, RGB-D Camera.
Abstract: This work presents an early concept for a low-cost, lightweight, fast mapping and positioning system suitable
for civil protection and emergency teams working in post-disaster scenarios. The concept envisages
continuous, seamless tracking in both indoor and outdoor environments; this is made possible by the low
geometric requirements set by emergency teams: knowing the floor and room where they are located is
enough. The authors believe that current technologies (both hardware and software) are powerful enough to
build such a system; this opinion is backed by an assessment of several currently available sensors (IMUs,
RGB-D cameras, GNSS receivers), embedded processors, and mapping and positioning algorithms, as well
as by a feasibility study taking into account the various factors playing a role in the problem.
1 INTRODUCTION
The ultimate goal of the concept presented in this
paper is to increase the safety of civil protection and
emergency (CPE) teams working in post-disaster
(either natural or man-made) scenarios, such as
earthquakes or fires. The members of these teams are
constantly exposed to situations that may put their
lives at risk, even of death. Such risk is significantly
increased by the lack of knowledge of the
environment they work in, typically confined spaces
such as damaged buildings.
Increasing such knowledge should have a direct
impact on their safety. Here, "knowledge" refers to
information about the places (either outdoors or
indoors) where these teams work. Solutions do exist
to first map an area and then track people in both
indoor and outdoor environments. Outdoors, there
are nowadays many companies operating Remotely
Piloted Aircraft Systems (RPAS) that produce high
quality cartography very quickly. Note, for example,
that even the Copernicus Emergency Management
Service (CEMS) offers a fast mapping service for
emergencies using either Sentinel imagery or RPAS,
with a response time of about 48 hours (CEMS, 2015).
Locating someone outdoors is also routinely
performed: common solutions rely on Global
Navigation Satellite System (GNSS) receivers or a
hybridization of these with Inertial Measurement
Unit (IMU) sensors. There also exist solutions for
positioning in confined places, but these rely on
pre-deployed infrastructure (such as Wi-Fi
emitters/beacons or cameras, among others) that will
not be available when working in post-disaster
scenarios, since these may be located anywhere.
Solutions for indoor mapping, usually based on Light
Detection and Ranging (LiDAR) sensors carried
either by humans or even by terrestrial robots, do
exist (Kruijff-Korbayová, 2016), but they are not
always suitable for post-disaster scenarios: walls
may have collapsed, debris or holes may impede their
normal operation and, in the case of human carriers,
operating in such situations is, doubtlessly,
dangerous.
Thus, it is possible to say that, in practice, a
suitable solution to seamlessly track the position of a
CPE team when moving from outdoors to indoors, or
vice versa, does not exist. As stated above, solutions
in general do exist, but their dependence on some
infrastructure makes them unsuitable for emergency
and disaster management. On the other hand, it is the
authors' belief that, using current technologies and
algorithms, it is possible to develop a system able to
solve precisely this problem for the specific case of
CPE teams.
To focus the subsequent discussion, it is worth
noting that outdoor mapping and positioning is
nothing new nowadays. High levels of accuracy and
precision, far exceeding the requirements of CPE
teams, are routinely achieved. Therefore, even though
this paper presents a seamless solution for mapping
and positioning in both indoor and outdoor
environments, the outdoor case will not be discussed
here from the mapping standpoint. Regarding
positioning, the problem is solved too, but a concept
for a positioning device able to switch between
outdoor and indoor conditions will be presented.
This concept targets a solution to overcome the
difficulties stated above, relying on (1) a low-cost,
lightweight, unobtrusive, portable device (2) to be
carried as a payload by a special RPAS to map indoor
environments and (3) to be carried afterwards by CPE
members when inspecting the buildings. Note the
double role played by the aforementioned device:
mapper and tracker.
The sensors used to build such a device would be Red,
Green, Blue and Depth (RGB-D) cameras, IMU
sensors and GNSS receivers. Data fusion algorithms
would be the heart of the system: they would
combine raw observations coming from the IMU
with the visual odometry solution estimated from
RGB-D camera imagery to provide indoor
positioning. The addition of a GNSS receiver
immediately enables the device for outdoor
environments. All these components (both hardware
and software) would be mounted on a light, battery
powered System-On-Chip (SOC) computer.
Depending on the purpose (mapping or positioning),
a different algorithm would be used.
Concerning the RPAS, not all of them would be
appropriate for the task of mapping the interior of
damaged buildings. Fixed-wing ones should be
immediately discarded for obvious reasons. But even
multi-copters may be severely damaged in case of a
crash against a wall. There exist, however, RPAS
designed specifically to resist blows and crashes.
These have been successfully used for other purposes
where flying in confined spaces was a must; see, for
instance, (Flyability, 2017).
This paper will try to show that it is already
possible to build a low-cost, fast 3D mapping and
positioning system targeted at the specific needs of
CPE teams using technology and algorithms already
available. One of the factors making this possible is
the low geometric requirements (accuracy and
precision) set by the regular operation of CPE
personnel; basically, CPE members want to be sure
about the floor and room they are in. This translates
to accuracies around 1-2 metres and precisions
between 30-50 cm. To this end, sensors and
algorithms will be presented and assessed, together
with the drawbacks derived from the specific work
conditions usually present in the target scenarios,
such as the suboptimal lighting or presence of dust
usually found indoors in many kinds of natural or
man-made disasters. The architecture of the
mapping and positioning system will also be
discussed.
2 THE STATE OF THE ART
2.1 Latest Trends in RPAS Mapping
According to (Gomez and Purdie, 2016; Colomina
and Molina, 2014; Nex and Remondino, 2014), the
number of publications referring to RPAS as a
mapping tool has increased exponentially, reflecting
the rapid spread of this technology. Such a tendency
has played an important role in the significant price
reduction undergone by this equipment. Additionally,
(Colomina and Molina, 2014) state that the
improvement of automation software, together with
the falling prices of processors and of positioning
and remote sensing sensors, is responsible for the
broad use of RPAS in the mapping arena. RPAS have
become a suitable tool for mapping because of their
ability to carry a variety of mapping and positioning
payloads, such as cameras, LiDAR, IMUs or GNSS
receivers (Giordan, 2017), so they may be used to
support systems targeted at emergency management
or hazard assessment. This may be done at local or
even regional scales (Giordan, 2017), depending on
the kind of RPAS used, that is, fixed-wing or
multi-copter ones (Gomez and Purdie, 2016;
Boccardo, 2015). Last, but not least, one of the
advantages of using an RPAS as a mapping platform
is that it may be deployed very quickly. This means
that RPAS are very suitable for mapping land
features and, especially, their evolution over time or
after sudden, violent transformations such as flooding
or volcanic eruptions. A non-exhaustive review of
RPAS mapping in the context of natural disasters
may be found in (Gomez and Purdie, 2016).
Mapping emergency scenarios with the help of
RPAS is usual when producing outdoor cartography.
This technique is appreciated and valued by CPE and
Search & Rescue (SAR) teams (Boccardo, 2015;
Kruijff-Korbayová, 2016; Dominici, 2017;
Scaramuzza, 2014). On the contrary, RPAS are
almost unknown when working in indoor
environments. Some research projects are the
exception to the previous statement: (Kruijff, 2012;
Kruijff-Korbayová, 2016), in the context of the
NIFTi and TRADR projects, describe the use of aerial
RGB imagery to assess the damage suffered by
buildings during the earthquakes that took place in
Emilia-Romagna (2012) and Umbria (2016). 3D
models were built in post-processing mode to perform
the task. Leaving the area of emergency management,
other mapping research projects exist. One of these is
the work of (Mur-Artal and Tardós, 2017), where a
variant of the Simultaneous Localization and
Mapping (SLAM) technique, called ORB-SLAM, is
presented.
With regard to algorithms and tools, a robust,
widely used technique to produce 3D models using
imagery obtained from Commercial Off-The-Shelf
(COTS) cameras mounted on RPAS is the so-called
Structure-from-Motion (SfM) approach (Nex and
Remondino, 2014). One of the advantages of the
SfM approach is its lower cost, especially when
compared to that incurred when using an RPAS and
a LiDAR sensor. Another advantage is the
availability of a variety of highly automated tools
such as MicMac (Pinte, 2017), Agisoft (Agisoft,
2017) or Pix4D (Pix4D, 2017). (Remondino, 2017)
shows that, using modern GPU-enabled processors,
the time to collect and process data is noticeably
reduced, leading to processing times in the range
of a few hours, which fully aligns with the goal of
quickly producing quality cartography for
emergency management.
LiDAR measurements may be combined with the
positions of the trajectory to derive, in real time,
either 3D models or Digital Surface Models (DSM).
The geometric quality of the overlapping point cloud
strips is directly influenced by the errors that may be
present in the positions of the trajectory. To improve
the quality of both the absolute and relative
orientations and, consequently, that of the point
cloud registration, a post-processing step is proposed
in (Glira, 2016).
2.2 The State of the Art in Positioning
and Mapping
In spite of being widely used lately by most
applications needing precise and robust
positioning, the well-known GNSS have some
drawbacks, such as the need for favourable
environmental conditions, for instance clear lines of
sight; confined spaces or deep canyons (either natural
or urban) are typical environments where GNSS
receivers are not the best technology for achieving
precise positioning results. As already mentioned in
section 1, in the specific case of indoor environments
this limitation is usually overcome (Dardari, 2015) by
means of the ad-hoc deployment of different kinds of
emitters (such as Wi-Fi, ultra-wideband or even
visual beacons), complemented by a suitable receiver
which, aided by the appropriate algorithms, is able to
estimate a solution. The emitters play the role of
landmarks, also known as anchor nodes, since they
have been deployed at known positions. This is not,
however, the only approach used indoors:
(Leutenegger, 2013; Veth, 2011) describe the
combined use of IMU data and other sensors (such as
monocular, stereo or RGB-D cameras).
When using imaging sensors, it is possible to
compute their orientation parameters by measuring
tie-points in consecutive, conveniently overlapping
images. The extraction, description and matching
of these tie-points have been the subject of quite a
number of research works proposing several
algorithms that perform these tasks robustly.
Some of these (Lowe, 1999;
Leutenegger, 2011) are, ideally, invariant to changes
in illumination conditions, orientation and scale.
RANdom SAmple Consensus (RANSAC)
procedures are usually combined with these
algorithms in order to detect and remove outliers; this
is done (Hartley and Schaffalitzky, 2004; Nister,
2003) in the course of estimating position and
attitude from image observations only. (Veth, 2011;
Taylor, 2011) combine, instead, inertial data and
derived trajectories to predict the appearance of
already detected features in new images.
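For illustration purposes only, the following minimal sketch (Python with OpenCV, an implementation choice not prescribed by this concept) goes through the steps just described: tie-point extraction and matching between two overlapping frames, followed by RANSAC-based outlier removal while the relative orientation is estimated. ORB is used as a freely available stand-in for detectors such as SIFT or BRISK, and the calibration matrix K is assumed to be known.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the relative orientation between two overlapping frames
    from tie-points, rejecting outlier matches with RANSAC."""
    # Extract and describe tie-point candidates (ORB as a stand-in).
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching, suitable for binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC over the essential matrix (five-point solver, cf. Nister,
    # 2003) detects and removes outliers among the matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)

    # Relative rotation and (scale-free) translation from the inliers.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t, inliers
```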
Once outliers are removed, it is possible to rebuild
the trajectory by concatenating the relative positions
and orientations estimated from the inliers of
overlapping images. (Hartley, 2000;
Scaramuzza, 2011; Forster, 2017) review several
methods to estimate the navigation states using
images, with or without extra object observations. The
robotics and computer vision communities refer to
these approaches as SfM or visual odometry; they
are called SLAM when drift is reduced by means
of reference maps or when loop closures are detected.
Another line of thought (Taylor, 2011) presents two
strategies, both using an Unscented Kalman Filter and
an IMU as the main positioning sensor, whose drift is
controlled by means of visual information. The first
approach uses image coordinates to set geometric
constraints, while the second estimates the navigation
states at the same time as the object coordinates.
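Again for illustration, the sketch below rebuilds a trajectory by concatenating such relative motions; relative_motions is a placeholder for any source of frame-to-frame rotation/translation pairs, for instance the output of the function sketched above.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and a translation vector into a 4x4 pose."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

# Dead-reckoning by concatenation: every new absolute pose is the previous
# one composed with the latest relative motion. Errors accumulate as drift,
# which is what SLAM bounds with reference maps or loop closures.
relative_motions = []                    # (R, t) pairs, e.g. from relative_pose()
T_abs = np.eye(4)                        # pose of the first frame
trajectory = [T_abs]
for R_rel, t_rel in relative_motions:
    T_abs = T_abs @ to_homogeneous(R_rel, t_rel)
    trajectory.append(T_abs)
```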
3 THE CONCEPT
3.1 User Requirements and Scenarios
The concept for the system presented in this paper is
oriented at helping CPE teams to manage and
assess post-disaster situations, especially those
cases where the damage inflicted on buildings is
severe enough to put at risk the lives of the personnel
having to intervene. Not all disaster scenarios are
targets for this concept, but at least the following are:
volcanic eruptions, landslides, earthquakes, fires,
flooding and severe storms. In all cases, the system
would be used once the emergency is over.
The proposed system (concept) would not be an
independent one; it would complement the
technologies and procedures that already exist, thus
improving, or trying to improve, the management of
emergencies, including the plans devised to handle
them. From the temporal standpoint, such a system
would be used during the intervention phase, that is,
once the emergency itself (fire, earthquake or any
other of the aforementioned situations) has finished.
The idea is to increase the safety of the members of
the CPE teams, as well as to improve the resources
available to the coordination personnel, thanks to an
assessment of the damage suffered by the buildings
where these people have to work (for instance, being
able to tell where the risk of collapsing walls is
greater). This is an indirect way of saying that the
goal is to reduce the risk these teams are exposed to,
a risk that sometimes takes its toll on human lives.
An extra benefit obtained from such a system would
be the ability to help in building rehabilitation
(thanks to the 3D models obtained during the
intervention phase).
The system should be able to map the disaster area
in only a few hours, both outdoors and indoors.
Quality outdoor RPAS mapping is offered as a
regular product by many providers, among them the
Copernicus Emergency Management Service;
therefore, it will not be discussed here.
But when talking specifically about indoor
environments and emergency management, neither
the positioning nor the mapping problems are well
solved. In fact, many problems may affect the ability
to create indoor maps in these environments. The
authors, aware of these difficulties, restrict the scope
of applicability of their work to those cases in which
the lighting and texture conditions are suitable for the
kind of sensors integrating the system, namely
RGB-D cameras.
Thus, when possible, the system would produce
3D models of the damaged buildings, so that CPE
teams know what to expect when entering them. 2D
floorplans are another possible output. An
interesting discussion arising here is the quality level
needed to produce these models; unlike other
applications that rely on strict accuracy and precision
requirements, emergency teams typically just need to
be aware of their surroundings (that is, whether a
wall that could collapse is nearby, whether a
staircase leading up or down is available or whether
the floor they should step on still exists, for instance)
and of the room and floor they are in. Such relaxed
requirements make possible the concept described in
this work: to map and afterwards track personnel
indoors. Nonetheless, the authors consider that
minimum accuracy and precision requirements
should be stated in order to assess the suitability of
the system: 3 to 5 decimetres for precision and 1 to 2
metres for accuracy. Such requirements, moreover,
allow for a representation of the buildings close
enough to reality to be useful.
The availability of the 3D models and 2D
floorplans opens the way to tracking the CPE teams
and pinpointing their positions once they enter the
buildings. This information (position), combined
with the now available knowledge about the
environment (holes, collapsed walls, availability of
emergency exits, etc.), produces invaluable
information for the members of both the intervening
and coordination teams. Again, some precision
requirements should be set for the indoor
positioning: below 10 metres. It is not possible,
however, to assess accuracy, since no reliable
reference to compare against exists. Positions should
be updated at least once per second (1 Hz) to
effectively track the teams.
3.2 The Hardware
The system devised by the authors to implement both
the mapping and tracking devices (or mapping
payload and portable positioning device,
respectively) will sport the same hardware
components. A battery-powered SOC board will be
used to run the required algorithms (either positioning
or mapping) and to integrate the required sensors.
These will be an RGB-D camera plus an integrated
GNSS / IMU module. This last module could be the
u-blox NEO-M8U (ublox, 2017) or a similar one. It
has been chosen because of its ability to work in two
different modes: when a GNSS signal is available, it
delivers positions at a frequency of 2 Hz (twice the
required frequency). This would be the typical
outdoor scenario, where GNSS reception is,
normally, not a problem. But when moving indoors,
this module switches to its untethered dead reckoning
mode and delivers linear accelerations and angular
velocities
instead of position information, that is, the IMU data,
at a frequency of 20 Hz. This behaviour matches the
operational mode that should be implemented in the
mapping and tracking devices.
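The sketch below illustrates, at a purely schematic level, how this dual-mode behaviour could be handled in software; the message types and estimator methods are hypothetical placeholders, not the actual protocol of the module, whose details are out of the scope of this paper.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GnssFix:
    """Position fix, delivered at ~2 Hz while satellites are visible."""
    t: float
    lat: float
    lon: float
    height: float

@dataclass
class ImuSample:
    """Raw inertial data, delivered at ~20 Hz in dead-reckoning mode."""
    t: float
    accel: Tuple[float, float, float]   # linear accelerations (m/s^2)
    gyro: Tuple[float, float, float]    # angular velocities (rad/s)

def route(msg, estimator):
    """Feed the fusion filter according to the mode the module is in:
    position fixes drive the solution outdoors, while raw IMU data feed
    the prediction step of the indoor filter (see section 3.4)."""
    if isinstance(msg, GnssFix):
        estimator.update_with_gnss(msg)    # outdoor mode (hypothetical API)
    elif isinstance(msg, ImuSample):
        estimator.predict_with_imu(msg)    # indoor dead-reckoning mode
```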
With regard to cameras, it is worth noting that the
evolution of this technology has produced active
RGB-D cameras able to take depth measurements in
extreme lighting conditions, that is, 0 lux, or no light
at all. In such cases, the operating distance is
drastically reduced, so, from the operational
standpoint, data should be captured at much closer
ranges. It is true that the accuracy and precision are
also significantly worse; the key point here, however,
is that using one of these active cameras instead of a
passive one makes it possible to operate the system
when the illumination and texture requirements set in
section 3.1 are not met. Obviously, the complete
absence of light (the 0-lux condition) may be too
extreme for a pilot to be able to fly the RPAS; the
idea, however, is that even when the light is very poor,
the system may be operated and results that may help
to save lives may be obtained. Two cameras already
available on the market would be good candidates for
this system: the Intel RealSense (Keselman et
al., 2017) and the Microsoft Kinect v2 (Lachat et al.,
2015). Unfortunately, the Kinect must be discarded
because of its excessive power consumption
(section 4 presents a feasibility analysis, and power
consumption is one of the factors to take into
account).
Finally, an unobtrusive SOC (also with low power
requirements) would be desirable to complete (and
integrate) the set of components making up the
system. Its task is to provide the necessary computing
resources (positioning version) and storage capacity
(mapping version, where sensor data must be saved).
The word "unobtrusive" here means lightweight and
with a small footprint, since the positioning device
must not be a nuisance to its wearers. Obviously, the
low consumption requirements lead to longer
operational times, thus reducing the need to replace
batteries so often. From the computing power
standpoint, a powerful GPU is a must, due to the
image processing computations that must take place
in real time. A possible candidate for the SOC is the
NVIDIA Jetson TX2 (Franklin, 2017).
3.3 Operating the System
From the operational standpoint, using the system in
indoor environments implies going through three
main steps, namely data collection, map generation
and actual intervention.
First of all, data must be collected using the
mapping payload on board the RPAS. As already
stated in section 3.2, the integrated GNSS + IMU
module will provide GNSS-based data while
operating outdoors, so the RPAS position and attitude
will be computed relying on this information. The
camera plays no role at this stage (since the goal of
this procedure is to produce indoor cartography). It is
important to collect this information, since it will be
used as (presumably good) initial approximations of
the attitude and position of the RPAS when it enters
the building and the GNSS signal disappears. As soon
as this happens, the system will activate the RGB-D
sensor, and the GNSS + IMU module will start
delivering linear accelerations and angular velocities.
It should not be forgotten that the imagery and IMU
data stored to later produce the 3D models (and 2D
floorplans) must be correctly time-tagged. When
leaving the building, the GNSS-based operating
mode is reactivated, so, again, more precise position
and attitude data will be available, helping to improve
the final quality of the mapping process. The RPAS
itself deserves a few words: usual RPAS are not
suited to operating in confined spaces because of the
high risk of crashing against walls or any debris
that may be present inside the building. Such crashes
or blows may damage, and therefore disable, the
RPAS itself. To overcome such difficulties,
multicopters adapted to fly in such circumstances
exist; Flyability's ELIOS (Flyability, 2017) is one
example of an RPAS used successfully in projects
requiring indoor flight.
Once the drone exits the building, it is necessary
to download the data and compute the 3D models and
2D floorplans. The usual software workflow and tools
used for this task are described in section 3.4. The
time needed to obtain these products will directly
depend on the dimensions of the building(s) to
process, although current software is pretty fast and
able to deliver results in short times. However, it must
be noted that RGB-D cameras may deliver a coarse
point cloud immediately. Since it has not been
processed, the accuracy and precision of such data
will never be as good as those of the products obtained
after a post-processing step; nonetheless, in very
critical situations the availability of a first point cloud
in so short a time may prove vital for the CPE teams,
making it possible for them to enter a building with
first-hand knowledge about it. When there are lives at
stake, coarse point clouds may make the difference.
Then, the CPE teams, carrying the portable
positioning device, will proceed to enter the already
mapped buildings. Their position will be computed in
real time by means of visual odometry plus IMU data.
Ideally, this position should be sent to the control
team outside the building so that they can track the
position of the personnel. This, obviously, implies the
use of some kind of communications link that will not
be described in this paper. Note that, depending on the
conditions indoors (dust, lack of lighting, smoke,
etc.), the portable device might be unable to compute
positions. Even in this case, the 3D models and 2D
floor maps in the hands of the control team are
invaluable tools: they may be used to guide the teams
inside the building using classic communication
channels, for example, telling them where to find
holes, collapsed walls, debris or any other obstacles
that might interfere with their task.
3.4 About the Software
Although the portable positioning device and the
mapping payload share the same SOC and sensors,
the software used to manage them will differ
depending on the purpose.
When used as a portable positioning device, two
situations may be told apart: indoor and outdoor
positioning. The latter is well known and usually
solved by means of GNSS receivers; in the concept
presented here, IMU data will be used to enhance the
position and attitude estimates. The well-known
extended Kalman filter, or a sequential least squares
algorithm, is the engine to do so (Parés and
Colomina, 2015). Data will be referred to a global
reference frame. But as soon as the emergency team
enters a building, thus being indoors, the device will
change its mode of operation and use a different
algorithm. Since the GNSS signal will no longer be
available, the RGB-D camera plus the raw IMU data
will be used instead. Again, an extended Kalman
filter will be used. The IMU will play the main role
in the prediction step, producing orientation and
position data in a global reference frame. During the
filter (update) step, on the contrary, the orientation
and position data derived from the processing of the
RGB-D images (visual odometry) will be responsible
for updating the predicted ones, in order to control the
drift introduced by the inertial sensors. In this step,
data are referred to a local reference frame, but it is
possible to compute their equivalent in the global one
using the lever arm and boresight matrices. Finally, a
common temporal reference frame is necessary to
correctly deal with data coming from these two
sources; the internal clock of the SOC will be enough
for the purposes of this concept.
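A much-simplified sketch of this predict/update cycle follows. It keeps only a position-velocity state and treats the visual odometry position, assumed to be already transformed to the navigation frame via the lever arm and boresight matrices, as the measurement; attitude estimation and sensor biases are deliberately left out, and the noise values are placeholder assumptions.

```python
import numpy as np

class IndoorEKF:
    """Toy extended Kalman filter: IMU accelerations drive the prediction,
    visual odometry positions bound the inertial drift in the update.
    State x = [px, py, pz, vx, vy, vz]; attitude and biases omitted."""

    def __init__(self):
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.Q = np.eye(6) * 1e-3      # process noise (tuning assumption)
        self.R = np.eye(3) * 0.2**2    # VO noise, ~2 dm (assumption)

    def predict(self, accel, dt):
        """Prediction step: integrate IMU linear accelerations, assumed
        already rotated to the navigation frame and gravity-compensated."""
        F = np.eye(6)
        F[:3, 3:] = np.eye(3) * dt
        self.x[:3] += self.x[3:] * dt + 0.5 * accel * dt**2
        self.x[3:] += accel * dt
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z_vo):
        """Filter step: the position delivered by visual odometry corrects
        the prediction, controlling the inertial drift."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        y = z_vo - H @ self.x                      # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P
```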
When the device is used as a mapping payload,
it is targeted at the production of the 3D models and
2D floorplans. Once the data (GNSS, outdoors; IMU,
both indoors and outdoors; and RGB-D images,
indoors) have been downloaded, three steps are
necessary.
First, it is necessary to estimate the (coarse)
initial approximations of the positions and
attitude values related to the imagery. The algorithm
is the same one used by the device when working as a
portable positioning device (see above). Secondly, a
block adjustment will take care of refining these
initial approximations so that much better values are
obtained. Depending on the community, this step is
known as SfM or "integrated sensor orientation".
This task may be done using any of the software
packages available nowadays, such as Pix4D (Pix4D,
2017), Agisoft (Agisoft, 2017) or MicMac (Pinte,
2017). Finally, the depth data obtained from the
RGB-D camera, together with the just-refined
position and orientation data, will be used to produce
a dense point cloud. A popular software library able
to do this is PCL (PCL, 2017).
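The last step may be illustrated with a minimal back-projection sketch. PCL itself is a C++ library, so plain NumPy stands in here; the intrinsic parameters (fx, fy, cx, cy) are assumed to come from the camera calibration, and the pose is the one refined by the block adjustment.

```python
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy, T_world_cam):
    """Back-project a depth image (metres) into 3D and move the points
    into the mapping frame using the refined camera pose (4x4 matrix)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx              # pinhole camera model
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_cam = pts_cam[z.reshape(-1) > 0]       # drop invalid (zero) depths
    pts_world = (T_world_cam @ pts_cam.T).T    # apply the refined pose
    return pts_world[:, :3]
```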
4 FEASIBILITY
Going safely from a concept to an actual, working
implementation implies a feasibility analysis.
Some of the factors that must be taken into account in
such an analysis have a direct impact on the hardware
used to implement the positioning / mapping device.
Power consumption. Some RGB-D cameras are
pretty power-hungry. Obviously, higher power
consumption implies shorter operating times and thus
the unsuitability of the camera for the purposes of this
work. This is the case of the Microsoft Kinect v2.
Extra hardware or software requirements. Some
cameras, again including the Kinect v2, require extra
hardware to work properly and to deliver the
performance needed to fulfil the pursued goals
(Chesa, 2017). Examples of such extra requirements
are GPUs, OpenGL or USB 3.0 ports.
SOC performance. Several authors have tested the
suggested SOC, the NVIDIA Jetson TX2, showing
that it is capable of delivering the necessary output
rate in terms of positions per second when using the
algorithms and techniques described in this paper
(Mur-Artal and Tardós, 2017; Forster, 2017).
Common temporal frame. The positioning /
mapping device collects data originating from
different sensors (the RGB-D camera and the GNSS
+ IMU module). The algorithms used to produce
positions or maps out of these observations need to
know the precise time when the observations were
generated, and such times must be referred to a
common temporal frame for all the sensors involved
in the process.
Systems having to meet high-quality geometric
constraints normally use dedicated hardware to
provide the aforementioned temporal reference
frame. In the case discussed here, the SOC's internal
clock is enough, especially (although not necessarily)
if the drifts and latencies existing in the system are
properly characterized.
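A minimal illustration of this idea follows; the per-sensor latency values are hypothetical placeholders that would be obtained from the characterization just mentioned.

```python
import time

# Per-sensor latencies (seconds) obtained from a one-off characterization
# of the system's drifts and delays; the values below are placeholders.
LATENCY = {"rgbd": 0.015, "imu": 0.002, "gnss": 0.050}

def tag(sensor, payload):
    """Stamp an observation with the SOC's monotonic clock, corrected for
    the sensor's characterized latency, so all streams share one timeline."""
    t_soc = time.monotonic()
    return {"t": t_soc - LATENCY[sensor], "sensor": sensor, "data": payload}
```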
Pre-heating time, calibration and RGB-D data
quality. An incorrect (or non-existent) geometric
calibration or too short pre-heating times may
produce systematic errors in the quality of the depth
measurements, with direct implications for that of
the derived point cloud (Chesa, 2017; Keselman,
2017; Lachat, 2015). It is worth noting that the
geometric requirements in the context of this
application are so relaxed that the aforementioned
problems are negligible in this case.
Environmental operating conditions. Visual
odometry needs some minimum operating conditions
to work, especially regarding illumination and
texture. The operating range, as well as the quality of
the depth measurements, directly depends on them.
The immediate consequence is that the quality of the
positioning / mapping solution, which relies on the
extraction of features from the RGB-D imagery, is
also affected. (Mur-Artal and Tardós, 2017; Forster,
2017) discuss how feasible it is to operate in moderate
illumination conditions. Note also that when using
active sensors (infrared emission) it is possible to
work in near 0-lux lighting conditions, although the
operating range is then shorter.
Mapping post-processing times. According to
(Remondino, 2017), the photogrammetric software
packages currently available are powerful enough to
deliver results within times short enough to meet the
requirements set by post-disaster, emergency
scenarios. Furthermore, the limited autonomy of
RPAS reinforces the previous statement: the amount
of data to process will be relatively small, so the
capabilities of the aforementioned software packages
and of current computers will suffice.
RGB-D cameras' extra cost. RGB-D cameras are
slightly more expensive than RGB-only ones but,
according to (Mur-Artal and Tardós, 2017; Fang,
2015), the extra features they incorporate are worth
it. RGB-D cameras are still able to produce robust
solutions in poor or changing illumination conditions.
Finally, the point cloud these cameras provide may be
improved (see the previous point) by means of
post-processing techniques, delivering usable 3D
models in very short times.
5 CONCLUSIONS
This work was motivated by the risk that CPE teams
take whenever they enter damaged buildings in post-
disaster scenarios. The main goal was to check
whether a fast indoor mapping / positioning system,
relying on no pre-deployed infrastructure and
delivering 3D models with a quality sufficient to
fulfil the needs of those teams, could be feasible
using currently available technologies and recent
advances in algorithms.
A thorough state of the art has been presented,
showing the most recent technology and algorithms;
a concept, relying heavily on that information, has
been detailed, explaining how such a system could be
built and exploited. A feasibility analysis highlighting
the most relevant issues has also been included.
This paper presents just a concept, not an
actual implementation of the system. Nonetheless, the
performance, features, algorithms and procedures
already explored and documented by many other
researchers have convinced the authors that such a
system is feasible nowadays, and that it is possible
using low-cost equipment, which would facilitate its
adoption by many civil protection agencies.
REFERENCES
Agisoft, 2017. Agisoft Photoscan. http://www.agisoft.com.
Accessed: 30 November 2017.
Boccardo, P., F. Chiabrando, F. Dutto, F. Giulio Tonolo,
and A. Lingua. 2015. UAV Deployment Exercise for
Mapping Purposes: Evaluation of Emergency Response
Applications. Sensors 15: 15717–15737.
CEMS, 2015. Copernicus Emergency Management Service
Mapping. Manual of Operational Procedures
Guidelines for EC Services, Service Providers and
Authorized Users. European Commission DG GROW,
DG ECHO, DG JRC. Version 1.1 February 2015.
Chesa, M., 2017. Obstacle avoidance for an autonomous
Rover. Bachelor degree thesis, Technical University of
Catalonia, 2017.
Colomina, I., and P. Molina. 2014. Unmanned aerial
systems for photogrammetry and remote sensing: A
review. ISPRS Journal of Photogrammetry and Remote
Sensing 92: 79–97.
Dardari, D., Closas, P., Djuric, P, 2015. Indoor Tracking:
Theory, Methods, and Technologies. IEEE
Transactions on Vehicular Technology, 64(4), 2015,
1263-1278.
Dominici, D., Alicandro, M., Massimi, V., 2017. UAV
photogrammetry in the post-earthquake scenario: case
studies in L'Aquila. Geomatics, Natural Hazards and
Risk Vol. 8, Iss. 1, 2017.
GISTAM 2018 - 4th International Conference on Geographical Information Systems Theory, Applications and Management
280
Fang, Z., Zhang, Y., 2015. Experimental Evaluation of
RGB-D Visual Odometry Methods. International
Journal of Advanced Robotic Systems, 12, 116, 2015.
Flyability, 2017. ELIOS - Inspect & explore indoor and
confined spaces. http://www.flyability.com/elios.
Accessed: 30 November 2017.
Franklin, D., 2017. NVIDIA Jetson TX2 delivers twice the
intelligence to the edge. https://devblogs.nvidia.com/
parallelforall/jetson-tx2-delivers-twice-intelligence-
edge. Accessed: 30 November 2017.
Forster, C., Zhang, Z., Gassner, M., Werlberger, M.,
Scaramuzza, D., 2017. SVO: Semi-Direct Visual
Odometry for Monocular and Multi-Camera Systems.
IEEE Transactions on Robotics, Vol. 33, Issue 2, pages
249-265, Apr. 2017.
Giordan, D., Manconi, A., Remondino, F. and Nex, F.C.,
2017. Use of unmanned aerial vehicles in monitoring
application and management of natural hazards.
Geomatics, Natural Hazards and Risk, 8(1), pp. 1-4,
2017.
Glira, P., Pfeifer, N., Mandlburger, G., 2016. Rigorous Strip
Adjustment of UAV-based Laserscanning Data
Including Time-Dependent Correction of Trajectory
Errors. Photogrammetric Engineering and Remote
Sensing, 82(12): 945–954.
Gomez, C., Purdie, H., 2016. UAV-based Photogrammetry
and Geocomputing for Hazards and Disaster Risk
Monitoring: A Review. Geoenvironmental Disasters,
2016, Volume 3, Number 1, Page 1.
Hartley, R. and Zisserman, A., 2000. Multiple view
geometry in computer vision. Cambridge University
Press, 2nd edition.
Hartley, R. and Schaffalitzky, F., 2004. Minimization in
geometric reconstruction problems. In: Proceedings of
Computer Vision and Pattern Recognition conference,
2004.
Keselman, L., Iselin Woodfill, J., Grunnet-Jepsen, A.,
Bhowmik, A., 2017. Intel RealSense stereoscopic
depth cameras. In: Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition Workshops
(CVPRW), 21-26 July 2017, Honolulu, HI (USA).
Kruijff, G.-J., Tretyakov, V., Linder, T., Pirri, F, Gianni,
M., Papadakis, P., Pizzoli, M., Sinha, A., Pianese, E.,
Corrao, S., Priori, F., Febrini, S. and Angeletti, S., 2012.
Rescue robots at earthquake-hit Mirandola, Italy: A
field report. In IEEE Intl. Symp. on Safety, Security, and
Rescue Robotics (SSRR), 2012.
Kruijff-Korbayová, I., 2016. The Use of Robots for Disaster
Response. Presentation at the CTIF Delegates Assembly
& Symposium, Sep 8-9 2016, Helsinki.
Lachat, E., Macher, H., Landes, T., Grussenmeyer, P.,
2015. Assessment and calibration of a RGB-D Camera
(Kinect v2 sensor) towards a potential use for close-
range 3D modeling. Remote Sensing 7(10):13070-
13097.
Leutenegger, S., Chli, M., Siegwart, R., 2011. BRISK:
Binary Robust Invariant Scalable Keypoints. In:
Proceedings of the IEEE International Conference on
Computer Vision (ICCV), 2011.
Leutenegger, S., Furgale, P.T., Rabaud, V., Chli, M.,
Konolige, K. and Siegwart, R., 2013. Keyframe-Based
Visual-Inertial SLAM using Nonlinear Optimization,
In: Proceedings of Robotics: Science and Systems,
2013.
Lowe, D., 1999. Object recognition from local scale-
invariant features. In: Proceedings of the International
Conference on Computer Vision (ICCV), 1999.
Mur-Artal R., and Tardós, J.D, 2017. ORB-SLAM2: an
Open-Source SLAM System for Monocular, Stereo and
RGB-D Cameras. IEEE Transactions on Robotics, vol.
33, no. 5, pp. 1255-1262, June 2017.
Nex, F.C., and Remondino, F., 2014. UAV for 3D mapping
applications: a review. Applied Geomatics, 6(1),
pp. 1-15, 2014.
Nister, D., 2003. An efficient solution to the five-point
relative pose problem. In CVPR03.
Parés, M.E., Colomina, I., 2015. On software Architecture
Concepts for a Unified, Generic and Extensible
Trajectory Determination System. In: Proceedings of
the ION GNSS+, 08-12 September 2015, Tampa,
Florida (USA).
PCL., 2017. Point Cloud Library (PCL).
http://pointclouds.org/. Accessed: 7 February 2018.
Pinte, A., 2017. Micmac, un logiciel pour la mise en
correspondance automatique dans le contexte
géographique. http://logiciels.ign.fr/?Micmac.
Accessed: 30 November 2017.
Pix4D, 2017. Pix4D - Drone photogrammetric software for
desktop + cloud + mobile. https://pix4d.com.
Accessed: 30 November 2017.
Remondino, F., Nocerino, E., Toschi, I., and Menna, F,
2017. A Critical Review of Automated
Photogrammetric processing of large datasets. In: Int.
Arch. Photogramm. Remote Sens. Spatial Inf. Sci.,
XLII-2/W5, 591-599, 2017.
Scaramuzza, D. and Fraundorfer, F., 2011. Visual
Odometry [Tutorial]. IEEE Robotics & Automation
Magazine, 18(4): 80–92.
Scaramuzza, D. et al., 2014. Vision-Controlled Micro
Flying Robots: From System Design to Autonomous
Navigation and Mapping in GPS-Denied
Environments. IEEE Robotics & Automation
Magazine, vol. 21, no. 3, pp. 26-40, Sept. 2014.
Taylor, C.N., Veth, M.J., Raquet, J.F., Miller, M.M., 2011.
Comparison of Two Image and Inertial Sensor Fusion
Techniques for Navigation in Unmapped
Environments. IEEE Transactions on Aerospace and
Electronic Systems, vol. 47, no. 2, pp. 946–958.
ublox, 2017. NEO-M8U module. https://www.u-blox.com/
en/product/neo-m8u-module. Accessed: 30 November
2017.
Veth, M.J., 2011. Navigation Using Images, A Survey of
Techniques. Journal of Navigation 58(2), pp. 127–140.