Real-time On-board Detection of Components and Faults in an
Autonomous UAV System for Power Line Inspection
Naeem Ayoub (https://orcid.org/0000-0002-7387-4441) and Peter Schneider-Kamp (https://orcid.org/0000-0003-4000-5570)
Department of Mathematics and Computer Science, University of Southern Denmark, Odense, Denmark
Keywords: Power Line Inspection, Fault Detection, Autonomous Drone Systems, Deep Learning.
Abstract: The inspection of power line components is conducted periodically by specialized companies to identify possible faults and to assess the state of this critical infrastructure. UAV systems represent an emerging technological alternative in this field, with the promise of safer, more efficient, and less costly inspections. In the Drones4Energy project, we work toward a vision-based beyond-visual-line-of-sight (BVLOS) power line inspection architecture for automatically and autonomously detecting components and faults in real-time on board the UAV. In this paper, we present the first step towards the vision system of this architecture. We train Deep Neural Networks (DNNs) and tune them for reliability under different conditions such as variations in camera, lighting, angle, and background. For the purpose of real-time on-board implementation of the architecture, experimental evaluations and comparisons are performed on different hardware, namely the Raspberry Pi 4, Nvidia Jetson Nano, Nvidia Jetson TX2, and Nvidia Jetson AGX Xavier. The use of such Single-Board Devices (SBDs) is an integral part of the design of the proposed power line inspection architecture. Our experimental results demonstrate that the proposed approach can be effective and efficient for fully-automatic real-time on-board visual power line inspection.
1 INTRODUCTION
Visual inspections of power lines and the various components of power pylons are essential to ensure the uninterrupted functioning of the power grid. Most companies use manual inspection methods involving humans, helicopters, or manually-piloted UAVs. These types of inspection are rather expensive and slow, with some of them even being outright dangerous. To overcome these issues, research projects at some companies as well as in the academic world are focusing on the development of Artificial Intelligence (AI)-based autonomous power line inspection and fault detection methods. In recent years, a lot of focus has been on inspection architectures that partially automate the visual inspections by utilizing drones or climbing robots (Jenssen et al., 2018). The increase in the computational capabilities of SBDs and the extended capabilities of UAV technologies have led many researchers to focus on developing UAV-based autonomous object detection systems. There is a stream of research on vision-based applications for UAVs (Al-Kaff et al., 2018), which provides the
potential to become a game changer in the inspection
of power lines.
Based on recent advances in drone technology, in this paper, we address a number of challenges regarding traditional power line inspection methods and develop autonomous algorithms by utilizing Deep Learning (DL) technology. We also create a medium-sized dataset of different classes based on normal and faulty components (Figure 1) for training the semi-supervised classification models. During the development of the proposed architecture, we identified different factors that influence the inspection process: training data, the relatively small size of components and faults, unidentified faults, and cluttered backgrounds in different lighting conditions.
2 BACKGROUND
In this section, we discuss different power line inspection methods and DL-based object detection methods. These provide the real-world as well as the academic background for our proposed solution.
Figure 1: Sample images of normal components and the related most common fault classes used to train the DNN models (from top left to bottom right): Insulator type 1, Insulator type 2, Insulator type 3, damaged insulator, normal Vibration Dampener (VD), damaged VD, missing VD, normal Nutbolt, rusted Nutbolt, normal Toppads (Jenssen et al., 2018), missing Toppads (Jenssen et al., 2018), Antenna, broken Wire, and Bird Nest, respectively.
2.1 Power Line Inspection Methods
Power lines are traditionally inspected at regular intervals by different inspection methods such as human-centered power line inspection, semi-automated power line inspection, and UAV-based power line inspection (Jenssen et al., 2018).
2.1.1 Human-centered Power Line Inspections
Human-centered power line inspection methods rely on human involvement in the form of inspectors. In these methods, the inspection of power lines is conducted by foot patrols or by helicopter-assisted surveys. When using the foot patrol method, a team typically consists of two or more inspectors travelling along the power lines on foot, inspecting them with the help of binoculars or infrared cameras. Where a closer look is required, the power line is shut down, and one of the inspectors, secured by a rope, climbs the power tower and moves along the power line.
When using the helicopter-assisted inspection method, a team of inspectors travels by helicopter along the power lines to take pictures of the different components of the pylons. These images are then sent to inspectors for offline inspection. The inspectors identify faults such as rusty components, bird nests, broken wires, faulty and broken insulators, missing toppads, and other misformed or missing components.
These two inspection methods are human-centered and are still widely used by inspection companies in spite of a number of disadvantages such as high cost, extensive time consumption, lack of safety in harsh terrain, and even impossibility of application in less than optimal weather conditions. The accuracy is rather low due to many components being hard to reach by humans or helicopters. The biggest disadvantage of helicopter-based inspection, in addition to its very high costs, is that flying close to power lines poses a life-threatening safety risk, as contact with the cables during a survey usually has fatal consequences (Takaya et al., 2019).
2.1.2 Semi-automated Power Line Inspections
Human-centered inspections are performed less frequently because of their high monetary and time costs. As an alternative to these manual inspections, semi-automated inspection methods are slowly being adopted by a few first-moving inspection companies. These methods provide a moderate boost to the speed of the inspection process, improve the accuracy, and moderately reduce the inspection costs. The most common techniques are semi-automated helicopter-assisted and climbing-robot inspections. Automated helicopter-assisted inspections differ from manual helicopter-assisted inspections because, in this method, vision-based object detection techniques are used after the collection of the inspection images and videos. For example, power mast detection can be applied for guiding cameras to automatically film the conductors, pylons, power components, and objects around the pylons and under the power lines (Bühringer et al., 2010). Although this technique has reduced the dependency on human
inspectors for visual observation and sped up the inspection process, inspection costs are still high, and safety issues remain challenging. To overcome these challenges, climbing-robot inspection techniques have been adopted by different companies. In this method, climbing robots carrying many sensors and cameras travel on the power lines for inspection. Zhou et al. (Zhou et al., 2016) indicated that climbing robots can lead to new challenges during inspection, such as damaging the power wires while traveling, difficulties while crossing obstacles on and around the wires, and comparatively large time costs compared to the automated helicopter-assisted inspection method.
2.1.3 UAV-based Power Line Inspections
UAV-based power line inspection is the most promising inspection method. Developing the technology to be more robust is one of the main challenges for many researchers and companies. In this method, UAVs equipped with multiple cameras and sensors travel along the power lines to inspect them and detect faults. This technology has made some progress during the last few years because it overcomes most of the inspection challenges, such as the cost of inspections as well as inspection safety and speed.
2.2 DL Models for Object Detection
During the last two decades, many researchers have proposed different ML (Machine Learning)- and DL (Deep Learning)-based computer vision algorithms using supervised, semi-supervised, and unsupervised methods. DL-based object detection techniques are very popular these days. These advances in vision-based techniques are encouraging the power industry to consider replacing the traditional inspection methods and to develop autonomous power line monitoring systems based on the use of UAVs. The main reason for building autonomous inspection systems is their ability to detect a wide range of components and faults in a single inspection (Zhou et al., 2019b). Reviews of different data sources for vision-based inspection and existing vision-based inspection systems can be found in (Contreras-Cruz et al., 2019).
2.3 The Challenges Ahead
In this era of AI (Artificial Intelligence), while researchers are making much progress in developing autonomous systems, real-time autonomous power line inspection using UAVs is still a big challenge. Many researchers are developing Deep Learning (DL)-based computer vision algorithms to automate the inspection process (Agnisarman et al., 2019). Some researchers have applied these vision-based methods to power line inspections (Zhou et al., 2019a; Azevedo et al., 2019; Nguyen, 2019). However, the current UAV-based power line inspection techniques still face many unsolved challenges (Jenssen et al., 2019; Gao et al., 2019). Alhassan et al. (Alhassan et al., 2020) presented a review and pointed out the challenges during power line inspections. During the initial development phase of our architecture, we identified five main challenges that need to be addressed for building UAV-based autonomous monitoring systems:
- Data collection and data analysis
- Autonomous vision systems for UAVs to perform real-time inspection
- Suitable SBDs with a sufficiently strong GPU to run vision-based DL models for real-time on-board inspection
- Communication and mission control systems for BVLOS UAV systems
- Deep integration of path planning and control systems in a visual UAV-based inspection system
In this paper, we focus on the first three challenges, which are essential for designing and implementing an autonomous visual real-time on-board inspection system. We will discuss the remaining two challenges in future work. Note, though, that in particular the fifth point depends on the ability to run DL models on board and in real time to achieve a deep integration, e.g., by reusing component and fault detection results for adapting the path planning and the further collection of images. To address the first three challenges, we collected image data for different components and faults. Then we trained DL models on the collected and partially human-labelled images. Experiments on different hardware during the field test phase show that the proposed architecture can provide a solid base for building autonomous power line inspection systems. Figure 2 shows the flowchart of our proposed architecture for autonomous power line inspection.
The remainder of this paper is structured as follows: Section 3 presents relevant related work on different DL-based classification techniques for object detection. Our proposed autonomous power line inspection architecture and experimental results are described in Section 4. Finally, we conclude and outline further work in Section 5.
Figure 2: Proposed architecture for UAV-based on-board inspection and autonomous vision system. The training phase covers data collection, labelling of the different classes for feature extraction, and DNN training on GPU hardware; the detection phase freezes the trained weights and network configuration into a graph model, optimizes the frozen graph with TensorRT, and runs it on a single-board GPU on the UAV.
3 RELATED WORK
3.1 DL-based Object Classification and Detection Models
During the last few years, Convolutional Neural Networks (CNNs) have been used for different computer vision tasks such as object detection (Zou et al., 2019) as well as image classification and semantic segmentation (Li et al., 2016).
The CNN model has been improved in a variety of ways, and state-of-the-art object detection algorithms based on CNNs are flourishing. As CNN models are computationally rather expensive, it is not straightforward to use them in real-time image processing, in particular in situations where computational resources are limited, such as SBDs on board of UAVs. In this section, we briefly discuss some CNN frameworks for the classification and detection of objects relevant to the work presented in this paper.
R-CNN: To improve computational efficiency, Girshick et al. (Girshick et al., 2014) proposed the R-CNN method for object detection. In this method, region proposals are obtained by selective search, and then features are extracted with a CNN. A classifier based on support vector machines is used to classify these features. Finally, patches are optimized by bounding box regression. Figure 3 shows the flow chart of the R-CNN model (reproduced from (Girshick et al., 2014)).
Figure 3: Flow chart of the R-CNN model: input image, selective search of region proposals, feature classification with SVM, and bounding box regression for optimizing patches.
ResNet: The training of CNNs remained challenging in the above-mentioned variants of CNNs. To ease the training of neural networks, He et al. (He et al., 2016) introduced the Deep Residual Network (ResNet) model. In this method, shortcut connections between the standard CNN layers allow the gradient signal to travel back directly from later layers to early layers. During the learning phase, this connection scheme allows network models to train successfully even with 152 layers. Figure 4 shows the residual learning model (reproduced from (He et al., 2016)).
Figure 4: Residual learning model: the output F(x) of two stacked weight layers (with ReLU) is added to the identity shortcut x, yielding F(x) + x before the final ReLU.
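To make the shortcut-connection idea concrete, the following is a minimal sketch of a residual block in PyTorch. It is an illustration only (the paper itself provides no code), and the layer sizes are arbitrary assumptions:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: two 3x3 conv layers plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                              # the shortcut connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                      # F(x) + x: gradients flow back directly
        return self.relu(out)

Because the shortcut is a plain addition, the gradient of the loss reaches early layers without passing through the weight layers, which is what makes very deep networks (e.g., 152 layers) trainable.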
R-FCN: Dai et al. (Dai et al., 2016) introduced a new method based on Region-based Fully Convolutional Networks (R-FCNs) to address existing issues in R-CNN-based network architectures. As an alternative to applying region-level feature extraction, R-FCN adopts an FCN (Fully Convolutional Network) architecture to share the computations across the image. In this method, position-sensitive score maps are obtained for classification and detection in a single evaluation. R-FCN performs 2.5-20 times faster and achieves higher accuracy than Faster R-CNN. Figure 5 shows the architecture of R-FCN (reproduced from (Dai et al., 2016)).
Figure 5: Region-based Fully Convolutional Network model: convolutional feature maps feed RPN-based RoI extraction; per-RoI pooling over position-sensitive score maps is followed by a vote.
YOLO: Redmon et al. (Redmon et al., 2016) proposed the real-time object detection algorithm YOLO (You Only Look Once). This model unifies region classification proposals into a single neural network that predicts the bounding boxes and class probabilities. A single image is divided into S × S grid cells, and detection is performed in a single evaluation. This unique network structure makes YOLO much faster than the aforementioned algorithms.
To improve accuracy, the YOLOv2 (Redmon and Farhadi, 2016) model was proposed, in which features such as direct location prediction, a high-resolution classifier, fine gradients, and dimension clustering were added to the YOLO network. The authors introduced batch normalization and direct location prediction and replaced the fully connected layer with a convolutional layer to speed up the training and detection process. In 2018, Redmon et al. introduced YOLOv3 (Redmon and Farhadi, 2018) to further improve the accuracy of YOLO, adding more layers and features to the network. In this paper, for building the autonomous vision part of the proposed UAV-based inspection architecture, we have taken into consideration both speed and accuracy. For these reasons, we have used the DL model of YOLOv3 (Redmon and Farhadi, 2018) for training the autonomous vision system.
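As a hedged illustration of the single-evaluation detection scheme described above, the sketch below runs a trained YOLOv3 network with OpenCV's DNN module. The file names ("yolov3.cfg", "yolov3.weights", "pylon.jpg") are placeholders, not the actual files used in this work:

import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
out_names = net.getUnconnectedOutLayersNames()

image = cv2.imread("pylon.jpg")
# Scale pixel values to [0, 1] and resize to the standard 416x416 input.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(out_names)

# Each detection row holds [cx, cy, w, h, objectness, per-class scores...].
h, w = image.shape[:2]
for output in outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:  # confidence threshold
            cx, cy = int(det[0] * w), int(det[1] * h)
            print(f"class {class_id} detected around ({cx}, {cy})")

Note how all grid cells and anchors are evaluated in one forward pass; only thresholding and (in practice) non-maximum suppression follow as post-processing.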
4 PROPOSED ARCHITECTURE FOR REAL-TIME ON-BOARD VISUAL INSPECTION
Our architecture for the autonomous vision system for power line inspection is based on the following three main components:
- Collection and pre-analysis of a dataset.
- Application of DL algorithms for training, testing, and analysis of the dataset.
- Selection of suitable SBDs for running the inference in real-time on board the UAV.
In this section, we discuss and summarize relevant data and classes of components and faults for training the autonomous detection algorithms. Then we discuss the DL model for autonomous inspection. We summarize different SBDs and their compatibility with respect to their application for UAV-based real-time inspection. We also highlight the advantages of using SBDs with GPUs embedded within UAVs.
4.1 Data Collection and Pre-analysis
Data collection and labelling the components of different classes for training the DNN model is challenging because there are no publicly available datasets. After comprehensively reviewing different data sources and the structure of the components, we have built a custom dataset with the help of an inspection company that is a collaborator in the Drones4Energy project. While collecting the image dataset, we have considered different types of components and faults as separate classes according to their appearance under different background, angle, lighting, and weather conditions. Figure 1 shows different components of power lines and their related faults, reproduced from (Jenssen et al., 2018). In our dataset, we have concentrated on five relevant classes to keep the time consumption for labelling by human experts at a reasonable level.
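For context, darknet-style training of YOLO models expects one annotation line per object in the form "class cx cy w h", with all coordinates normalized to [0, 1]. The sketch below converts a pixel bounding box into this format; the class list is a hypothetical stand-in, since the paper does not name the five classes used:

# Hypothetical class list for illustration; the actual five classes differ.
CLASSES = ["insulator", "vibration_dampener", "nutbolt", "toppad", "bird_nest"]

def to_yolo_label(class_name, box, img_w, img_h):
    """box = (x_min, y_min, x_max, y_max) in pixels; returns a darknet label line."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2 / img_w   # normalized box center
    cy = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w        # normalized box size
    h = (y_max - y_min) / img_h
    return f"{CLASSES.index(class_name)} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Example: an insulator bounding box in a 1920x1080 image
print(to_yolo_label("insulator", (640, 300, 900, 620), 1920, 1080))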
4.2 Suitable SBDs for UAV-based Real-time On-board Inspections
Real-time on-board autonomous monitoring systems for power lines consist of two main components: DL algorithms and UAVs with embedded SBDs to run the DL algorithms. Taking into consideration the required computational power, we have trained the DL models using more powerful hardware than is available for the detection process, as discussed in the following.
Nvidia Tesla V100: Training DL models in reasonable time requires rather powerful GPUs. We have used an Nvidia Tesla V100 with 32 GB of RAM through the CUDA 10 framework. In this GPU, each streaming multiprocessor is partitioned into four processing blocks. Each block consists of two Tensor Cores, 16 FP32 cores, 8 FP64 cores, 16 INT32 cores, and one Special Function Unit (SFU). This GPU has a total of 640 Tensor Cores, which jointly can accelerate the DL framework up to 125 TFLOPS (Markidis et al., 2018).
We have used a variety of different SBDs to perform inference with the DL model in order to assess and compare the real-time performance. Figure 6 shows the different SBDs that have been considered as candidates for being embedded in UAVs.
Figure 6: Single-board hardware for UAVs to run the real-time autonomous algorithm. From left to right: (a) Raspberry Pi 4, (b) Nvidia Jetson Nano, (c) Nvidia Jetson TX2, (d) Nvidia Jetson AGX Xavier, and (e) UAV with Jetson TX2.
Raspberry Pi 4: The Raspberry Pi 4 is the cheapest option among the SBDs considered. This board is built with a 64-bit Broadcom VideoCore VI GPU and a quad-core Cortex-A72 (ARM v8) CPU and has 4 GB of RAM (see Figure 6(a)).
Nvidia Jetson Nano: The Jetson Nano is a small, more powerful SBD developed by Nvidia (see Figure 6(b)). It features a Maxwell architecture-based GPU and a quad-core ARM Cortex-A57 CPU and, like the Raspberry Pi 4, has 4 GB of RAM. The GPU comes with a total of 128 CUDA cores, which can accelerate the DL framework up to 0.5 TFLOPS.
Nvidia Jetson TX2: The Jetson TX2 is an even more powerful SBD. Nvidia has equipped it with a more modern Pascal architecture-based GPU (see Figure 6(c)). The Jetson TX2 has two CPUs, a dual-core Nvidia Denver and a quad-core ARM Cortex-A57, sharing 8 GB of RAM. The GPU comes with a total of 256 CUDA cores, which can accelerate the DL framework up to 1.3 TFLOPS.
Nvidia Jetson AGX Xavier: Nvidia's flagship SBD, the AGX Xavier, is one of the most powerful SBDs on the market (see Figure 6(d)). It has a Volta architecture-based GPU with a total of 512 CUDA cores and 64 Tensor Cores. It also has an 8-core Nvidia Carmel ARM v8.2 64-bit CPU. The AGX Xavier can reach up to 32 TOPS and is specifically designed for running inference on DL models in real-time environments. The board can work in a number of different power modes, which gives the user the possibility to select the number of working CPU cores and, thereby, to control the power consumption of the SBD.
4.3 Autonomous DL Algorithm for Real-time Inspection
For training the autonomous algorithm, we have used the network model of YOLOv3 (darknet-53) proposed by Redmon et al. (Redmon and Farhadi, 2018). We also train the YOLOv3-tiny (darknet-19) model for performance comparisons. YOLOv3-tiny has fewer convolutional layers than YOLOv3, which improves its suitability for real-time processing but reduces the accuracy somewhat. Concerning the parameters, we use the default configurations of both YOLOv3 and YOLOv3-tiny. We set the momentum of the stochastic gradient descent to 0.9 and the learning rate to 0.001. The weight decay is set to 0.005. Concerning the scaling of the images for training the network, we set the image size to 608, 416 (standard), and 288. The batch size is set to 64 to improve the utilization of the GPU and its memory. During training, we ignore the anchors that drop below the threshold value. As training needs a rather powerful GPU, we have used the Nvidia Tesla V100 GPU to train the DL models. To run the inference faster on the SBDs, we accelerate the DL algorithm using the TensorRT library. TensorRT is a DL library introduced by Nvidia that optimizes the trained weights. For optimization, the trained weights are frozen, and then these frozen weights are optimized with TensorRT. The optimized weights run much faster than non-optimized weights (compare Tables 1 and 2).
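As a rough illustration of this optimization step, the sketch below builds a TensorRT engine from a network that has already been exported to ONNX. This is an assumption-laden sketch (the file names "yolov3.onnx" and "yolov3.trt" are placeholders, and the API shown follows TensorRT 7-era Python bindings), not the exact pipeline used in this work:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="yolov3.onnx", engine_path="yolov3.trt"):
    # Explicit-batch network definition, as required for ONNX parsing.
    builder = trt.Builder(TRT_LOGGER)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28    # 256 MiB of scratch space
    config.set_flag(trt.BuilderFlag.FP16)  # half precision speeds up Jetson GPUs

    engine = builder.build_engine(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())        # serialized engine for on-board deployment
    return engine

The serialized engine is what runs on the SBD: layer fusion and reduced precision are applied once at build time, which is where the speed-up reported in Tables 2 and 3 comes from.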
4.4 Experimental Results and Discussion
For the evaluation of the real-time image processing, we have used different SBDs, which can be embedded as part of UAVs. Tables 1, 2, and 3 show the experimental evaluation in terms of frames per second (FPS) when using different hardware for running real-time inference with the DL algorithms. Table 1 shows the FPS during real-time processing with YOLOv3-tiny on different hardware. YOLOv3-tiny runs much faster than the YOLOv3 model (compare Tables 2 and 3), but it provides somewhat less accuracy during inspection. The accuracy drops even more after weight optimization. We can clearly see that the processing of the algorithm with non-optimized weights is too slow for real-world implementation (see Table 1), even when only using YOLOv3-tiny. We could not run inference with the full YOLOv3 model on all SBDs due to excessive memory demands.
Figure 7: Results of detecting different components of power line pylons.
Table 1: FPS on different scales of images during real-time detection (YOLOv3-tiny without weight optimization).

SBD / Input size            288    416    608
Raspberry Pi 4                3      1    0.2
Nvidia Jetson Nano          3.4    1.2    0.5
Nvidia Jetson TX2            20     17     10
Nvidia Jetson AGX Xavier     30   21.6     14
Table 2: FPS for YOLOv3-tiny on different scales of images during real-time detection with optimized weights.

SBD / Input size            288    416    608
Nvidia Jetson Nano           22     15    4.5
Nvidia Jetson TX2            25     19     12
Nvidia Jetson AGX Xavier     50     32     22
Table 3: FPS for YOLOv3 on different scales of images during real-time detection with optimized weights.

SBD / Input size            288    416    608
Nvidia Jetson Nano         5.28      3   1.45
Nvidia Jetson TX2          11.4    6.4      3
Nvidia Jetson AGX Xavier     24     17     11
After optimizing the weights with the TensorRT library, the performance on the Nvidia Jetson TX2 and the Nvidia Jetson AGX Xavier improves significantly (see Tables 2 and 3).
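For reference, throughput figures like those in Tables 1-3 can be obtained with a simple timing loop. The sketch below is a generic illustration; run_inference is a hypothetical stand-in for the detector call on the SBD:

import time

def measure_fps(run_inference, frames, warmup=5):
    """Average FPS over `frames`, ignoring a few warm-up iterations
    so that GPU clocks and caches have stabilized."""
    for frame in frames[:warmup]:
        run_inference(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        run_inference(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

Discarding warm-up frames matters on Jetson boards in particular, since their GPU clocks scale dynamically and the first few inferences are not representative.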
A DL algorithm for an autonomous vision system should have both properties: accuracy and real-time suitability. Hence, as a result of our experimental evaluation, we choose the full YOLOv3 model with an Nvidia Jetson AGX Xavier for the real-world implementation of our vision system in the inspection architecture. The Jetson AGX Xavier can run the YOLOv3 algorithm at up to 17 FPS on images scaled to 416 pixels with good accuracy. Such a frame rate is acceptable for real-time processing and allows the drone control software to react to the results of the detection without undue delay. The experiments were performed by setting all SBDs to maximum performance mode (nvpmodel), i.e., all CPU and GPU cores were enabled at full speed. Figure 7 shows examples of the detection results for different classes of components of power lines.
5 CONCLUSIONS
In this study, we have proposed and evaluated the use of autonomous DL algorithms running in real-time on board UAVs for the purpose of visual power line inspection. We have compared the results between different SBDs that can be embedded in UAVs for running inference with powerful DL models. We have also compared real-time results between YOLOv3-tiny (darknet-19) and YOLOv3 (darknet-53) and investigated the real-world impact of weight optimization using TensorRT.
We have given a theoretical description of the different algorithms and SBDs and then practically implemented each algorithm on these SBDs. We found that YOLOv3 performs at an acceptable level in terms of accuracy and real-time processing on the Nvidia Jetson AGX Xavier, which therefore will constitute the SBD for the Drones4Energy project.
In the future, we will extend our work by considering the remaining two challenges: performing multiple tasks on a single UAV regarding path planning, communication, and control, as well as autonomously
sending and further processing the results of component and fault detection through a cloud service.
ACKNOWLEDGEMENTS
The presented research was supported by the Innovation Fund Denmark, Grand Solutions, under grant agreement No. 8057-00038A, the Drones4Energy project (https://drones4energy.dk/).
REFERENCES
Agnisarman, S., Lopes, S., Madathil, K. C., Piratla, K., and
Gramopadhye, A. (2019). A survey of automation-
enabled human-in-the-loop systems for infrastructure
visual inspection. Automation in Construction, 97:52
– 76.
Al-Kaff, A., Martín, D., García, F., de la Escalera, A., and Armingol, J. M. (2018). Survey of computer vision algorithms and applications for unmanned aerial vehicles. Expert Systems with Applications, 92:447–463.
Alhassan, A. B., Zhang, X., Shen, H., and Xu, H. (2020).
Power transmission line inspection robots: A review,
trends and challenges for future research. Interna-
tional Journal of Electrical Power & Energy Systems,
118:105862.
Azevedo, F., Dias, A., Almeida, J., Oliveira, A., Fer-
reira, A., Santos, T., Martins, A., and Silva, E.
(2019). Lidar-based real-time detection and modeling
of power lines for unmanned aerial vehicles. Sensors,
19(8):1812.
Bühringer, M., Berchtold, J., Büchel, M., Dold, C., Bütikofer, M., Feuerstein, M., Fischer, W., Bermes, C., and Siegwart, R. (2010). Cable-crawler–robot for the inspection of high-voltage power lines that can passively roll over mast tops. Industrial Robot: An International Journal.
Contreras-Cruz, M. A., Ramirez-Paredes, J. P., Hernandez-
Belmonte, U. H., and Ayala-Ramirez, V. (2019).
Vision-based novelty detection using deep features
and evolved novelty filters for specific robotic explo-
ration and inspection tasks. Sensors, 19(13):2965.
Dai, J., Li, Y., He, K., and Sun, J. (2016). R-FCN: object de-
tection via region-based fully convolutional networks.
CoRR, abs/1605.06409.
Gao, Z., Yang, G., Li, E., Shen, T., Wang, Z., Tian, Y.,
Wang, H., and Liang, Z. (2019). Insulator segmenta-
tion for power line inspection based on modified con-
ditional generative adversarial network. Journal of
Sensors, 2019.
Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014).
Rich feature hierarchies for accurate object detec-
tion and semantic segmentation. In Proceedings of
the IEEE conference on computer vision and pattern
recognition, pages 580–587.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep resid-
ual learning for image recognition. In The IEEE Con-
ference on Computer Vision and Pattern Recognition
(CVPR).
Jenssen, R., Roverso, D., et al. (2018). Automatic au-
tonomous vision-based power line inspection: A re-
view of current status and the potential role of deep
learning. International Journal of Electrical Power &
Energy Systems, 99:107–120.
Jenssen, R., Roverso, D., et al. (2019). Intelligent monitor-
ing and inspection of power line components powered
by uavs and deep learning. IEEE Power and Energy
Technology Systems Journal, 6(1):11–21.
Li, Y., Hao, Z., and Lei, H. (2016). Survey of convolutional
neural network. Journal of Computer Applications,
36(9):2508–2515.
Markidis, S., Der Chien, S. W., Laure, E., Peng, I. B., and
Vetter, J. S. (2018). Nvidia tensor core programmabil-
ity, performance & precision. In 2018 IEEE Interna-
tional Parallel and Distributed Processing Symposium
Workshops (IPDPSW), pages 522–531. IEEE.
Nguyen, V. N. (2019). Advancing deep learning for auto-
matic autonomous vision-based power line inspection.
Redmon, J., Divvala, S., Girshick, R., and Farhadi, A.
(2016). You only look once: Unified, real-time ob-
ject detection. In The IEEE Conference on Computer
Vision and Pattern Recognition (CVPR).
Redmon, J. and Farhadi, A. (2016). Yolo9000: Better,
faster, stronger. arXiv preprint arXiv:1612.08242.
Redmon, J. and Farhadi, A. (2018). Yolov3: An incremental
improvement. CoRR, abs/1804.02767.
Takaya, K., Ohta, H., Kroumov, V., Shibayama, K., and
Nakamura, M. (2019). Development of uav system
for autonomous power line inspection. In 2019 23rd
International Conference on System Theory, Control
and Computing (ICSTCC), pages 762–767. IEEE.
Zhou, G., Yuan, J., Yen, I.-L., and Bastani, F. (2016).
Robust real-time uav based power line detection and
tracking. In 2016 IEEE International Conference on
Image Processing (ICIP), pages 744–748. IEEE.
Zhou, M., Li, K., Wang, J., Li, C., Teng, G., Ma, L., Wu, H.,
Li, W., Zhang, H., Chen, J., et al. (2019a). Automatic
extraction of power lines from uav lidar point clouds
using a novel spatial feature. ISPRS Annals of Pho-
togrammetry, Remote Sensing & Spatial Information
Sciences, 4.
Zhou, X., Fang, B., Qian, J., Xie, G., Deng, B., and Qian,
J. (2019b). Data driven faster r-cnn for transmission
line object detection. In Cyberspace Data and In-
telligence, and Cyber-Living, Syndrome, and Health,
pages 379–389. Springer.
Zou, Z., Shi, Z., Guo, Y., and Ye, J. (2019). Object detection
in 20 years: A survey. CoRR, abs/1905.05055.