
Defect Detection using Deep Learning from Minimal Annotations

Authors: Manpreet Singh Minhas and John Zelek

Affiliation: Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada

Keyword(s): Defect Detection, CNNs, Transfer Learning, Deep Learning.

Abstract: Visual defect assessment is an important task in infrastructure asset monitoring: faults such as road distresses and bridge cracks must be recognized and tracked so that a decision can be made on the best course of action, whether that is a minor repair, a major repair, or the status quo. Because of the challenging nature of the task, this surveillance and annotation has typically been carried out manually by human operators. Manual inspection, however, has several drawbacks, including training time and cost, human bias, and subjectivity. As a result, automating visual defect detection has attracted considerable attention, and deep learning approaches are encouraging this automation. The perceptual surveillance itself can be conducted with camera-equipped land vehicles or drones. Automatic defect detection can be formulated as an anomaly detection problem, in which samples that deviate from the normal, defect-free ones need to be identified. Recently, Convolutional Neural Networks (CNNs) have shown tremendous potential in image-related tasks and have outperformed traditional hand-crafted, feature-based methods. However, CNNs require a large amount of labelled data, which is rarely available in practical applications and is a major drawback. This paper proposes network-based transfer learning with CNNs for visual defect detection, which overcomes the challenge of training from a limited number of samples. Results show that the proposed method achieves high performance from limited data, with average F1 score and AUROC values of 0.8914 and 0.9766 respectively. The number of training defect samples was as low as 20 images for the Fray category of the Magnetic Tile defect dataset.
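The paper itself is not reproduced here, so the following is only a minimal sketch of what network-based transfer learning for defect classification can look like in PyTorch, assuming an ImageNet-pretrained backbone fine-tuned on a small labelled defect set. The dataset path, class folders, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: fine-tune an ImageNet-pretrained ResNet-18 as a
# binary defect / defect-free classifier. The "data/magnetic_tile" layout
# and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing so the pretrained weights remain valid.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Expects class sub-folders, e.g. data/magnetic_tile/train/{defect,defect_free}
train_set = datasets.ImageFolder("data/magnetic_tile/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Network-based transfer learning: reuse the pretrained convolutional
# features and replace only the final fully connected layer for 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

In this sketch all layers remain trainable, i.e. full fine-tuning; when the labelled defect set is very small, a common variant is to freeze the earlier convolutional layers and train only the replaced classification head.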

CC BY-NC-ND 4.0


Paper citation in several formats:
Minhas, M. and Zelek, J. (2020). Defect Detection using Deep Learning from Minimal Annotations. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020) - Volume 4: VISAPP; ISBN 978-989-758-402-2; ISSN 2184-4321, SciTePress, pages 506-513. DOI: 10.5220/0009168005060513

@conference{visapp20,
author={Manpreet Singh Minhas and John Zelek},
title={Defect Detection using Deep Learning from Minimal Annotations},
booktitle={Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020) - Volume 4: VISAPP},
year={2020},
pages={506-513},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0009168005060513},
isbn={978-989-758-402-2},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020) - Volume 4: VISAPP
TI - Defect Detection using Deep Learning from Minimal Annotations
SN - 978-989-758-402-2
IS - 2184-4321
AU - Minhas, M.
AU - Zelek, J.
PY - 2020
SP - 506
EP - 513
DO - 10.5220/0009168005060513
PB - SciTePress
ER -