STaDA: Style Transfer as Data Augmentation

Authors: Xu Zheng¹; Tejo Chalasani²; Koustav Ghosal²; Sebastian Lutz² and Aljosa Smolic²

Affiliations: ¹ Accenture Labs, Dublin, Ireland and School of Computer Science and Statistics, Trinity College Dublin, Ireland; ² School of Computer Science and Statistics, Trinity College Dublin, Ireland

Keyword(s): Neural Style Transfer, Data Augmentation, Image Classification.

Related Ontology Subjects/Areas/Topics: Color and Texture Analyses ; Computer Vision, Visualization and Computer Graphics ; Features Extraction ; Image and Video Analysis ; Image Formation and Preprocessing ; Image Generation Pipeline: Algorithms and Techniques

Abstract: The success of training deep Convolutional Neural Networks (CNNs) heavily depends on a significant amount of labelled data. Recent research has found that neural style transfer algorithms can apply the artistic style of one image to another image without changing the latter’s high-level semantic content, which makes it feasible to employ neural style transfer as a data augmentation method to add more variation to the training dataset. The contribution of this paper is a thorough evaluation of the effectiveness of neural style transfer as a data augmentation method for image classification tasks. We explore state-of-the-art neural style transfer algorithms and apply them as a data augmentation method on the Caltech 101 and Caltech 256 datasets, where we found an improvement of around 2% (from 83% to 85%) in image classification accuracy with VGG16, compared with traditional data augmentation strategies. We also combine this new method with conventional data augmentation approaches to further improve the performance of image classification. This work shows the potential of neural style transfer in the computer vision field, such as helping to reduce the difficulty of collecting sufficient labelled data and improving the performance of generic image-based deep learning algorithms.
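
As context for the abstract, the following is a minimal sketch (not the authors' implementation) of how a style-transfer step might be slotted into a standard PyTorch training pipeline for Caltech 101 with VGG16. Here apply_style_transfer is a hypothetical placeholder for any pretrained fast style-transfer network; the probability p, the 50/50 mix of stylised and original images, and the choice of conventional transforms are illustrative assumptions, not values from the paper.

import random
import torch
from torchvision import datasets, models, transforms

def apply_style_transfer(img):
    # Hypothetical hook: replace with a pretrained style-transfer network
    # that stylises the PIL image with a randomly chosen artistic style.
    return img  # identity placeholder so the sketch runs end to end

class RandomStyleAugment:
    # Stylise a training image with probability p and leave the rest
    # unchanged, so each epoch mixes stylised and original content.
    def __init__(self, p=0.5):
        self.p = p
    def __call__(self, img):
        return apply_style_transfer(img) if random.random() < self.p else img

train_tf = transforms.Compose([
    transforms.Lambda(lambda img: img.convert("RGB")),  # some Caltech images are grayscale
    RandomStyleAugment(p=0.5),          # style transfer as augmentation
    transforms.Resize((224, 224)),      # VGG16 input size
    transforms.RandomHorizontalFlip(),  # conventional augmentation, combined as in the paper
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.Caltech101(root="data", download=True, transform=train_tf)
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = torch.nn.Linear(4096, 101)  # 101 Caltech object categories

Because the augmentation is expressed as an ordinary transform, it composes directly with conventional strategies such as flips and crops, mirroring the combined setup evaluated in the paper.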

CC BY-NC-ND 4.0


Paper citation in several formats:
Zheng, X.; Chalasani, T.; Ghosal, K.; Lutz, S. and Smolic, A. (2019). STaDA: Style Transfer as Data Augmentation. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 4: VISAPP; ISBN 978-989-758-354-4; ISSN 2184-4321, SciTePress, pages 107-114. DOI: 10.5220/0007353401070114

@conference{visapp19,
author={Xu Zheng and Tejo Chalasani and Koustav Ghosal and Sebastian Lutz and Aljosa Smolic},
title={STaDA: Style Transfer as Data Augmentation},
booktitle={Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 4: VISAPP},
year={2019},
pages={107-114},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0007353401070114},
isbn={978-989-758-354-4},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2019) - Volume 4: VISAPP
TI - STaDA: Style Transfer as Data Augmentation
SN - 978-989-758-354-4
IS - 2184-4321
AU - Zheng, X.
AU - Chalasani, T.
AU - Ghosal, K.
AU - Lutz, S.
AU - Smolic, A.
PY - 2019
SP - 107
EP - 114
DO - 10.5220/0007353401070114
PB - SciTePress
ER -