A Fusion Approach for Enhanced Remote Sensing Image Classification

Vian Ahmed, Khaled Jouini, Ouajdi Korbaa

2024

Abstract

Satellite imagery provides a unique and comprehensive view of the Earth's surface, enabling global-scale land cover mapping and environmental monitoring. Despite substantial advancements, satellite imagery analysis remains a highly challenging task due to intrinsic and extrinsic factors, including data volume and variability, atmospheric conditions, sensor characteristics and complex land cover patterns. Early methods in remote sensing image classification leaned on human-engineered descriptors, typified by the widely used Scale-Invariant Feature Transform (SIFT). SIFT and similar approaches had inherent limitations in directly representing entire scenes, driving the use of encoding techniques such as the Bag-of-Visual-Words (BoVW). While these encoding methods offer simplicity and efficiency, they are constrained in their representation capabilities. The rise of deep learning, fuelled by abundant data and computing power, revolutionized satellite image analysis, with Convolutional Neural Networks (CNNs) emerging as highly effective tools. Nevertheless, CNNs' extensive need for annotated data limits their scope of application. In this work, we investigate the fusion of two distinctive feature extraction methodologies, namely SIFT and CNN, within the framework of Support Vector Machines (SVM). This fusion approach seeks to harness the unique advantages of each feature extraction method while mitigating their individual limitations. SIFT excels at capturing local features critical for identifying specific image characteristics, whereas CNNs enrich representations with global context, spatial relationships and hierarchical features. Integrating SIFT and CNN features thus helps enhance resilience to perturbations and improve generalization across diverse landscapes. An additional advantage is the adaptability of this approach to scenarios with limited labelled data. Experiments on the EuroSAT dataset demonstrate that the proposed fusion approach outperforms SIFT-based and CNN-based models used separately and achieves results that are better than or comparable to notable existing approaches in remote sensing image classification.
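
The abstract outlines the core pipeline: local SIFT descriptors encoded with BoVW, global CNN features, and an SVM trained on their fusion. The following is a minimal sketch of that idea, assuming OpenCV for SIFT, a pretrained torchvision ResNet-18 as the CNN backbone, a k-means visual vocabulary and a scikit-learn RBF SVM; the backbone, vocabulary size and SVM settings are illustrative assumptions rather than the paper's reported configuration.

# Sketch of SIFT + CNN feature fusion with an SVM classifier.
# Assumptions: images are uint8 BGR arrays (as loaded by cv2.imread);
# recent OpenCV (SIFT_create), torchvision (weights API) and scikit-learn.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def sift_bovw_features(images, vocab_size=256):
    """Encode each image as a BoVW histogram over a k-means SIFT vocabulary."""
    per_image_desc = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, desc = sift.detectAndCompute(gray, None)
        # Guard against images with no detected keypoints.
        per_image_desc.append(desc if desc is not None else np.zeros((1, 128), np.float32))
    kmeans = KMeans(n_clusters=vocab_size, n_init=4).fit(np.vstack(per_image_desc))
    hists = np.zeros((len(images), vocab_size), np.float32)
    for i, desc in enumerate(per_image_desc):
        words = kmeans.predict(desc)
        np.add.at(hists[i], words, 1.0)      # count visual-word occurrences
        hists[i] /= max(hists[i].sum(), 1.0)  # L1-normalize the histogram
    return hists

# Pretrained CNN used as a fixed global feature extractor (penultimate layer).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()
preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def cnn_features(images):
    """Extract one global CNN descriptor per image."""
    feats = []
    with torch.no_grad():
        for img in images:
            rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
            feats.append(backbone(preprocess(rgb).unsqueeze(0)).squeeze(0).numpy())
    return np.array(feats, np.float32)

def train_fused_svm(images, labels):
    """Standardize each feature group, concatenate, then fit an RBF SVM."""
    fused = np.hstack([StandardScaler().fit_transform(sift_bovw_features(images)),
                       StandardScaler().fit_transform(cnn_features(images))])
    clf = SVC(kernel="rbf", C=10.0)
    clf.fit(fused, labels)
    return clf

Standardizing each feature group before concatenation keeps the lower-dimensional BoVW histograms from being dominated by the higher-dimensional CNN embedding when the SVM computes kernel distances.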



Paper Citation


in Harvard Style

Ahmed V., Jouini K. and Korbaa O. (2024). A Fusion Approach for Enhanced Remote Sensing Image Classification. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP; ISBN 978-989-758-679-8, SciTePress, pages 554-561. DOI: 10.5220/0012376600003660


in Bibtex Style

@conference{visapp24,
author={Vian Ahmed and Khaled Jouini and Ouajdi Korbaa},
title={A Fusion Approach for Enhanced Remote Sensing Image Classification},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP},
year={2024},
pages={554-561},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012376600003660},
isbn={978-989-758-679-8},
}


in EndNote Style

TY - CONF

JO - Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP
TI - A Fusion Approach for Enhanced Remote Sensing Image Classification
SN - 978-989-758-679-8
AU - Ahmed V.
AU - Jouini K.
AU - Korbaa O.
PY - 2024
SP - 554
EP - 561
DO - 10.5220/0012376600003660
PB - SciTePress