Color-based and Rotation Invariant Self-similarities

Xiaohu Song, Damien Muselet, Alain Tremeau

2017

Abstract

One major challenge in computer vision is to extract robust and discriminative local descriptors. For many applications such as object tracking, image classification or image matching, appearance-based descriptors such as SIFT or learned CNN features provide very good results. But for other applications such as multimodal image comparison (infrared versus color, color versus depth, ...), these descriptors fail and people resort to the spatial distribution of self-similarities. The idea is to encode the similarities between local regions in an image rather than the appearances of these regions at the pixel level. Nevertheless, classical self-similarities are not invariant to rotation in the image space, so two rotated versions of a local patch are not considered similar, and much discriminative information is lost because of this weakness. In this paper, we present a method to extract rotation-invariant self-similarities. To this end, we propose to compare color descriptors of the local regions rather than the local regions themselves. Furthermore, since this comparison informs us about the relative orientation of the two regions, we incorporate this information in the final image descriptor in order to increase the discriminative power of the system. We show that the self-similarities extracted in this way are very discriminative.
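For illustration, below is a minimal Python sketch contrasting the classical patch-correlation self-similarity (Shechtman and Irani, 2007) with the color-descriptor comparison suggested in the abstract. The function and parameter names (patch, color_histogram, patch_r, corr_r) and the use of a joint RGB histogram as the rotation-invariant region descriptor are illustrative assumptions, not the paper's actual descriptor, and the relative-orientation encoding mentioned above is not covered here.

import numpy as np

def patch(img, y, x, r):
    """Extract the (2r+1)x(2r+1) patch centered at (y, x).
    Assumes (y, x) lies far enough from the image border."""
    return img[y - r:y + r + 1, x - r:x + r + 1]

def ssd_similarity(p, q, var_noise=25.0):
    """Classical self-similarity: exp(-SSD / normalization) between raw patches."""
    ssd = np.sum((p.astype(float) - q.astype(float)) ** 2)
    return np.exp(-ssd / (p.size * var_noise))

def color_histogram(p, bins=8):
    """Rotation-invariant region descriptor (assumption: a joint RGB histogram).
    A histogram discards pixel positions, so rotating the region leaves it unchanged."""
    h, _ = np.histogramdd(p.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    h = h.ravel()
    return h / (h.sum() + 1e-12)

def self_similarity_map(img, yc, xc, patch_r=2, corr_r=20, use_color_desc=True):
    """Compare the central region around (yc, xc) with every region in the
    surrounding (2*corr_r+1)^2 window and return the similarity map."""
    size = 2 * corr_r + 1
    out = np.zeros((size, size))
    center = patch(img, yc, xc, patch_r)
    center_desc = color_histogram(center) if use_color_desc else None
    for dy in range(-corr_r, corr_r + 1):
        for dx in range(-corr_r, corr_r + 1):
            p = patch(img, yc + dy, xc + dx, patch_r)
            if use_color_desc:
                # Compare rotation-invariant descriptors instead of raw pixels;
                # the 0.1 bandwidth is an arbitrary choice for this sketch.
                d = np.sum((color_histogram(p) - center_desc) ** 2)
                out[dy + corr_r, dx + corr_r] = np.exp(-d / 0.1)
            else:
                out[dy + corr_r, dx + corr_r] = ssd_similarity(center, p)
    return out

# Example: self-similarity map at the center of a random color image.
img = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
ssm = self_similarity_map(img, 50, 50)

In a full descriptor, such a local map would typically be binned (e.g. on a log-polar grid) and concatenated over a dense set of keypoints; the sketch only shows where the rotation invariance enters, namely in the region comparison itself.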

References

  1. Chatfield, K., Philbin, J., and Zisserman, A. (2009). Efficient retrieval of deformable shape classes using local self-similarities. In NORDIA workshop in conjunction with ICCV.
  2. Yang, C.-Y., Huang, J.-B., and Yang, M.-H. (2011). Exploiting self-similarities for single frame super-resolution. In Proceedings of the 10th Asian Conference on Computer Vision - Volume Part III, pages 497-510, Berlin, Heidelberg. Springer-Verlag.
  3. Deselaers, T. and Ferrari, V. (2010). Global and efficient self-similarity for object classification and detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA. IEEE Computer Society.
  4. Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html.
  5. Glasner, D., Bagon, S., and Irani, M. (2009). Super-resolution from a single image. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
  6. Kim, S., Min, D., Ham, B., Ryu, S., Do, M. N., and Sohn, K. (2015). DASC: Dense adaptive self-correlation descriptor for multi-modal and multi-spectral correspondence. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2103-2112.
  7. Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Pereira, F., Burges, C., Bottou, L., and Weinberger, K., editors, Advances in Neural Information Processing Systems 25, pages 1097-1105. Curran Associates, Inc.
  8. Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), volume 2, pages 1150-1157. IEEE Computer Society.
  9. Shechtman, E. and Irani, M. (2007). Matching local self-similarities across images and videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  10. Song, X., Muselet, D., and Tremeau, A. (2009). Local color descriptor for object recognition across illumination changes. In ACIVS09, pages 598-605, Bordeaux (France).
  11. van de Weijer, J. and Schmid, C. (2006). Coloring local feature extraction. In Proceedings of the European Conference on Computer Vision (ECCV), volume 3952 of Lecture Notes in Computer Science, pages 334-348.
  12. Wang, J., Lu, K., Pan, D., He, N., and Bao, B.-K. (2014). Robust object removal with an exemplar-based image inpainting approach. Neurocomputing, 123:150-155.
  13. Wang, Z., Yang, Y., Wang, Z., Chang, S., Han, W., Yang, J., and Huang, T. S. (2015). Self-tuned deep super resolution. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
  14. Zontak, M. and Irani, M. (2011). Internal statistics of a single natural image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).


Paper Citation


in Harvard Style

Song X., Muselet D. and Tremeau A. (2017). Color-based and Rotation Invariant Self-similarities. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017) ISBN 978-989-758-225-7, pages 344-351. DOI: 10.5220/0006107503440351


in Bibtex Style

@conference{visapp17,
author={Xiaohu Song and Damien Muselet and Alain Tremeau},
title={Color-based and Rotation Invariant Self-similarities},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)},
year={2017},
pages={344-351},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006107503440351},
isbn={978-989-758-225-7},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2017)
TI - Color-based and Rotation Invariant Self-similarities
SN - 978-989-758-225-7
AU - Song X.
AU - Muselet D.
AU - Tremeau A.
PY - 2017
SP - 344
EP - 351
DO - 10.5220/0006107503440351