SEMI-SUPERVISED DISTANCE METRIC LEARNING FOR VISUAL OBJECT CLASSIFICATION

Hakan Cevikalp, Roberto Paredes

2009

Abstract

This paper describes a semi-supervised distance metric learning algorithm which uses pairwise equivalence (similarity and dissimilarity) constraints to discover the desired groups within high-dimensional data. As opposed to traditional full-rank distance metric learning algorithms, the proposed method can learn nonsquare projection matrices that yield low-rank distance metrics. This brings additional benefits such as visualization of data samples and reduced storage cost, and it is more robust to overfitting since the number of estimated parameters is greatly reduced. Our method works in both the input space and the kernel-induced feature space, and the distance metric is found by a gradient descent procedure that involves an eigen-decomposition in each step. Experimental results on high-dimensional visual object classification problems show that the computed distance metric improves the performance of the subsequent clustering algorithm.
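The abstract describes the general recipe: learn a low-rank Mahalanobis metric from pairwise similarity/dissimilarity constraints by gradient descent, with an eigen-decomposition at each step to keep the metric positive semi-definite and of low rank. The paper's exact objective function is not reproduced here, so the sketch below is only illustrative: it assumes a simple contrastive loss (similar pairs pulled together, dissimilar pairs pushed beyond a margin) and hypothetical names such as `learn_low_rank_metric`, `sim_pairs`, and `dis_pairs`.

```python
import numpy as np

def learn_low_rank_metric(X, sim_pairs, dis_pairs, rank=2, margin=1.0,
                          lr=0.1, n_iter=100):
    """Sketch: learn a low-rank metric M = A A^T from pairwise equivalence
    constraints via projected gradient descent (assumed contrastive loss).

    X         : (n, d) data matrix
    sim_pairs : list of (i, j) index pairs labelled 'similar'
    dis_pairs : list of (i, j) index pairs labelled 'dissimilar'
    rank      : desired rank of the learned metric (columns of A)
    """
    d = X.shape[1]
    M = np.eye(d)  # start from the Euclidean metric

    for _ in range(n_iter):
        grad = np.zeros((d, d))

        # pull similar pairs together: loss = diff^T M diff
        for i, j in sim_pairs:
            diff = (X[i] - X[j])[:, None]
            grad += diff @ diff.T

        # push dissimilar pairs apart: hinge loss max(0, margin - diff^T M diff)
        for i, j in dis_pairs:
            diff = (X[i] - X[j])[:, None]
            if float(diff.T @ M @ diff) < margin:
                grad -= diff @ diff.T

        M -= lr * grad

        # projection step: eigen-decompose M and keep only the top `rank`
        # non-negative eigenvalues, giving a low-rank PSD metric
        w, V = np.linalg.eigh(M)
        idx = np.argsort(w)[::-1][:rank]
        w, V = np.clip(w[idx], 0.0, None), V[:, idx]
        M = (V * w) @ V.T

    # factor M = A A^T so that A maps data into a `rank`-dimensional space
    A = V * np.sqrt(w)          # (d, rank)
    return A
```

Projecting the data as `X_low = X @ A` then gives a low-dimensional embedding in which Euclidean distances equal the learned metric, which is what makes visualization and a subsequent clustering step (e.g. k-means or spectral clustering) straightforward.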



Paper Citation


in Harvard Style

Cevikalp H. and Paredes R. (2009). SEMI-SUPERVISED DISTANCE METRIC LEARNING FOR VISUAL OBJECT CLASSIFICATION. In Proceedings of the Fourth International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2009), ISBN 978-989-8111-69-2, pages 315-322. DOI: 10.5220/0001768903150322


in Bibtex Style

@conference{visapp09,
author={Hakan Cevikalp and Roberto Paredes},
title={SEMI-SUPERVISED DISTANCE METRIC LEARNING FOR VISUAL OBJECT CLASSIFICATION},
booktitle={Proceedings of the Fourth International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2009)},
year={2009},
pages={315-322},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0001768903150322},
isbn={978-989-8111-69-2},
}


in EndNote Style

TY - CONF
JO - Proceedings of the Fourth International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2009)
TI - SEMI-SUPERVISED DISTANCE METRIC LEARNING FOR VISUAL OBJECT CLASSIFICATION
SN - 978-989-8111-69-2
AU - Cevikalp H.
AU - Paredes R.
PY - 2009
SP - 315
EP - 322
DO - 10.5220/0001768903150322