Title: Synthesizing Classifiers from Prior Knowledge

Authors: G. J. Burghouts; K. Schutte; M. Kruithof; W. Huizinga; F. Ruis and H. Kuijf

Affiliation: TNO, Intelligent Imaging, The Netherlands

Keyword(s): Classifier Synthesis, Knowledge Representation, Semantics, Attributes, Class Descriptions, Large Language Models, Zero-Shot Learning, Few-Shot Learning.

Abstract: Various good methods have been proposed for either zero-shot or few-shot learning, but these are commonly unsuited for both, whereas in practice one often starts without labels and some may become available later. We propose a method that naturally ties zero- and few-shot learning together. We initiate a zero-shot model from prior knowledge about the classes, by recombining the weights from a classification head via a linear reconstruction that is sparse to avoid overfitting. Our mapping is an explicit transfer of knowledge from known to new classes, hence it can be inspected and visualized, which is impossible with recently popular implicit prompt learning strategies. Our mapping is used to construct a classifier for the new class by adapting the neural weights of the classifiers for the known classes; effectively, we synthesize a new classifier. Our method is flexible: we show its efficacy for various knowledge representations and various neural networks (whereas prompt learning is limited to language-vision models). Our synthesized classifier can operate directly on test samples in a zero-shot fashion. We outperform CLIP especially for uncommon image classes, sometimes by margins up to 32%. Because the synthesized classifier consists of a tensor layer, it can be optimized further when (a few) labeled images become available. For few-shot learning, our synthesized classifier provides a kickstart. With one label per class, it outperforms strong baselines that require annotation of attributes or heavy pretraining (CLIP) by 8%, and increases accuracy by 39% relative to conventional classifier initialization. The code is available.
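
As a rough illustration of the recipe described in the abstract (not the authors' released code), the sketch below synthesizes a zero-shot classifier for a new class by sparsely reconstructing its prior-knowledge vector from those of the known classes, then recombining the known classifiers' weights with the same sparse coefficients. The array names, dimensions, and the use of scikit-learn's Lasso for the sparse reconstruction are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
C, A, D = 10, 64, 512                 # known classes, knowledge (attribute) dim, feature dim
K_known = rng.normal(size=(C, A))     # prior-knowledge vectors of the known classes (assumed given)
k_new = rng.normal(size=A)            # prior-knowledge vector of the new class
W_known = rng.normal(size=(C, D))     # classification-head weights of the known classes

# Sparse linear reconstruction: k_new ~= K_known.T @ coef, L1-regularized to avoid overfitting
lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000)
lasso.fit(K_known.T, k_new)           # samples = knowledge dimensions, features = known classes
coef = lasso.coef_                    # sparse mixing coefficients, one per known class

# Synthesize the new class's classifier by recombining the known classifiers' weights
w_new = coef @ W_known                # (D,) weight vector; usable zero-shot, tunable once labels arrive

x_test = rng.normal(size=D)           # placeholder for a test image feature
print("non-zero coefficients:", int(np.count_nonzero(coef)), "score:", float(x_test @ w_new))

Because the synthesized weights are an explicit sparse combination of known-class weights, the coefficients can be inspected directly, and w_new can serve as the initialization for further few-shot fine-tuning.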

License: CC BY-NC-ND 4.0

Paper citation in several formats:
Burghouts, G. J.; Schutte, K.; Kruithof, M.; Huizinga, W.; Ruis, F. and Kuijf, H. (2024). Synthesizing Classifiers from Prior Knowledge. In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP; ISBN 978-989-758-679-8; ISSN 2184-4321, SciTePress, pages 47-58. DOI: 10.5220/0012304300003660

@conference{visapp24,
author={Burghouts, G. J. and Schutte, K. and Kruithof, M. and Huizinga, W. and Ruis, F. and Kuijf, H.},
title={Synthesizing Classifiers from Prior Knowledge},
booktitle={Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP},
year={2024},
pages={47-58},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0012304300003660},
isbn={978-989-758-679-8},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP
TI - Synthesizing Classifiers from Prior Knowledge
SN - 978-989-758-679-8
IS - 2184-4321
AU - Burghouts, G. J.
AU - Schutte, K.
AU - Kruithof, M.
AU - Huizinga, W.
AU - Ruis, F.
AU - Kuijf, H.
PY - 2024
SP - 47
EP - 58
DO - 10.5220/0012304300003660
PB - SciTePress
ER -