
Beyond Labels: Self-Attention-Driven Semantic Separation Using Principal Component Clustering in Latent Diffusion Models

Authors: Felix Stillger 1,2; Frederik Hasecke 2; Lukas Hahn 2 and Tobias Meisen 1

Affiliations: 1 University of Wuppertal, Gaußstraße 20, Wuppertal, Germany; 2 APTIV, Am Technologiepark 1, Wuppertal, Germany

Keyword(s): Diffusion Model, Self-Attention, Segmentation.

Abstract: High-quality annotated datasets are crucial for training semantic segmentation models, yet their manual creation and annotation are labor-intensive and costly. In this paper, we introduce a novel method for generating class-agnostic semantic segmentation masks by leveraging the self-attention maps of latent diffusion models, such as Stable Diffusion. Our approach is entirely learning-free and explores the potential of self-attention maps to produce semantically meaningful segmentation masks. Central to our method is the reduction of individual self-attention information to condense the essential features required for semantic distinction. We employ multiple instances of unsupervised k-means clustering to generate clusters, with increasing cluster counts leading to more specialized semantic abstraction. We evaluate our approach using state-of-the-art models such as Segment Anything (SAM) and Mask2Former, which are trained on extensive datasets of manually annotated masks. Our results, demonstrated on both synthetic and real-world images, show that our method generates high-resolution masks with adjustable granularity, relying solely on the intrinsic scene understanding of the latent diffusion model - without requiring any training or fine-tuning.
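As a rough illustration of the pipeline the abstract outlines (principal-component reduction of self-attention features followed by k-means clustering at several cluster counts), the sketch below is not the authors' implementation: the input array shape, the way attention responses are stacked per pixel, the number of components, the cluster counts, and the helper name attention_to_masks are all assumptions made purely for illustration.

# Minimal sketch (not the authors' code): cluster per-pixel self-attention
# features from a diffusion U-Net into class-agnostic segmentation masks.
# Assumes `attention_maps` is a hypothetical array of shape (H*W, D), where
# each row stacks the self-attention responses gathered for one spatial
# location across layers/heads/timesteps.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def attention_to_masks(attention_maps: np.ndarray,
                       height: int,
                       width: int,
                       n_components: int = 32,
                       cluster_counts=(2, 4, 8)):
    """Return a dict mapping cluster count -> (H, W) label map."""
    # Reduce the high-dimensional attention features to their principal
    # components before clustering.
    features = PCA(n_components=n_components).fit_transform(attention_maps)

    masks = {}
    for k in cluster_counts:
        # More clusters -> finer semantic granularity of the resulting mask.
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        masks[k] = labels.reshape(height, width)
    return masks

# Example with random data standing in for real attention maps.
if __name__ == "__main__":
    H, W, D = 64, 64, 256
    dummy_attention = np.random.rand(H * W, D).astype(np.float32)
    masks = attention_to_masks(dummy_attention, H, W)
    print({k: m.shape for k, m in masks.items()})

In this reading, the choice of cluster count plays the role of the adjustable granularity described in the abstract: few clusters yield coarse object-level regions, while more clusters separate finer semantic parts.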

CC BY-NC-ND 4.0


Paper citation in several formats:
Stillger, F., Hasecke, F., Hahn, L. and Meisen, T. (2025). Beyond Labels: Self-Attention-Driven Semantic Separation Using Principal Component Clustering in Latent Diffusion Models. In Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP; ISBN 978-989-758-728-3; ISSN 2184-4321, SciTePress, pages 68-80. DOI: 10.5220/0013124500003912

@conference{visapp25,
author={Felix Stillger and Frederik Hasecke and Lukas Hahn and Tobias Meisen},
title={Beyond Labels: Self-Attention-Driven Semantic Separation Using Principal Component Clustering in Latent Diffusion Models},
booktitle={Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP},
year={2025},
pages={68-80},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013124500003912},
isbn={978-989-758-728-3},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 20th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP
TI - Beyond Labels: Self-Attention-Driven Semantic Separation Using Principal Component Clustering in Latent Diffusion Models
SN - 978-989-758-728-3
IS - 2184-4321
AU - Stillger, F.
AU - Hasecke, F.
AU - Hahn, L.
AU - Meisen, T.
PY - 2025
SP - 68
EP - 80
DO - 10.5220/0013124500003912
PB - SciTePress