Authors:
Héctor Laria 1,2; Kai Wang 1; Joost van de Weijer 1,2; Bogdan Raducanu 1,2 and Yaxing Wang 3
Affiliations:
1 Computer Vision Center, Barcelona, Spain; 2 Universitat Autònoma de Barcelona, Spain; 3 Nankai University, China
Keyword(s):
NeRF, Diffusion Models, 3D Generation, Multi-View Consistency, Face Generation.
Abstract:
Generating high-fidelity 3D-aware images without 3D supervision is valuable in many applications. Current methods based on NeRF features, SDF information, or triplane features offer limited variation after training. To address this, we propose a novel approach that combines pretrained models for shape and content generation: a pretrained Neural Radiance Field serves as a shape prior, and a diffusion model handles content generation. By conditioning the diffusion model on 3D features, we enhance its ability to generate novel views with 3D awareness. We introduce a consistency token, shared between the NeRF module and the diffusion model, to maintain 3D consistency during sampling. Moreover, our framework allows text editing of 3D-aware image generation, enabling users to modify the style across 3D views while preserving semantic content. Our contributions include incorporating 3D awareness into a text-to-image model, addressing identity consistency in 3D view synthesis, and enabling text editing of 3D-aware image generation. We provide detailed explanations of the shape prior based on the NeRF model and of the content generation process using the diffusion model, and we discuss challenges such as shape consistency and sampling saturation. Experimental results demonstrate the effectiveness and visual quality of our approach.
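The conditioning scheme described in the abstract can be illustrated with a minimal sketch. All names here (`nerf_shape_features`, `denoise_step`, `sample_view`, the feature dimensions, and the toy denoiser itself) are hypothetical stand-ins, not the paper's implementation: a per-view NeRF feature vector is concatenated with a consistency token that is shared across all views of one identity, and the combined vector conditions each diffusion denoising step.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, TOKEN_DIM, IMG_DIM = 64, 16, 32

# Hypothetical stand-in for the pretrained NeRF shape prior: a fixed
# random projection mapping a flattened 3x4 camera pose to a
# 3D-aware feature vector.
_POSE_PROJ = rng.standard_normal((FEAT_DIM, 12))

def nerf_shape_features(pose):
    return np.tanh(_POSE_PROJ @ np.asarray(pose, dtype=float))

def denoise_step(x_t, t, cond):
    # Toy denoiser: nudges the noisy sample toward the conditioning
    # signal, standing in for one step of a conditioned diffusion U-Net.
    drift = cond[: x_t.size]
    return x_t + 0.1 * (drift - x_t) / (t + 1)

def sample_view(pose, consistency_token, steps=10):
    # The consistency token is shared between views of the same identity;
    # concatenating it with per-view NeRF features forms the condition.
    cond = np.concatenate([nerf_shape_features(pose), consistency_token])
    x = rng.standard_normal(IMG_DIM)
    for t in reversed(range(steps)):
        x = denoise_step(x, t, cond)
    return x

# One shared token, two nearby camera poses of the same identity.
token = rng.standard_normal(TOKEN_DIM)
view_a = sample_view(np.eye(3, 4).ravel(), token)
view_b = sample_view((np.eye(3, 4) + 0.01).ravel(), token)
```

Because `token` is reused across both calls while only the pose changes, the two sampled views share one conditioning component, which is the mechanism the consistency token uses to tie identity together across viewpoints.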