UP2You reconstructs high-quality textured meshes from unconstrained photos. It handles even extremely unconstrained photo collections by rectifying them into orthogonal multi-view images and corresponding normal maps, enabling the reconstruction of detailed 3D clothed portraits.
Abstract
We present UP2You, the first tuning-free solution for reconstructing high-fidelity 3D clothed portraits from extremely unconstrained in-the-wild 2D photos. Unlike previous approaches that require "clean" inputs (e.g., full-body images with minimal occlusions, or well-calibrated cross-view captures), UP2You directly processes raw, unstructured photographs that may vary significantly in pose, viewpoint, cropping, and occlusion. Instead of compressing data into tokens for slow online text-to-3D optimization, we introduce a data rectifier paradigm that efficiently converts unconstrained inputs into clean, orthogonal multi-view images in a single forward pass within seconds, greatly simplifying 3D reconstruction. Central to UP2You is a pose-correlated feature aggregation module (PCFA) that selectively fuses information from multiple reference images w.r.t. the target poses, enabling better identity preservation and a nearly constant memory footprint as more observations are added. We also introduce a perceiver-based multi-reference shape predictor, removing the need for pre-captured body templates. Extensive experiments on 4D-Dress, PuzzleIOI, and in-the-wild captures demonstrate that UP2You consistently surpasses previous methods in both geometric accuracy (Chamfer: 15%↓, P2S: 18%↓ on PuzzleIOI) and texture fidelity (PSNR: 21%↑, LPIPS: 46%↓ on 4D-Dress). UP2You is efficient (1.5 minutes per person) and versatile (supporting arbitrary pose control and training-free multi-garment 3D virtual try-on), making it practical for real-world scenarios where humans are casually captured. Both models and code will be released to facilitate future research on this underexplored task.
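The abstract describes PCFA only at a high level. As a rough, non-authoritative illustration, the sketch below implements one plausible reading of pose-correlated aggregation as cross-attention: tokens of the target pose act as queries over features gathered from all reference images, so the fused output stays a fixed size no matter how many photos are supplied. The class name, token shapes, and layer choices here are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class PoseCorrelatedAggregation(nn.Module):
    """Hypothetical PCFA-style block: target-pose tokens cross-attend to
    tokens from N reference images; the fused output has a fixed size
    regardless of how many references are supplied."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, target_tokens, reference_tokens):
        # target_tokens:    (B, Q, D)     tokens of the target pose/view
        # reference_tokens: (B, N, T, D)  tokens from N reference photos
        b, n, t, d = reference_tokens.shape
        kv = self.norm_kv(reference_tokens.reshape(b, n * t, d))
        q = self.norm_q(target_tokens)
        # The attention weights act as pose-dependent correlation maps:
        # each target token softly selects the references most relevant
        # to the target pose.
        fused, corr = self.attn(q, kv, kv, need_weights=True)
        return fused + target_tokens, corr

# Toy usage: five reference photos, one target view.
pcfa = PoseCorrelatedAggregation()
target = torch.randn(1, 196, 256)    # (B, Q, D)
refs = torch.randn(1, 5, 196, 256)   # (B, N, T, D)
fused, corr = pcfa(target, refs)     # fused: (1, 196, 256); corr: (1, 196, 980)
```

In this reading, the fused output is constant-size in the number of references, which matches the abstract's memory claim for the aggregated features; a real implementation would still need chunked or streamed attention to keep the score matrix itself bounded as the reference count grows.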
Paradigm Differences Between Previous Works and UP2You
Top: Previous works such as PuzzleAvatar and AvatarBooth compress unconstrained photos into implicit personal tokens and DreamBooth weights through per-subject fine-tuning, then generate 3D humans via SDS (score distillation sampling) optimization.
Bottom: UP2You directly rectifies unconstrained photo collections into orthogonal view images and normals, then reconstructs textured human meshes, achieving superior quality while reducing processing time from 4 hours to 1.5 minutes.
Our Results
Pose-Dependent Correlation Maps
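No figure content survives in this extraction, but continuing the hedged PCFA sketch above, the kind of per-reference correlation such maps visualize can be read off the attention weights (the shapes below are the toy ones from that sketch, not the paper's):

```python
# corr has shape (B, Q, N*T); fold the key axis back into (N, T) and
# average over query and key tokens to score each reference's relevance
# to the current target pose.
per_ref = corr.reshape(1, 196, 5, 196).mean(dim=(1, 3))  # -> (1, 5)
print(per_ref)  # higher score = reference more correlated with this pose
```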
Related Work
- PuzzleAvatar: Assembly of Avatar from Unconstrained Photo Collections
- AvatarBooth: High-Quality and Customizable 3D Human Avatar Generation
- MV-Adapter: Multi-view Consistent Image Generation Made Easy
- PSHuman: Photorealistic Single-image 3D Human Reconstruction using Cross-Scale Multiview Diffusion
- 4D-DRESS: A 4D Dataset of Real-world Human Clothing with Semantic Annotations
- Function4D: Real-time Human Volumetric Capture from Very Sparse RGBD Sensors
- Human4DiT: 360-degree Human Video Generation with 4D Diffusion Transformer
- Learning Locally Editable Virtual Humans
- High-fidelity 3D Human Digitization from Single 2K Resolution Images
Citation
@article{cai2025up2you,
  title={UP2You: Fast Reconstruction of Yourself from Unconstrained Photo Collections},
  author={Cai, Zeyu and Li, Ziyang and Li, Xiaoben and Li, Boqian and Wang, Zeyu and Zhang, Zhenyu and Xiu, Yuliang},
  journal={arXiv preprint arXiv:2509.24817},
  year={2025}
}