Direct Reward Fine-Tuning on Poses for Single Image to 3D Human in the Wild

Seoul National University
ICLR 2026

We propose DrPose, a method that post-trains a multi-view diffusion model to improve the poses of reconstructed 3D humans in dynamic and acrobatic scenarios.

Abstract

Single-view 3D human reconstruction has achieved remarkable progress through the adoption of multi-view diffusion models, yet the recovered 3D humans often exhibit unnatural poses. This phenomenon becomes pronounced when reconstructing 3D humans with dynamic or challenging poses, which we attribute to the limited scale of available 3D human datasets with diverse poses. To address this limitation, we introduce DrPose, a Direct Reward fine-tuning algorithm on Poses, which enables post-training of a multi-view diffusion model on diverse poses without requiring expensive 3D human assets. DrPose trains a model using only human poses paired with single-view images, employing direct reward fine-tuning to maximize PoseScore, our proposed differentiable reward that quantifies consistency between a generated multi-view latent image and a ground-truth human pose. This optimization is conducted on DrPose15K, a novel dataset constructed from an existing human motion dataset and a pose-conditioned video generative model. Built from abundant human pose sequence data, DrPose15K exhibits a broader pose distribution than existing 3D human datasets. We validate our approach through evaluation on conventional benchmark datasets, in-the-wild images, and a newly constructed benchmark, with a particular focus on assessing performance on challenging human poses. Our results demonstrate consistent qualitative and quantitative improvements across all benchmarks.

Method

Reconstruction Pipeline

Overview of our 3D human reconstruction pipeline. In this pipeline, multi-view normal and RGB images are generated from the input image using an image-to-multi-view (I2MV) diffusion model. These images are then converted into a 3D representation using explicit human carving. In this work, we propose post-training the I2MV diffusion model to achieve better alignment with accurate poses in dynamic and acrobatic scenarios.
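The two stages above can be sketched as follows. This is a minimal illustrative skeleton, not the paper's actual implementation; the function names and interfaces are placeholders we assume for exposition.

```python
# Hedged sketch of the two-stage reconstruction pipeline described above.
# `i2mv_model` and `carve` are illustrative placeholders, not the paper's API.

def reconstruct_3d_human(input_image, i2mv_model, carve):
    """Stage 1: generate multi-view normal/RGB images from a single image.
    Stage 2: convert those views into a 3D human via explicit carving."""
    views = i2mv_model(input_image)   # multi-view normal + RGB images
    return carve(views)               # explicit human carving -> 3D representation
```

DrPose only post-trains the first stage (the I2MV diffusion model); the carving stage is left unchanged.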

DrPose Framework

Overview of DrPose. Given a 3D human pose $\theta$ and input image $I$, the I2MV diffusion model $\epsilon_{\omega}$ is trained to minimize $\mathcal{L}_{\textrm{total}}=\mathcal{L}_{\textrm{reward}}+w_{\mathrm{KL}}\cdot \mathcal{L}_{\mathrm{KL}}$. Here, $\mathcal{L}_{\textrm{reward}}$ measures the distance between $\theta$ and the generated latent image $x_0$, while $\mathcal{L}_{\mathrm{KL}}$ computes the KL divergence between $\epsilon_{\omega}$ and the frozen initial I2MV diffusion model $\epsilon_{\omega_0}$.
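The objective above combines a pose-alignment reward with a KL regularizer toward the frozen initial model. A minimal sketch of this combination, assuming a simple negative-MSE stand-in for PoseScore (the paper's actual reward is computed on the generated latent image, not on joint coordinates):

```python
# Hedged sketch of L_total = L_reward + w_KL * L_KL.
# `pose_score` is an illustrative stand-in for the paper's PoseScore reward.

def pose_score(pred_joints, gt_joints):
    """Toy reward: negative mean squared joint distance (higher = better)."""
    n = len(gt_joints)
    mse = sum((p - g) ** 2 for p, g in zip(pred_joints, gt_joints)) / n
    return -mse

def total_loss(pred_joints, gt_joints, kl_div, w_kl=0.1):
    """Reward fine-tuning minimizes -reward plus a weighted KL penalty."""
    reward_loss = -pose_score(pred_joints, gt_joints)
    return reward_loss + w_kl * kl_div
```

The KL term keeps the fine-tuned model $\epsilon_{\omega}$ close to the frozen $\epsilon_{\omega_0}$, preventing reward over-optimization from degrading image quality.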

DrPose15K

Construction Process

Additional examples for DrPose15K.

Comparison with Existing Datasets

DrPose15K Training Set Statistics

Examples

Examples of DrPose15K, our proposed synthetic dataset consisting of diverse extreme human poses paired with single-view images.

BibTeX

@article{do2025drpose,
  author  = {Seunguk Do and Minwoo Huh and Joonghyuk Shin and Jaesik Park},
  title   = {Direct Reward Fine-Tuning on Poses for Single Image to 3D Human in the Wild},
  journal = {arXiv preprint},
  year    = {2025},
}