CVPR 2025

Workshop on
Photorealistic 3D Head Avatars
(P3HA)


June 11, 2025, 1 PM - 6 PM

Music City Center, Nashville TN, Room 110 A

Workshop Program

The Photorealistic 3D Head Avatars workshop takes place on the afternoon of June 11th in Room 110 A.

01:20 PM - 01:30 PM

Opening Remarks
Organizers

01:30 PM - 02:00 PM

Invited Talk 1: Scaling Codec Avatars
Shunsuke Saito

Bio

Shunsuke Saito is a Research Scientist at Meta Reality Labs Research in Pittsburgh, where he leads the effort on next-generation digital humans. He obtained his PhD at the University of Southern California. Prior to USC, he was a Visiting Researcher at the University of Pennsylvania in 2014. He obtained his BE (2013) and ME (2014) in Applied Physics at Waseda University. His research lies at the intersection of computer graphics, computer vision, and machine learning, centered on digital humans, 3D reconstruction, and performance capture. His work has been published at SIGGRAPH, SIGGRAPH Asia, NeurIPS, ECCV, ICCV, and CVPR, three of which were nominated for Best Paper Awards at CVPR (2019, 2021) and ECCV (2024). His real-time volumetric teleportation work also won the Best in Show award at SIGGRAPH 2020 Real-Time Live!

02:00 PM - 02:30 PM

Invited Talk 2: Making Avatars Relightable
Paul Debevec

Bio

Paul Debevec is the chief research officer at Eyeline Studios and an adjunct research professor at the Viterbi School of Engineering at the University of Southern California. He obtained his PhD at UC Berkeley, where his work laid the foundation for an Academy Award-winning VFX technique. At USC ICT, Debevec led the development of several Light Stage systems that capture and simulate how people and objects appear under real-world illumination, a technique that has found countless applications in Hollywood movies. He received ACM SIGGRAPH's first Significant New Researcher Award in 2001 and a Gilbreth Lectureship from the National Academy of Engineering in 2005. At Eyeline Studios, Debevec continues to push the boundaries of VFX and relightable virtual avatars.

02:30 PM - 02:40 PM

Benchmark Introduction
Organizers

02:40 PM - 02:55 PM

Winner Talk 1: TaoAvatar
Zhiwen Chen

02:55 PM - 03:10 PM

Winner Talk 2: RGBAvatar
Linzhou Li

03:10 PM - 03:30 PM

Coffee Break

03:30 PM - 04:00 PM

Invited Talk 3: Avatars for Democratized Telepresence
Shalini De Mello

Bio

Shalini De Mello is a Director of Research, New Experiences, and a Distinguished Research Scientist at NVIDIA, where she leads the AI-Mediated Reality and Interaction Research Group. Prior to this, she was a Distinguished Research Scientist in the Learning and Perception Research Group at NVIDIA from 2013 to 2023. Her research interests are in AI, computer vision, computer graphics, and digital humans. Her research focuses on using AI to re-imagine interactions between humans and between humans and machines. She has co-authored scores of peer-reviewed publications and patents and serves on the program committees of all major AI conferences. Her inventions have been incorporated into several NVIDIA AI products, including DriveIX, Maxine, and the TAO Toolkit. She received her Doctoral and Master’s degrees in Electrical and Computer Engineering from the University of Texas at Austin.

04:00 PM - 04:30 PM

Invited Talk 4: Toward Scalable Capture and Applications of Head Avatars
Yao Feng

Bio

Yao Feng is currently a Postdoc at Stanford University, working with Karen Liu, Jennifer Hicks, and Scott L. Delp. She received her Ph.D. from ETH Zürich and the Max Planck Institute for Intelligent Systems, under the supervision of Michael J. Black and Marc Pollefeys. She also spent a year as a research scientist at Meshcapade. Her research focuses on large-scale capture and understanding of digital humans, with applications spanning computer vision, computer graphics, biomechanics, and robotics. She has received several recognitions, including an Honorable Mention for the Eurographics PhD Award, and was named an EECS Rising Star and a WiGRAPH Rising Star in Computer Graphics.

04:30 PM - 05:00 PM

Invited Talk 5: Generative Models for Digital Humans
Stefanos Zafeiriou

Bio

Stefanos Zafeiriou is a Professor of Machine Learning and Computer Vision in the Department of Computing, Imperial College London. From 2016 to 2020, he was a Distinguished Research Fellow at the University of Oulu, Finland, under the Finland Distinguished Professor Programme. Prof. Zafeiriou is an EPSRC Early Career Research Fellow. He received a prestigious Junior Research Fellowship from Imperial College London in 2011, the President’s Medal for Excellence in Research Supervision in 2016, the President’s Medal for Entrepreneurship in 2022, a Google Faculty Research Award, and an Amazon Web Services (AWS) Machine Learning Research Award.

05:00 PM - 05:30 PM

Invited Talk 6: Click. Create. Accessible Avatars in 2D and 3D
Hao Li

Bio

Hao Li is the CEO & Co-Founder of Pinscreen, a Los Angeles-based startup that builds the world’s most advanced generative AI technology for AI VFX and Visual Dubbing, as well as a Professor of Computer Vision (tenured) at the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), in Abu Dhabi (UAE), and Director of the MBZUAI Metaverse Center.

05:30 PM - 06:00 PM

Panel Discussion

Workshop Challenges

The workshop hosts a competition on the newly introduced NeRSemble benchmark for 3D head avatars. The goal is to identify the current best methods for dynamic novel view synthesis of heads and for monocular FLAME-driven avatar reconstruction.

Dynamic Novel View Synthesis Challenge

Given synchronized multi-view videos from 13 cameras, the task is to replay the same facial performance from 3 hold-out viewpoints. This requires reconstructing a 4D representation that plausibly models both geometry and complex motion. The challenge is conducted on 5 short sequences from different individuals. The sequences cover complex dynamic effects such as topological changes when the tongue sticks out, flying hair, dynamic wrinkle changes, and light refraction and reflection on glasses. To capture even subtle movements, the sequences were recorded at 73 fps.
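
For orientation, the sketch below illustrates how renders of the hold-out viewpoints might be scored frame by frame. It does not reproduce the official evaluation protocol: the metric (PSNR), file layout, camera IDs, sequence names, and frame count are all placeholders, not the benchmark's actual format.

    import numpy as np
    import imageio.v3 as iio

    def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 255.0) -> float:
        """Peak signal-to-noise ratio between a rendered and a ground-truth image."""
        mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    HOLDOUT_CAMS = ["cam_A", "cam_B", "cam_C"]         # 3 hold-out viewpoints (placeholder IDs)
    SEQUENCES = [f"seq_{i:02d}" for i in range(1, 6)]  # 5 short challenge sequences
    NUM_FRAMES = 100                                   # placeholder; footage is captured at 73 fps

    scores = []
    for seq in SEQUENCES:
        for cam in HOLDOUT_CAMS:
            for t in range(NUM_FRAMES):
                pred = iio.imread(f"renders/{seq}/{cam}/{t:05d}.png")  # participant's render
                gt = iio.imread(f"gt/{seq}/{cam}/{t:05d}.png")         # withheld ground truth
                scores.append(psnr(pred, gt))
    print(f"Mean PSNR over hold-out views: {np.mean(scores):.2f} dB")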

Monocular FLAME Avatar Challenge

Given several frontal videos of a person's head with corresponding tracked FLAME meshes, the task is to re-animate the person with unseen FLAME expression codes and render the result from both seen and unseen camera viewpoints. This requires reconstructing an animatable 3D head representation (a 3D head avatar). The challenge is conducted on recordings of 5 different individuals. For each individual, 18 short facial performance sequences are provided for training, while the remaining 4 sequences are held out. For the hold-out sequences, only the tracked FLAME meshes and the camera poses are known.
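
The submission interface is up to each participant; the sketch below only illustrates the data flow implied by the task description: an avatar fitted to a person's 18 training sequences is driven by the tracked FLAME parameters of the hold-out sequences and rendered from the provided camera poses. The Avatar class, file names, and array keys are hypothetical.

    import numpy as np

    class Avatar:
        """Hypothetical stand-in for a participant's animatable 3D head avatar."""

        def __init__(self, person: str):
            # A real method would load the representation fitted on this
            # person's 18 training sequences here.
            self.person = person

        def render(self, expression: np.ndarray, pose: np.ndarray,
                   camera: np.ndarray) -> np.ndarray:
            # A real method would pose the avatar with the given FLAME
            # expression/pose codes and render it from the given camera.
            raise NotImplementedError

    def reanimate_holdout(person: str, holdout_seqs: list[str]) -> list[np.ndarray]:
        avatar = Avatar(person)
        frames = []
        for seq in holdout_seqs:
            # For hold-out sequences, only tracked FLAME meshes and camera
            # poses are available (hypothetical file names and keys).
            flame = np.load(f"{person}/{seq}/flame_params.npz")
            cams = np.load(f"{person}/{seq}/cameras.npz")
            for t in range(flame["expression"].shape[0]):
                for cam in cams["world_to_cam"]:  # seen and unseen viewpoints alike
                    frames.append(avatar.render(flame["expression"][t],
                                                flame["pose"][t], cam))
        return frames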

Competition Prizes

The winner of each workshop challenge will receive:
  • a dedicated 15-minute oral presentation at the workshop to showcase their method
  • an RTX 5080 GPU sponsored by NVIDIA*

*cannot be gifted to non-academics or persons residing outside of North America and Europe due to export restrictions imposed on NVIDIA by the US government

Competition Timeline

  • Challenge begin: 17th March 2025
  • Challenge submission deadline: 23rd May 2025
  • 28th May 2025
  • Winner announcement: 30th May 2025

Workshop Organizers

Tobias Kirschstein, Technical University of Munich
Simon Giebenhain, Technical University of Munich
Tianye Li, NVIDIA
Koki Nagano, NVIDIA
Justus Thies, Technical University of Darmstadt
Matthias Nießner, Technical University of Munich

Workshop Sponsors

We thank NVIDIA for sponsoring the prizes for the workshop challenge winners.

Please contact Tobias Kirschstein for questions.