The First Workshop on
Video Generative Models:
Benchmarks and Evaluation
CVPR 2026
Exploring Challenges and Opportunities in Evaluating and Benchmarking Video Generative Models
Date: TBD
Denver, CO, USA
Overview
Recent progress in video generative models necessitates robust evaluation methods that assess instruction following, physical realism, human fidelity, and creativity. Existing metrics and benchmarks remain limited: they focus primarily on semantic alignment and fail to capture crucial imperfections, such as structural inconsistencies, unnatural motion, and weak temporal coherence, that plague even state-of-the-art generators.
The VGBE workshop therefore aims to explore next-generation video evaluators that are fine-grained, physically plausible, and human-aligned, serving as comprehensive video assessment frameworks capable of reflecting real-world dynamics and identifying subtle imperfections across diverse video generation domains. The workshop further seeks to establish multi-dimensional, explainable, and physically grounded evaluation methodologies that enable reliable, standardized benchmarks, so as to facilitate the development of video generative models and support their practical deployment in real-world applications.
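As a toy illustration of the temporal-coherence axis mentioned above, the sketch below scores a clip by the mean cosine similarity between consecutive frames in raw pixel space. This is a minimal sketch, not a workshop-endorsed metric: the function name `temporal_coherence_score` and the (T, H, W, C) array layout are assumptions for illustration, and the learned, fine-grained evaluators this workshop targets would operate on semantic features and motion rather than raw pixels.

```python
import numpy as np

def temporal_coherence_score(frames: np.ndarray) -> float:
    """Mean cosine similarity between consecutive frames.

    frames: float array of shape (T, H, W, C). Higher values mean
    smoother frame-to-frame transitions. This pixel-space proxy is
    only a stand-in for the learned evaluators discussed above.
    """
    flat = frames.reshape(frames.shape[0], -1).astype(np.float64)
    a, b = flat[:-1], flat[1:]                       # consecutive frame pairs
    num = np.sum(a * b, axis=1)                      # pairwise dot products
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
    return float(np.mean(num / den))

# A static clip scores ~1.0; independent noise frames score lower.
rng = np.random.default_rng(0)
noise = rng.random((16, 64, 64, 3))
static = np.repeat(rng.random((1, 64, 64, 3)), 16, axis=0)
print(temporal_coherence_score(noise))   # < 1.0
print(temporal_coherence_score(static))  # ~ 1.0
```

Note that such pixel-level proxies reward static or slowly changing videos, which is exactly the kind of blind spot motivating the fine-grained, physically grounded evaluators this workshop calls for.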
Call for Papers
We invite submissions on topics including, but not limited to:
Novel Metrics and Evaluation Methods
Datasets and Benchmarks
Video Generative Applications in Vertical Domains
Keynote Speakers
Organizers
Workshop Schedule
Paper Submission
| Milestone | Date |
| --- | --- |
| Submissions Due | TBD |
| Author Notification | TBD |
| Camera-Ready Due | TBD |
Submission Guidelines
We welcome two types of submissions: short papers (2-4 pages) and full papers (5-8 pages).
All submissions should follow the CVPR 2026 author guidelines.
Submission Portal: OpenReview