The First Workshop on

Video Generative Models:
Benchmarks and Evaluation

CVPR 2026

Exploring Challenges and Opportunities in Evaluating and Benchmarking Video Generative Models

Date: TBD

Denver, CO, USA

Overview

Recent progress in video generative models necessitates robust evaluation methods that assess their instruction following, physical realism, human fidelity, and creativity. Existing metrics and benchmarks are often limited: they focus primarily on semantic alignment and fail to capture crucial imperfections, such as structural inconsistencies, unnatural motion, and weak temporal coherence, that plague even state-of-the-art generators.

Therefore, the VGBE workshop aims to explore next-generation video evaluators that are fine-grained, physically plausible, and human-aligned, serving as comprehensive video assessment frameworks capable of reflecting real-world dynamics and identifying subtle imperfections across diverse video generation domains. The workshop further seeks to establish multi-dimensional, explainable, and physically grounded evaluation methodologies that enable reliable, standardized benchmarks, thereby facilitating the development of video generative models and supporting their practical deployment in real-world applications.

Call for Papers

We invite submissions on topics including, but not limited to, the following:


Novel Metrics and Evaluation Methods

Datasets and Benchmarks

Video Generative Applications in Vertical Domains

Keynote Speakers

Alan Bovik

The University of Texas at Austin

Professor

Ming-Hsuan Yang

UC Merced & Google DeepMind

Professor,
Research Scientist

Jiajun Wu

Stanford University

Assistant Professor

Wenhu Chen

University of Waterloo & Vector Institute

Assistant Professor

Mike Zheng Shou

National University of Singapore

Assistant Professor

Balu Adsumilli

YouTube/Google

Research Scientist,
Team Lead

Yan Wang

NVIDIA Research

Research Scientist,
Tech Lead

Zhuang Liu

Princeton University

Assistant Professor

Organizers

Shuo Xing

Texas A&M University

Mingyang Wu

Texas A&M University

Siyuan Yang

Texas A&M University

Shuangyu Xie

UC Berkeley

Kaiyuan Chen

UC Berkeley

Chris Wei Zhou

Cardiff University

Sicong Jiang

Abaka AI

Zihan Wang

2077AI Research Foundation, Abaka AI

Jian Wang

Snap Research

Lin Wang

Nanyang Technological University

Jinyu Zhao

eBay

Soumik Dey

eBay

Yilin Wang

Google/YouTube

Pooja Verlani

Google

Zhengzhong Tu

Texas A&M University

Workshop Schedule

One-day workshop
Schedule TBD

Paper Submission

Submissions Due: TBD
Author Notification: TBD
Camera-Ready Due: TBD

Submission Guidelines

We welcome the following types of submissions: short papers (2-4 pages) and full papers (5-8 pages).


All submissions should follow the CVPR 2026 author guidelines.


Submission Portal: OpenReview