arXiv preprint – BEDLAM: A Synthetic Dataset of Bodies Exhibiting Detailed Lifelike Animated Motion

In this episode we discuss BEDLAM: A Synthetic Dataset of Bodies Exhibiting Detailed Lifelike Animated Motion by Michael J. Black, Priyanka Patel, Joachim Tesch, and Jinlong Yang. The paper presents BEDLAM, a large-scale synthetic dataset for 3D human pose and shape estimation. Unlike previous synthetic datasets, BEDLAM is both realistic and diverse: it provides monocular RGB videos with ground-truth 3D bodies, covering varied body shapes, motions, skin tones, hair, and realistic clothing, rendered in realistic scenes with varying lighting and camera motion. Regressors trained on BEDLAM achieve state-of-the-art accuracy on real-image benchmarks, demonstrating the importance of high-quality synthetic training data for accurate estimation. The dataset, along with detailed information about the data-generation process, is released for research purposes.
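To make the "regressor trained on synthetic ground truth" idea concrete, here is a minimal, hypothetical sketch of what supervised training against synthetic body parameters could look like. This is not the paper's actual training code or the BEDLAM release API: the dataset fields, parameter dimensions, and simple MSE losses are illustrative assumptions only.

```python
# Illustrative sketch: regress body parameters from an RGB crop,
# supervised by synthetic ground-truth parameters (hypothetical setup,
# not the BEDLAM paper's training pipeline).
import torch
import torch.nn as nn
from torchvision.models import resnet50


class BodyParamRegressor(nn.Module):
    """ResNet-50 backbone that regresses pose and shape parameters from an image crop."""

    def __init__(self, num_pose: int = 72, num_shape: int = 10):
        super().__init__()
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()  # keep the 2048-d global feature
        self.backbone = backbone
        self.head = nn.Linear(2048, num_pose + num_shape)
        self.num_pose = num_pose

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)
        out = self.head(feats)
        return out[:, : self.num_pose], out[:, self.num_pose :]


def training_step(model, batch, optimizer):
    """One optimization step against synthetic ground-truth parameters."""
    images, gt_pose, gt_shape = batch  # crops plus ground-truth body parameters
    pred_pose, pred_shape = model(images)
    loss = nn.functional.mse_loss(pred_pose, gt_pose) + nn.functional.mse_loss(
        pred_shape, gt_shape
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, methods in this area typically add reprojection and 3D joint losses and use parametric body models rather than raw MSE on parameters; the sketch only illustrates the basic supervised setup that large-scale synthetic ground truth makes possible.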

