Kfir Aberman (Google) - Learning the Structure of Motion

October 4, 2021 at 10:00am - 11:00am
JCL, Rm 298

Speaker: Kfir Aberman, Research Scientist, Google Research

Kfir Aberman is a research scientist at Google Research in San Francisco. His research interests include deep neural network architectures for various computer graphics applications. In particular, his work focuses on the analysis, synthesis, and manipulation of human motion in real videos, as well as 3D character animation. Kfir received his Ph.D. from the Electrical Engineering department at Tel-Aviv University, under the supervision of Prof. Daniel Cohen-Or, and his M.Sc. (Cum Laude) and B.Sc. (Summa Cum Laude) from the Technion. He serves as a reviewer for various journals and conferences within the graphics community.

Abstract: Learning the Structure of Motion

Human motion is a fundamental attribute, underlying human actions, gestures, and behavior. It is inherently a 4D entity, commonly represented as a low-level encoding: a temporal sequence of poses, specified as a set of joint positions and/or angles. However, motion is a high-level, abstract attribute that is concealed under this representation. For example, the same motion performed by two individuals with different skeletons might have significantly different low-level encodings.
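To make the representation concrete, here is a minimal illustrative sketch (not taken from the speaker's work; the frame count, joint count, and scaling factor are hypothetical) of a motion clip as a temporal sequence of 3D joint positions, and of how the same motion on a differently proportioned skeleton yields a different low-level encoding:

```python
import numpy as np

# Illustrative only: a motion clip as a temporal sequence of poses,
# each pose a set of 3D joint positions (shape: frames x joints x 3).
T, J = 120, 22                      # hypothetical: 120 frames, 22 joints
rng = np.random.default_rng(0)
motion = rng.standard_normal((T, J, 3))

# The "same" motion performed by a taller performer: identical dynamics,
# but uniformly scaled limb lengths produce different joint positions.
taller = motion * 1.15

# The raw low-level encodings differ, even though the underlying
# high-level motion is the same.
print(motion.shape)                  # (120, 22, 3)
print(np.allclose(motion, taller))   # False
```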

In this talk, we will present two learning frameworks that aim to learn the essence of motion and enable retargeting the motion of one performer to another. The first deals with retargeting video-captured motion between different human performers, without the need to explicitly reconstruct 3D poses and/or camera parameters; the second aims at 3D motion retargeting between skeletons that may have different structures and numbers of joints, yet are derived from the same topology.

The shared key idea in both frameworks is the disentanglement of a motion signal into fundamental, abstract components, which enables separating the dynamic aspects of motion from the static ones using unique deep neural network structures. Our frameworks demonstrate state-of-the-art performance in both retargeting tasks, and can be further exploited for other motion analysis and synthesis tasks such as motion retrieval and motion style transfer.
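The disentangle-and-recombine idea can be caricatured numerically. The sketch below is an assumption-laden toy (it uses a simple mean-pose split rather than the learned encoders of the actual frameworks): the static component is taken to be a performer's mean pose, the dynamic component is the per-frame residual, and retargeting recombines the source's dynamics with the target's static component.

```python
import numpy as np

# Toy sketch of disentanglement (NOT the speaker's networks): split a
# motion encoding into a static component (here, the mean pose) and a
# dynamic residual, then retarget by recombining components.
def split_static_dynamic(motion):
    static = motion.mean(axis=0, keepdims=True)  # (1, J, 3) mean pose
    dynamic = motion - static                    # per-frame residual
    return static, dynamic

def retarget(dynamic, target_static):
    # Apply the source's dynamics on top of the target's static pose.
    return target_static + dynamic

rng = np.random.default_rng(1)
src = rng.standard_normal((60, 15, 3))   # hypothetical source clip
tgt = rng.standard_normal((60, 15, 3))   # hypothetical target clip

src_static, src_dyn = split_static_dynamic(src)
tgt_static, _ = split_static_dynamic(tgt)
out = retarget(src_dyn, tgt_static)

# The result carries the source's dynamics but the target's static pose:
# its mean pose matches the target's, not the source's.
print(np.allclose(out.mean(axis=0, keepdims=True), tgt_static))  # True
```

In the actual frameworks this split is learned by deep networks rather than computed in closed form, which is what allows it to generalize across skeletons with different structures.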

Host: Rana Hanocka

Type: talk