The Inventors apply techniques from 3D physical simulation and structural analysis to the task of video synthesis. Small deformations of most solid objects are well approximated by linear modal analysis. The Inventors show that by applying this analysis to small deformations captured in video, they can extract projections of an object's deformation modes and use those projections as a basis for image-space simulation of the object's dynamics.
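The core idea of linear modal analysis, that a small deformation evolves as a superposition of independent mode shapes, each behaving as a damped harmonic oscillator, can be sketched as follows. This is an illustrative toy example, not the Inventors' implementation; the mode shapes, frequencies, and damping ratios are made up for demonstration.

```python
import numpy as np

def modal_impulse_response(mode_shapes, freqs, dampings, modal_forces, t):
    """Displacement field at time t after an impulse, by modal superposition.

    mode_shapes:  (n_modes, n_points) array of mode shape vectors
    freqs:        undamped natural frequencies omega_i (rad/s)
    dampings:     damping ratios zeta_i
    modal_forces: impulse projected onto each mode
    """
    disp = np.zeros(mode_shapes.shape[1])
    for phi, w, z, f in zip(mode_shapes, freqs, dampings, modal_forces):
        wd = w * np.sqrt(1.0 - z**2)                        # damped frequency
        q = (f / wd) * np.exp(-z * w * t) * np.sin(wd * t)  # modal coordinate
        disp += q * phi                                     # scale and superpose
    return disp

# Two toy 1-D mode shapes sampled at 4 points (illustrative values only)
shapes = np.array([[0.0, 0.5, 1.0, 0.5],
                   [0.0, 1.0, 0.0, -1.0]])
d = modal_impulse_response(shapes, freqs=[6.0, 15.0],
                           dampings=[0.05, 0.05],
                           modal_forces=[1.0, 0.3], t=0.1)
print(d.shape)  # (4,)
```

Because each mode is decoupled, the full response is just a sum of scaled mode shapes, which is what makes simulation from a recovered modal basis cheap.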
The algorithm extracts a set of observed mode shapes from small deformations caused by unknown excitation forces in an input video, and uses these shapes as a modal basis for simulating the object's response to user-defined forces. Local image-space deformations are measured using phase variations in a complex steerable pyramid computed on the input video. Displacements from the object's rest state are evaluated for each video frame, then filtered and denoised. The temporal FFT of the displacement signals is calculated, and a modal analysis is conducted on the resulting displacement spectra to select mode shapes and normalize for the spectrum of forces that caused motion in the input video. The response of an object to an applied force, i.e., its displacement and momentum over time, can be represented as a superposition of the recovered modes. Through a computer interface, users can pull on or apply an impulse to the object; a simulation based on the object's recovered modes then produces the resulting displacement field.
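The spectral step above, taking the temporal FFT of the per-pixel displacement signals and selecting candidate modes at peaks of the displacement spectra, can be sketched as follows. This is a hedged simplification: the displacement signals here are synthetic, whereas in the described system they come from phase variations in a complex steerable pyramid, and the peak-picking rule and function names are illustrative assumptions, not the Inventors' actual procedure.

```python
import numpy as np

def recover_mode_images(displacements, fs, n_modes=2):
    """displacements: (n_frames, n_pixels) array; fs: frame rate (Hz).
    Returns (frequencies, complex mode images) at the strongest spectral peaks."""
    spectra = np.fft.rfft(displacements, axis=0)          # temporal FFT per pixel
    freqs = np.fft.rfftfreq(displacements.shape[0], d=1.0 / fs)
    power = np.mean(np.abs(spectra) ** 2, axis=1)         # average power over pixels
    power[0] = 0.0                                        # ignore the DC component
    peaks = np.argsort(power)[-n_modes:][::-1]            # strongest frequency bins
    return freqs[peaks], spectra[peaks]                   # mode images = spectral slices

# Synthetic input: two spatial patterns oscillating at 3 Hz and 8 Hz over 16 pixels
fs, n = 30.0, 300
t = np.arange(n) / fs
px = np.linspace(0.0, np.pi, 16)
disp = (np.outer(np.sin(2 * np.pi * 3.0 * t), np.sin(px))
        + 0.5 * np.outer(np.sin(2 * np.pi * 8.0 * t), np.sin(2 * px)))
mode_freqs, mode_images = recover_mode_images(disp, fs, n_modes=2)
print(sorted(np.round(mode_freqs, 1)))  # [3.0, 8.0]
```

Each selected spectral slice serves as an image-space "mode image": its magnitude gives the mode shape and its phase gives the relative timing of motion across pixels, which is the basis the simulation then excites in response to user input.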