Joint View Expansion and Filtering for Automultiscopic 3D Displays
Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they require correctly filtered content from multiple viewpoints. Such content, however, is not easily obtained with current stereoscopic production pipelines. The proposed method and system takes a stereoscopic video as input and converts it into the multi-view, filtered video streams needed to drive multi-view autostereoscopic displays. The method combines phase-based video magnification and inter-perspective antialiasing into a single filtering process. The resulting algorithm is simple and can be implemented efficiently on current GPUs to yield real-time performance, and it naturally supports disparity retargeting. The method is robust and works in the presence of transparent materials and specularities. It provides superior results compared to state-of-the-art depth-based rendering methods, and is showcased in the context of a real-time 3D videoconferencing system.
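The core idea of phase-based view expansion can be sketched, in highly simplified form, as scaling the per-frequency phase difference between the two input views (by the Fourier shift theorem, phase difference encodes horizontal disparity). The actual method operates on a localized complex decomposition rather than a global FFT; the function name and row-wise 1-D transform below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def expand_views(left, right, factors):
    """Toy phase-based view synthesis: scale the per-frequency
    phase difference between two views to move along the view axis.
    `factors` of 0 and 1 reproduce the left and right inputs;
    values outside [0, 1] extrapolate new viewpoints."""
    L = np.fft.fft(left, axis=1)
    R = np.fft.fft(right, axis=1)
    # Per-frequency phase difference between the two views
    # (proportional to horizontal disparity for pure shifts).
    dphi = np.angle(R * np.conj(L))
    views = []
    for a in factors:
        # Scale the phase difference; a > 1 extrapolates
        # beyond the right view.
        V = L * np.exp(1j * a * dphi)
        views.append(np.real(np.fft.ifft(V, axis=1)))
    return views
```

For band-limited content whose disparity stays within the phase-wrapping limit, interpolated views land at the expected fractional shifts; handling wrap-around and spatially varying disparity is what the localized pyramid decomposition in the full method addresses.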
Joint View Expansion and Filtering for Automultiscopic 3D Displays
United States of America | Granted | 9,756,316