
Oculus VR News | March 28, 2024

Facebook 'Neural Supersampling' Improves VR Rendering Performance

Image courtesy of: Oculus

FRΛNK R.

Oculus’ parent company Facebook has developed a technique for supersampling real-time rendered 3D content, using a machine learning-based approach that could enable much higher-resolution graphics in games and applications on future VR headsets.

In a SIGGRAPH 2020 technical paper entitled “Neural Supersampling for Real-Time Rendering,” the Facebook research team explains how neural networks allowed them to build a system that takes inputs from modern game engines, such as color, depth, and motion vectors at a lower resolution, and upsamples that imagery to a high-resolution output suitable for real-time rendering. Given enough training data to optimize for a scene, this neural upsampling process is said to restore sharp details while saving computational overhead.
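To make the idea concrete, here is a minimal PyTorch sketch of a network that consumes those three engine inputs and predicts a 16x (4x4) upsampled frame. This is an illustrative stand-in, not the architecture from the paper: the layer sizes, the PixelShuffle upsampler, and the NaiveNeuralSupersampler name are all assumptions made for demonstration.

```python
# Illustrative sketch of learned supersampling from engine buffers.
# NOT Facebook's published architecture; all layer sizes are arbitrary.
import torch
import torch.nn as nn


class NaiveNeuralSupersampler(nn.Module):
    def __init__(self, scale: int = 4):
        super().__init__()
        # Inputs per pixel: RGB color (3) + depth (1) + 2D motion vector (2).
        in_channels = 3 + 1 + 2
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # PixelShuffle rearranges channels into a (scale x scale) larger
        # image, a standard building block for learned upsampling.
        self.upsample = nn.Sequential(
            nn.Conv2d(64, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, color, depth, motion):
        x = torch.cat([color, depth, motion], dim=1)
        return self.upsample(self.features(x))


if __name__ == "__main__":
    net = NaiveNeuralSupersampler(scale=4)
    color = torch.rand(1, 3, 540, 960)   # low-res engine color buffer
    depth = torch.rand(1, 1, 540, 960)   # low-res depth buffer
    motion = torch.rand(1, 2, 540, 960)  # per-pixel motion vectors
    out = net(color, depth, motion)
    print(out.shape)  # torch.Size([1, 3, 2160, 3840]) -- 16x the pixels
```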

“Our approach is the first learned supersampling method that achieves significant 16x supersampling of rendered content with high spatial and temporal fidelity, outperforming prior work by a large margin,” Facebook Reality Labs (FRL) researcher Lei Xiao explains in a blog post.

Xiao added: “To reduce the rendering cost for high-resolution displays, our method works from an input image that has 16 times fewer pixels than the desired output. For example, if the target display has a resolution of 3840×2160, then our network starts with a 960×540 input image rendered by game engines, and upsamples it to the target display resolution as a post-process in real-time.”
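The arithmetic behind that claim checks out: quartering the resolution in each dimension cuts the pixel count by a factor of 16, as this quick Python sanity check using the resolutions quoted above shows.

```python
# Verify the 16x pixel-count claim: rendering at a quarter of the target
# resolution in each dimension yields 1/16th of the pixels.
target_w, target_h = 3840, 2160
input_w, input_h = 960, 540

ratio = (target_w * target_h) / (input_w * input_h)
print(f"{target_w}x{target_h} has {ratio:.0f}x the pixels of {input_w}x{input_h}")
# -> 3840x2160 has 16x the pixels of 960x540
```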

The FRL research team notes that Nvidia’s Deep Learning Super Sampling (DLSS) is the closest approach to their solution. However, methods like DLSS are known either to introduce “obvious visual artifacts into the upsampled images, especially at upsampling ratios higher than 2×2,” or to rely on proprietary software and hardware that may not be available across all platforms. Facebook’s method, by contrast, is said to require no special hardware or software such as proprietary drivers (as DLSS does) and to integrate easily with modern 3D game engines, making it applicable to a wider variety of existing software platforms, acceleration hardware, and displays.
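To illustrate that engine-integration claim, here is a hypothetical sketch of where such an upsampler would sit in a frame loop as a post-process. The StubEngine class and its method names are invented for this example, and a cheap nearest-neighbor resize stands in for the learned network just to keep the sketch runnable.

```python
# Hypothetical frame loop: render low-res buffers, upsample as a
# post-process, present at full resolution. Engine API names are invented.
import torch
import torch.nn.functional as F


class StubEngine:
    """Stand-in for a game engine that renders low-res buffers."""
    def render_low_res(self, w, h):
        color = torch.rand(1, 3, h, w)   # RGB color buffer
        depth = torch.rand(1, 1, h, w)   # depth buffer
        motion = torch.rand(1, 2, h, w)  # per-pixel motion vectors
        return color, depth, motion

    def present(self, frame):
        print("presenting frame of shape", tuple(frame.shape))


def run_frame(engine, upsample, target=(3840, 2160), scale=4):
    w, h = target[0] // scale, target[1] // scale
    color, depth, motion = engine.render_low_res(w, h)  # cheap render
    frame_hi = upsample(color, depth, motion)           # learned post-process
    engine.present(frame_hi)                            # display at full res


if __name__ == "__main__":
    # Any upsampler with a (color, depth, motion) signature plugs in here,
    # e.g. the NaiveNeuralSupersampler sketched earlier.
    def naive(color, depth, motion):
        return F.interpolate(color, scale_factor=4)  # nearest-neighbor stand-in

    run_frame(StubEngine(), naive)
```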

“As AR/VR displays reach toward higher resolutions, faster frame rates, and enhanced photorealism, neural supersampling methods may be key for reproducing sharp details by inferring them from scene data, rather than directly rendering them. This work points toward a future for high-resolution VR that isn’t just about the displays, but also the algorithms required to practically drive them,” Xiao concluded.

If you want to dive deeper into the research from Lei Xiao, Salah Nouri, Matt Chapman, Alexander Fix, Douglas Lanman, and Anton Kaplanyan of Facebook Reality Labs, you can read the full paper here.