Facebook Open-Sources DeepFocus, Bringing More Realistic Images to Virtual Reality

In a recent blog post, Facebook announced it has open-sourced DeepFocus, an AI-powered framework for rendering nearby objects in sharp focus. The technology keeps nearby objects in focus while distant objects appear out of focus, much like a cinematic depth-of-field effect. DeepFocus relies on an end-to-end convolutional neural network that produces accurate retinal blur in near real time.

DeepFocus is the underlying technology behind Half Dome, a prototype headset that combines eye-tracking cameras, wide-field-of-view optics, and independently focused displays to provide lifelike VR experiences.

A multidisciplinary team of researchers at Facebook Reality Labs (FRL) developed DeepFocus with the goal of delivering “experiences that are indistinguishable from reality.” Today's virtual reality (VR) applications, however, typically place objects at mid-distance, because that is where the headset's fixed focal plane keeps the image in focus. Maria Fernandez Guajardo, head of enterprise AR/VR at Facebook, explains the challenge with this approach:

If you try to look at something that is not in the focal plane, like an object that is close to you, things become blurry. To work around this problem, the VR industry has placed objects at a distance of 2 meters. This is limiting and it’s not realistic. Great VR has to work with objects that are close to you.
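This limitation can be quantified with a back-of-the-envelope calculation (an illustration, not from the original post). Optical defocus is measured in diopters, the reciprocal of distance in meters, so an object half a meter away viewed through a display focused at 2 meters is off-focus by

    \Delta D = \left| \frac{1}{d_{\text{object}}} - \frac{1}{d_{\text{focal}}} \right|
             = \left| \frac{1}{0.5\,\mathrm{m}} - \frac{1}{2\,\mathrm{m}} \right|
             = 1.5\ \mathrm{diopters}

That is well beyond the roughly ±0.3 diopters of defocus commonly cited as the eye's tolerance before an image looks noticeably blurry.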

Image source: https://www.oculus.com/blog/introducing-deepfocus-the-ai-rendering-system-powering-half-dome/

The Facebook team explored traditional approaches to optimizing computational displays, but the results fell short of expectations:

Traditional approaches, such as using an accumulation buffer, can achieve physically accurate defocused blur. But they can’t produce the effect in real time for sophisticated, rich content, because the processing demands are too high for even state-of-the-art chips.
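To see why this approach breaks down in real time, consider a minimal Python sketch of an accumulation buffer (a simplified illustration, not Facebook's code; render_fn and all other names are hypothetical). Physically accurate blur emerges from averaging many full renders of the scene taken from different sample points on the lens aperture, so the cost grows linearly with the sample count:

    import numpy as np

    def accumulation_buffer_dof(render_fn, focus_dist, aperture, n_samples=128):
        # render_fn(offset, focus_dist) -> HxWx3 float image rendered from a
        # camera shifted by `offset` on the lens aperture and re-aimed at the
        # focal plane. Each call is a *full* scene render, which is why this
        # technique cannot sustain real-time frame rates for rich content.
        rng = np.random.default_rng()
        accum = None
        for _ in range(n_samples):
            # Sample a point uniformly on the circular lens aperture.
            r = aperture * np.sqrt(rng.random())
            theta = 2.0 * np.pi * rng.random()
            frame = render_fn((r * np.cos(theta), r * np.sin(theta)), focus_dist)
            accum = frame if accum is None else accum + frame
        return accum / n_samples  # average of all jittered renders

With 128 samples, the renderer must draw every frame 128 times over; DeepFocus replaces this entire loop with a single network inference on one rendered frame plus its depth buffer.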

Instead of waiting for chipsets to advance and come down in price, the Facebook team turned to deep learning and developed an end-to-end convolutional neural network. Deep learning is already widely used by AI systems that learn specific tasks by training on large sets of relevant data, but it has generally not been applied to VR systems. The benefits of applying deep learning to VR include:

Producing the image with accurate retinal blur as soon as the eye looks at different parts of a scene. DeepFocus accomplishes this with new volume-preserving interleaving layers that reduce the spatial dimensions of the input while fully preserving image details. The convolutional layers of the network then operate at the same reduced spatial resolution, significantly reducing runtime, as sketched below.
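The interleaving layers as described resemble the space-to-depth rearrangement used in image super-resolution networks. The PyTorch sketch below is one interpretation of that description, not code from the DeepFocus repository: it folds each 2x2 block of pixels into channels (so no information is lost), runs the convolutions at the reduced resolution, and then unfolds the result back to full size:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class InterleavedBlurNet(nn.Module):
        # Toy model: interleave, convolve at reduced resolution, de-interleave.
        def __init__(self, in_ch=4, factor=2, width=64):
            super().__init__()
            self.factor = factor
            # pixel_unshuffle multiplies channels by factor**2 while dividing
            # height and width by factor; the tensor "volume" is preserved.
            self.conv = nn.Sequential(
                nn.Conv2d(in_ch * factor**2, width, 3, padding=1),
                nn.ELU(),
                nn.Conv2d(width, 3 * factor**2, 3, padding=1),
            )

        def forward(self, rgbd):                      # rgbd: (N, 4, H, W)
            x = F.pixel_unshuffle(rgbd, self.factor)  # (N, 16, H/2, W/2)
            x = self.conv(x)                          # runs at reduced resolution
            return F.pixel_shuffle(x, self.factor)    # (N, 3, H, W) defocused RGB

    out = InterleavedBlurNet()(torch.rand(1, 4, 256, 256))  # RGB-D in, image out

Because the convolutions touch a quarter as many spatial locations, runtime drops substantially even though no pixel information is discarded.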

Guajardo demonstrated the results of this deep learning technology in a recent presentation, contrasting a traditional experience, in which close-up objects appear blurry, with a DeepFocus-based one, in which an object becomes sharper as it is brought closer.

Image source: (screenshot) https://www.youtube.com/watch?v=FM7aviAhxG4

DeepFocus relies on standard RGB-D color and depth input, giving it broad applicability to existing VR games. It is also compatible with the headsets being evaluated by the research community, including varifocal displays (like Half Dome), multifocal displays, and light-field displays.
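As a concrete illustration of that input format (a hypothetical helper, not the repository's API), assembling the network input amounts to stacking a color frame with its depth buffer:

    import numpy as np

    def make_rgbd(color, depth):
        # color: HxWx3 float RGB in [0, 1]; depth: HxW linear eye-space depth,
        # as produced by a typical game-engine G-buffer.
        assert color.shape[:2] == depth.shape
        return np.concatenate([color, depth[..., None]], axis=-1)  # HxWx4 RGB-D

Because game engines already produce both buffers for every frame, no additional scene information is required.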

The source code, network models and data sets for DeepFocus can be found in Facebook Research’s GitHub repository.
