
ARKit 5 and RealityKit 2 Further Enhance iOS AR Capabilities


At WWDC21, Apple announced major new iterations of ARKit and RealityKit, its frameworks for creating augmented reality apps on iOS. Most significantly, RealityKit 2 will allow developers to easily create 3D models from a collection of pictures, while ARKit 5 expands face tracking and location anchor support.

With the new Object Capture API, part of RealityKit 2 and bundled with macOS 12 Monterey, developers will be able to create 3D models from pictures taken with any high-resolution camera, including those on iPhones and iPads. Object Capture uses a process called photogrammetry, in which you provide a series of pictures taken from various angles, making sure to avoid objects that are too thin in one dimension or highly reflective.

The number of pictures RealityKit needs to create an accurate 3D representation varies with the complexity and size of the object, but adjacent shots must overlap substantially. According to Apple, you should aim for at least 70% overlap between successive shots and never go below 50%. Object Capture can also take advantage of depth information, when available, to improve the output model.

Once you have enough pictures of your object, creating a 3D model from them is largely a matter of executing some boilerplate code and customizing it to your needs; the critical part of the whole process is capturing high-quality images. To simplify things, Apple showed two sample projects: an iOS app that takes pictures on devices equipped with a dual rear camera, which can record depth and gravity data, and a macOS command-line tool that streamlines turning the images into a 3D model.
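As a rough sketch of what that boilerplate looks like, the snippet below uses RealityKit's PhotogrammetrySession on macOS to turn a folder of images into a USDZ model. The input and output paths are placeholders, and error handling and keeping a command-line tool alive are left out for brevity.

```swift
import Foundation
import RealityKit

// Placeholder paths: a folder of captured images and the desired output file.
let inputFolder = URL(fileURLWithPath: "/path/to/captured-images", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

// Create a photogrammetry session over the input folder.
let session = try PhotogrammetrySession(input: inputFolder)

// Request a reduced-detail model; other levels include .medium, .full, and .raw.
try session.process(requests: [.modelFile(url: outputFile, detail: .reduced)])

// Observe the session's output stream for progress and completion.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fractionComplete):
            print("Progress: \(Int(fractionComplete * 100))%")
        case .requestComplete(_, let result):
            if case .modelFile(let url) = result {
                print("Model written to \(url)")
            }
        case .requestError(_, let error):
            print("Request failed: \(error)")
        case .processingComplete:
            print("Processing complete")
        default:
            break
        }
    }
}
```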

Object Capture is not the only new feature in RealityKit 2. Apple also introduced support for custom shaders, which give developers more control over the rendering pipeline and make it possible to fine-tune the look and feel of AR objects and scenes. Additionally, RealityKit 2 can now generate procedural meshes, unlocking possibilities beyond the boxes, spheres, text, and planes supported in RealityKit 1. Another promising new feature is the ability to build custom Entity Component Systems to organize AR assets, making it simpler to create complex AR apps.
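To give a flavor of the procedural geometry API, here is a minimal sketch that builds a single-triangle mesh with MeshDescriptor and wraps it in a ModelEntity; the vertex coordinates are arbitrary and the code is assumed to run in a throwing context.

```swift
import RealityKit

// Describe a single triangle procedurally: three vertices and one triangle primitive.
var descriptor = MeshDescriptor(name: "triangle")
descriptor.positions = MeshBuffers.Positions([
    SIMD3<Float>(0, 0, 0),
    SIMD3<Float>(0.1, 0, 0),
    SIMD3<Float>(0, 0.1, 0)
])
descriptor.primitives = .triangles([0, 1, 2])

// Generate a MeshResource from the descriptor and attach it to an entity.
let mesh = try MeshResource.generate(from: [descriptor])
let entity = ModelEntity(mesh: mesh, materials: [SimpleMaterial()])
```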

Being a more mature framework, ARKit 5 is not as rich in new features as RealityKit 2. Instead, it mostly expands existing functionality, including face tracking and location anchor support.

Face tracking is now possible using the front-facing camera on any device with an A12 Bionic chip or later, and it can track up to three faces at once.
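In practice, enabling multi-face tracking amounts to a few lines of configuration code. The sketch below assumes a RealityKit ARView named arView is already on screen:

```swift
import ARKit
import RealityKit

// Assumes an existing ARView; call this from your view setup code.
func startFaceTracking(in arView: ARView) {
    // Face tracking requires supported (A12-class or later) hardware.
    guard ARFaceTrackingConfiguration.isSupported else { return }

    let configuration = ARFaceTrackingConfiguration()
    // Track as many faces as the device supports (up to three on ARKit 5).
    configuration.maximumNumberOfTrackedFaces =
        ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces

    arView.session.run(configuration)
}
```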

Location anchors, on the other hand, make it possible to pin an AR scene to a physical location, such as a city street or a famous landmark, for example to show virtual signposts as the user approaches a street, a monument, and so on.
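Setting up a location anchor follows a similar pattern using ARGeoTrackingConfiguration and ARGeoAnchor; in the sketch below, the ARView and the coordinates (roughly San Francisco's Ferry Building) are placeholders:

```swift
import ARKit
import RealityKit
import CoreLocation

// Assumes an existing ARView; geo tracking only works on supported devices and regions.
func startGeoTracking(in arView: ARView) {
    ARGeoTrackingConfiguration.checkAvailability { available, _ in
        guard available else { return }

        arView.session.run(ARGeoTrackingConfiguration())

        // Anchor AR content to a real-world latitude/longitude.
        let coordinate = CLLocationCoordinate2D(latitude: 37.7956, longitude: -122.3934)
        arView.session.add(anchor: ARGeoAnchor(coordinate: coordinate))
    }
}
```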
