ARKit 3 Brings People Occlusion, Motion Capture, and More

ARKit 3, recently announced at WWDC 2019, moves further toward more immersive augmented reality experiences by adding support for integrating people with virtual objects and for injecting human motion into the AR experience. Other new features in ARKit 3 include multiple face tracking, simultaneous use of the front and back cameras, and collaborative sessions.

People occlusion makes it possible to mix people and virtual objects in the same AR scene, achieving a realistic immersion effect. ARKit 3 is able to understand where people are in the scene and position them so they appear in front of virtual objects, thus partially occluding them. This is a significant change with respect to ARKit 2, where virtual objects added to a scene would always appear on top of the image captured by the camera, as if floating over it. This new feature in ARKit 3 allows people to be seen moving around, and even through, virtual objects placed in the AR world.
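In code, people occlusion is opted into through the new frame semantics on the session configuration. Below is a minimal sketch based on the ARKit 3 beta API; the enablePeopleOcclusion helper is illustrative, not part of the framework:

```swift
import ARKit

// A minimal sketch of enabling people occlusion on an existing ARSession
// (e.g. the session backing a RealityKit ARView or an ARSCNView).
func enablePeopleOcclusion(on session: ARSession) {
    // People occlusion requires recent hardware, so check support first.
    guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
        return // fall back to a non-occluding experience
    }

    let configuration = ARWorldTrackingConfiguration()
    // Ask ARKit to segment people and estimate their depth every frame,
    // so virtual content behind a person is hidden by that person.
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
    session.run(configuration)
}
```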

Motion capture takes the awareness of human bodies even further by making it possible to track body movements and use them as input to the AR scene. In a demo video, Apple showed a synthetic robot alongside a real person; the robot mimicked the person's body movements with good responsiveness and precision. Judging from the minimalism of the demo, motion capture is likely still in its infancy, but the ability to use body movement as input to the AR scene promises to open up many new UX possibilities, as Apple itself showed by demoing a game in which players compete on a virtual bowling alley, trying to hit a giant ball so it knocks down bowling pins set up behind each opponent.
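Motion capture is surfaced through a new body-tracking session configuration that delivers body anchors carrying a tracked skeleton. The following sketch, based on the beta API, shows how the skeleton data might be read; the BodyTracker class is illustrative:

```swift
import ARKit

// A minimal sketch of consuming ARKit 3 motion capture data.
class BodyTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Body tracking is limited to devices that support it.
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    // ARKit delivers an ARBodyAnchor whenever the tracked body updates.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Joint transforms are relative to the body anchor; they could
            // be retargeted onto a virtual character such as Apple's robot.
            if let head = bodyAnchor.skeleton.modelTransform(for: .head) {
                print("Head position:", head.columns.3)
            }
        }
    }
}
```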

Another new feature in ARKit 3 that enables novel ways of providing input and commands to an app is the simultaneous activation of the front and back cameras. Leveraging the ability of the TrueDepth front-facing camera to detect and track face movements, this could, for example, enable users to interact with virtual content in the world view captured by the back camera using their facial expressions or head movements. And speaking of face detection, ARKit is now able to detect and track the movements of up to three faces on devices with a TrueDepth front-facing camera.
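In the beta API, this surfaces as an option on the world-tracking configuration, while the face-tracking configuration gains a limit on the number of tracked faces. A minimal sketch, with both helper functions being illustrative:

```swift
import ARKit

// World tracking on the back camera with face tracking from the
// TrueDepth front camera running at the same time (ARKit 3 beta API).
func runCombinedTracking(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsUserFaceTracking {
        configuration.userFaceTrackingEnabled = true
    }
    // Face data then arrives as ARFaceAnchor updates within the world
    // session, e.g. to drive virtual content from facial expressions.
    session.run(configuration)
}

// Multiple face tracking on the front camera, up to the device limit
// (three faces on TrueDepth devices, per the announcement).
func runMultiFaceTracking(on session: ARSession) {
    guard ARFaceTrackingConfiguration.isSupported else { return }
    let configuration = ARFaceTrackingConfiguration()
    configuration.maximumNumberOfTrackedFaces =
        ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
    session.run(configuration)
}
```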

ARKit 3 also advances the possibilities for shared AR experiences by introducing collaborative sessions, i.e., sessions in which ARKit periodically shares your world map, a snapshot of all the spatial mapping information ARKit uses to locate your device in real-world space, with other users. With ARKit 2, you had to choose the right moment to create the world map snapshot and serialize it on your own before sending it. With ARKit 3, you only need to be concerned with sending the data over the network, while the framework takes care of deciding when to share the data and serializing it.
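A minimal sketch of the new flow, based on the beta API: you enable one flag, forward the data blobs ARKit hands you over whatever transport you use (the send hook below is a hypothetical stand-in for your networking code), and feed received blobs back into the session:

```swift
import ARKit

class CollaborationHandler: NSObject, ARSessionDelegate {
    let session = ARSession()
    var send: ((Data) -> Void)?  // hypothetical hook into your networking layer

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true  // new in ARKit 3
        session.delegate = self
        session.run(configuration)
    }

    // ARKit decides when to share and calls this with ready-made data.
    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        if let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                           requiringSecureCoding: true) {
            send?(encoded)
        }
    }

    // Feed data received from peers back into the local session.
    func receive(_ encoded: Data) {
        if let data = try? NSKeyedUnarchiver.unarchivedObject(
                ofClass: ARSession.CollaborationData.self, from: encoded) {
            session.update(with: data)
        }
    }
}
```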

ARKit 3 also includes a number of incremental improvements to existing features, such as faster reference image loading, the ability to detect up to 100 images at a time, automatic image size estimation, and more robust 3D object detection.
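For image detection, the new options surface on the world-tracking configuration. A brief sketch, assuming the reference images live in an asset catalog group named "Gallery" (the group name is illustrative):

```swift
import ARKit

func runImageDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()

    if let references = ARReferenceImage.referenceImages(inGroupNamed: "Gallery",
                                                         bundle: nil) {
        configuration.detectionImages = references
    }
    // New in ARKit 3: estimate a detected image's physical size instead
    // of requiring an exact size to be specified up front.
    configuration.automaticImageScaleEstimationEnabled = true
    session.run(configuration)
}
```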

ARKit 3 is available in beta with Xcode 11 and iOS 13 to registered developers.
