-
Meta's Research SuperCluster for Real-Time Voice Translation AI Systems
A recent article from Engineering at Meta describes how the company built its Research SuperCluster (RSC) infrastructure, which powers advances in real-time voice translation, language processing, computer vision, and augmented reality (AR).
-
What Testing in the Metaverse Looks Like
The "metaverse" typically refers to a collective virtual shared space created by the convergence of a virtually enhanced physical reality and a persistent virtual reality. According to Jonathon Wright, testing metaverse applications requires a mix of manual testing, automated testing, user testing, emulators, and simulators, with real-world testing environments used to cover as many scenarios as possible.
-
Meta Shares its Mixed-Reality Meta Horizon OS with Third Parties
By opening up the operating system that powers its Meta Quest devices to third-party hardware makers, Meta aims to build a larger ecosystem and make it easier for developers to create apps that reach larger audiences.
-
Apple Releases visionOS SDK to Developers Along with Reality Composer Pro
Apple announced that developers can now download the software development kit required to create apps for its forthcoming Vision Pro mixed-reality headset. Besides making the SDK available, Apple also unveiled a program to bring physical devices to selected labs around the world, along with further initiatives to help developers test their apps.
-
Meta AI Introduces the Segment Anything Model, a Game-Changing Model for Object Segmentation
Meta AI has introduced the Segment Anything Model (SAM), aiming to democratize image segmentation with a new task, dataset, and model. Alongside SAM itself, the project releases the Segment Anything 1-Billion mask dataset (SA-1B), the largest segmentation dataset to date.
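To give a feel for how segmentation outputs like SAM's are evaluated against ground-truth annotations such as those in SA-1B, here is a minimal, illustrative sketch of intersection-over-union (IoU) scoring for binary masks. This is not SAM's code; the mask layout and function name are assumptions for the example.

```python
# Illustrative only: scoring a predicted binary mask (e.g. one produced by a
# promptable model like SAM) against a ground-truth mask with IoU.
# Masks are lists of rows of 0/1 values; names are assumptions for this sketch.

def mask_iou(pred, truth):
    """Intersection-over-union between two same-shaped binary masks."""
    intersection = 0
    union = 0
    for pred_row, truth_row in zip(pred, truth):
        for p, t in zip(pred_row, truth_row):
            if p and t:
                intersection += 1
            if p or t:
                union += 1
    return intersection / union if union else 0.0

pred = [
    [0, 1, 1],
    [0, 1, 1],
    [0, 0, 0],
]
truth = [
    [0, 1, 1],
    [0, 1, 0],
    [0, 0, 0],
]
print(mask_iou(pred, truth))  # 3 overlapping pixels / 4 total -> 0.75
```

An IoU of 1.0 means the predicted mask matches the annotation exactly; values near 0 mean the prediction missed the object.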
-
Immersive Stream for XR: Extended Reality Experiences from Google Cloud
Google Cloud recently announced the general availability of Immersive Stream for XR, a managed service to host, render, and stream 3D and extended reality (XR) experiences. The service renders 3D and augmented-reality content in the cloud, so experiences no longer depend on the capabilities of a smartphone's hardware.
-
Facebook Develops New AI Model That Can Anticipate Future Actions
Facebook unveiled its latest machine-learning model, the Anticipative Video Transformer (AVT), which predicts future actions from visual input. AVT is an end-to-end attention-based model for action anticipation in videos.
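The attention mechanism at the heart of models like AVT can be sketched in a few lines: the representation of the most recent frame attends over all observed frames, and the pooled vector would feed a classifier that predicts the upcoming action. This is a toy illustration in pure Python, not Meta's implementation; the feature values and dimensions are made up.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, frames):
    """Scaled dot-product attention of one query over per-frame features."""
    scale = math.sqrt(len(query))
    weights = softmax([dot(query, f) / scale for f in frames])
    dim = len(frames[0])
    return [sum(w * f[i] for w, f in zip(weights, frames)) for i in range(dim)]

# Three observed frames, each a 2-d feature vector; the last frame is the query.
frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled = attend(frames[-1], frames)
print(pooled)  # weighted mix, dominated by frames most similar to the query
```

In a real anticipation model the frame features come from a video backbone and the pooled vector is decoded into a distribution over future actions; the attention step itself is the part shown here.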
-
ARKit 5 and RealityKit 2 Further Enhance iOS AR Capabilities
At WWDC21 Apple announced major new versions of its ARKit and RealityKit frameworks for creating augmented-reality apps on iOS. Most significantly, RealityKit 2 lets developers easily create 3D models from a collection of pictures, while ARKit 5 expands face tracking and location anchor support.
-
Microsoft Announces a Hologram-Based Mixed-Reality Communication Platform Called Microsoft Mesh
During the recent virtual Ignite conference, Microsoft announced Microsoft Mesh, an Azure-based cloud platform that lets developers build immersive, multi-user, cross-platform mixed-reality apps. Customers can leverage Mesh to enhance virtual meetings, conduct virtual design sessions, better support remote work, learn together virtually, and host virtual social gatherings and meet-ups.
-
MediaPipe Introduces Holistic Tracking for Mobile Devices
Holistic tracking is a new feature in MediaPipe that enables the simultaneous detection of body pose, hand pose, and face landmarks on mobile devices. The three capabilities were previously available as separate solutions, but are now combined into a single, highly optimized pipeline.
-
Google Releases Objectron Dataset for 3D Object Recognition AI
Google Research announced the release of Objectron, a machine-learning dataset for 3D object recognition. The dataset contains 15k video segments and 4M images with ground-truth annotations, along with tools for using the data to train AI models.
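Objectron's ground-truth annotations describe objects with 3D bounding boxes. A minimal sketch of that geometry is deriving a box's eight corners from a center point and per-axis dimensions; note that Objectron's real boxes also carry an orientation, which this axis-aligned illustration omits, and the function name is an assumption.

```python
from itertools import product

def box_corners(center, size):
    """Return the 8 corners of an axis-aligned 3D box.

    center: (x, y, z) of the box center.
    size:   full extent of the box along each axis.
    """
    cx, cy, cz = center
    dx, dy, dz = (s / 2.0 for s in size)
    return [
        (cx + sx * dx, cy + sy * dy, cz + sz * dz)
        for sx, sy, sz in product((-1, 1), repeat=3)
    ]

# A box centered 1 m in front of the camera, 0.2 x 0.4 x 2.0 m in extent.
corners = box_corners(center=(0.0, 0.0, 1.0), size=(0.2, 0.4, 2.0))
print(len(corners))  # 8
```

Together with the center, these eight corners form the nine keypoints a 3D-detection model trained on such data would predict per object.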
-
ARCore 1.20 Brings Persistence and Global Localization to Cloud Anchors
Two years ago, Google introduced Cloud Anchors in ARCore 1.2 to enable collaborative AR experiences across devices. In its latest release, ARCore removes a limitation in Cloud Anchors by providing support for full persistence. Additionally, ARCore 1.20 integrates with Google Earth to make it easier to find AR content.
-
iOS 14 Now Available, Developers Forced to Rush to Submit Apps
Apple has released the first public version of iOS 14, which brings a number of new features such as app clips, widgets, and improvements to SwiftUI, ARKit, Core ML, and more. However, developers received the GM versions of iOS and Xcode a mere 24 hours in advance, which led to some frustration.
-
Google ARCore Depth API Now Available with Additional Sample Code
Released in closed beta at the end of last year, ARCore Depth is now generally available in ARCore 1.18. Since the initial announcement, Google has been working with selected partners to create compelling use cases for this technology.
-
Google ML Kit SDK Now Focuses on On-Device Machine Learning
Google has introduced a new ML Kit SDK that works in standalone mode, without the tight Firebase integration the original ML Kit SDK required. It also offers limited support for replacing the default models with custom ones for image labeling and for object detection and tracking.