
AR glasses are lightweight eyewear that project digital information and overlays onto transparent lenses.
ARCore is Google's framework that allows Android devices to track the space around them and place virtual objects in the physical world.
ARKit is Apple's framework that enables iOS devices to understand the space around them and place augmented reality content accurately within a space.
Apple Vision Pro is a spatial computing headset that blends apps, interfaces, and virtual 3D content into the real world.
Augmented Reality blends digital content with the physical world around you, displayed through a phone, a tablet, or a headset.
A depth sensor is a type of camera that measures the distance between a device and the real-world objects around it to build a 3D map and mesh of a space.
Drift can happen in AR experiences when digital objects shift away from where they were placed due to tracking issues or lag.
Eye tracking is a technology that can monitor where a user is looking, so AR devices and interfaces can react to gaze.
Field of view measures how much of a digital scene is visible through AR lenses, typically expressed in degrees.
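As a rough illustration of how field of view relates to geometry (a sketch with hypothetical names, not code from any AR SDK): the angle subtended by a display grows with its width and shrinks with its distance from the eye.

```python
import math

def field_of_view_deg(display_width: float, eye_distance: float) -> float:
    """Angular field of view (degrees) subtended by a display of the
    given width, viewed from the given distance (same units)."""
    return math.degrees(2 * math.atan((display_width / 2) / eye_distance))

# A 50 mm wide virtual image plane 50 mm from the eye subtends ~53 degrees.
print(round(field_of_view_deg(50, 50), 1))  # 53.1
```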
Frame rate measures how many frames per second are rendered within your AR experience.
GLB is a compact and standardized 3D file format. It is optimal for AR to store 3D geometry, textures, and animations.
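The GLB container starts with a fixed 12-byte header defined by the glTF 2.0 specification: the ASCII magic "glTF", a version number, and the total file length. A minimal sketch of reading that header (function name is my own):

```python
import struct

def parse_glb_header(data: bytes) -> dict:
    """Parse the 12-byte GLB header: magic, container version, total length
    (all little-endian, per the glTF 2.0 spec)."""
    magic, version, length = struct.unpack_from("<4sII", data, 0)
    if magic != b"glTF":
        raise ValueError("not a GLB file")
    return {"version": version, "length": length}

# Build a minimal header by hand to demonstrate:
header = struct.pack("<4sII", b"glTF", 2, 12)
print(parse_glb_header(header))  # {'version': 2, 'length': 12}
```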
A Gaussian splat is a 3D representation of a space or object made from point clouds, where each point is rendered as a soft, semi-transparent blob (a 3D Gaussian) rather than a hard-edged polygon.
Image anchors allow augmented reality devices to recognize a 2D image and anchor digital content to it and the surrounding environment.
LiDAR (Light Detection and Ranging) is a type of depth sensor and 3D scanning technology that measures the distance between a sensor and its surroundings with laser pulses, generating a 3D map for augmented reality.
Location-based AR refers to augmented reality experiences tied to a physical place or context. These use GPS and spatial anchoring techniques to place digital content reliably in the same location.
A Material defines how a 3D model looks in AR, controlling the model's surface properties: its texture, color, and reflectivity.
Mesh optimization is the process of reducing the complexity of a 3D mesh to improve performance for real-time graphics.
The Meta Quest 3 is a standalone head-mounted display that supports virtual reality and passthrough augmented reality experiences.
Occlusion allows augmented reality and virtual objects to appear behind physical objects to create the illusion of realistic depth.
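At its core, occlusion is a per-pixel depth comparison: the virtual object is drawn only where it is closer to the camera than the real surface behind that pixel. A toy sketch (names and values are illustrative, not from any renderer):

```python
def composite_pixel(virtual_color, virtual_depth, real_color, real_depth):
    """Per-pixel occlusion: show the virtual object only where it is
    closer to the camera than the real-world surface at that pixel."""
    return virtual_color if virtual_depth < real_depth else real_color

# A virtual cube 1.2 m away behind a real couch 0.8 m away: the couch wins.
print(composite_pixel("cube", 1.2, "couch", 0.8))  # couch
```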
PBR, or Physically Based Rendering, is a rendering method that simulates how light interacts with materials to produce realistic results.
Passthrough refers to the technique of showing the real world via a combination of cameras and displays mounted close to the eyes within an AR headset. This enables users to see their environment even though their eyes are covered.
Plane detection is a technology that can identify flat surfaces, like floors, tables, and walls, so augmented reality objects can be placed upon them.
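Once a flat surface is detected, placing content on it comes down to simple plane geometry. A minimal sketch (pure math, not any SDK's API) that measures how far a point sits from a plane defined by three detected points:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def point_to_plane_distance(point, p0, p1, p2):
    """Unsigned distance from `point` to the plane through p0, p1, p2."""
    n = cross(sub(p1, p0), sub(p2, p0))  # plane normal
    norm = dot(n, n) ** 0.5
    return abs(dot(sub(point, p0), n)) / norm

# A floor plane through three points at height 0; a point 0.5 m above it:
print(point_to_plane_distance((0.2, 0.5, 0.3), (0, 0, 0), (1, 0, 0), (0, 0, 1)))
```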
Polycount refers to the number of polygons within a 3D model. The larger the polycount, the more memory a model will take. This affects performance, lag, and visual detail.
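The memory cost of polycount can be sketched with simple arithmetic. Assuming an unindexed triangle mesh where each vertex carries a position, normal, and UV coordinates (3 + 3 + 2 floats at 4 bytes each = 32 bytes, a common but not universal layout):

```python
def vertex_buffer_bytes(polycount: int, bytes_per_vertex: int = 32) -> int:
    """Rough GPU memory estimate for an unindexed triangle mesh:
    3 vertices per triangle, 32 bytes per vertex by default
    (position + normal + UVs as 4-byte floats)."""
    return polycount * 3 * bytes_per_vertex

# A 100,000-triangle model needs roughly 9.6 MB of unindexed vertex data.
print(vertex_buffer_bytes(100_000) / 1e6)  # 9.6
```

Indexed meshes and compressed vertex formats shrink this considerably, which is one reason mesh optimization matters for mobile AR.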
A QR code is a scannable marker that can trigger a digital experience like a website or app download. In AR, it can launch an immersive experience or serve as part of an image anchor.
SLAM, or Simultaneous Localization and Mapping, is a technique that allows AR devices to track their position within a space while building a map of the environment.
6DOF, or six degrees of freedom, is a term that describes how AR headsets and controllers can track both position and rotation within 3D space.
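The six degrees are three positional axes and three rotational axes. A minimal sketch of a 6DOF pose as a data structure (the class and field names are my own, not any SDK's):

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    # Three positional degrees of freedom (metres)
    x: float
    y: float
    z: float
    # Three rotational degrees of freedom (degrees)
    pitch: float
    yaw: float
    roll: float

    def translate(self, dx: float, dy: float, dz: float) -> "Pose6DOF":
        """Move the pose without changing its orientation."""
        return Pose6DOF(self.x + dx, self.y + dy, self.z + dz,
                        self.pitch, self.yaw, self.roll)

head = Pose6DOF(0.0, 1.7, 0.0, 0.0, 90.0, 0.0)  # standing, facing 90° yaw
print(head.translate(0.0, 0.0, -0.5))           # step half a metre along z
```

Real systems usually store the rotation as a quaternion rather than pitch/yaw/roll angles to avoid gimbal lock, but the six tracked quantities are the same.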
Spatial audio allows sounds to play as if they come from fixed positions all around your space or environment.
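The simplest building block of spatial audio is panning a sound between the ears based on its direction. A toy constant-power pan (a sketch, not a real HRTF-based spatializer; function name is my own):

```python
import math

def stereo_gains(azimuth_deg: float) -> tuple:
    """Constant-power pan: map a source's horizontal angle
    (-90 = hard left, +90 = hard right) to left/right channel gains."""
    theta = math.radians((azimuth_deg + 90) / 2)  # map to 0..90 degrees
    return (math.cos(theta), math.sin(theta))

left, right = stereo_gains(0)  # source straight ahead
print(round(left, 3), round(right, 3))  # 0.707 0.707
```

Full spatial audio adds distance attenuation and head-related filtering so sounds stay fixed in the room as the listener turns.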
Spatial computing refers to tech that understands the physical space around you and blends digital content into it seamlessly and interactively. This is a broader term than Augmented Reality and it includes Virtual Reality and other spatially-aware tech as well.
A visual positioning system (VPS) uses a device's camera to scan a space and recognize landmarks in the environment, determining the device's exact location in space.
A World Mesh is a 3D representation of a real-world environment, built from vertices and triangles using data from a depth sensor.
World tracking enables augmented reality devices to understand their location within a physical space and keep digital 3D models anchored in their surroundings.



