When Apple announced the Vision Pro, it described it not as a headset, but as a “spatial computer.” We’ve seen similar devices before from Microsoft, Meta, and Magic Leap, but those companies favor the more familiar terms extended reality (XR), virtual reality (VR), and augmented reality (AR).
So, what is spatial computing, and why did Apple CEO Tim Cook call it “the beginning of a new era for computing”?
What is a spatial computer?
In his 2003 MIT graduate thesis, Simon Greenwold defined spatial computing as “human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.” He described the fundamental objective of spatial computing as “the union of the real and computed,” and envisioned all sorts of devices with sensing and processing capabilities.
Twenty years later, the term is associated with a head-mounted display that detects objects, surfaces, and walls in your surroundings. Cameras, microphones, and other sensors feed data to an onboard processor, which analyzes the environment and presents useful information.
A computer that’s aware of its environment is a step up from traditional towers and laptops, which can capture the outside world in limited ways but still leave most of the analysis to us. Now we’re starting to get assistance with reality. That assistance began with smartphones, which let us ask new kinds of questions: How far away is that? How long will it take to get there? What kind of flower is that?
In the future, we’ll all be wearing spatial computers. They’re the next step after smart glasses, which will help ease the transition from smartphones. You’ll be able to instantly see directions, hear translations, and request more details about anything around you.
Imagine a far more powerful version of Google Lens, a measuring app, a translation app, a recommendation guide, and a custom audiovisual tutor, all available whenever you ask for help throughout your day. Now go even further.
A future spatial computer will replace every screen and printer, most computers, and all tablets, phones, and watches. It will help you connect with others and put them in the room with you, even when they’re miles away. It will help with your work and personal tasks, greatly simplifying life. We’re not there yet; the Apple Vision Pro is just the beginning.
Spatial computer = reality computer
As a spatial computer, the Vision Pro interacts with the real world. The device scans its surroundings with lidar and color cameras to augment your experience with virtual screens, surround sound, and even three-dimensional objects.
When you turn or move, the Vision Pro adjusts the displayed image accordingly, as if the computer-generated elements were actually present in your room. Of course, your iPhone can handle AR too, placing an IKEA shelf in the corner via ARKit or previewing an iPad on your table.
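For a sense of what that looks like in practice on iPhone, here’s a minimal sketch of the usual ARKit pattern: run world tracking with plane detection, then anchor a virtual object to a detected surface. (This is illustrative only; the view controller and box dimensions are hypothetical, not code from an Apple sample.)

```swift
import UIKit
import ARKit
import SceneKit

class ARViewController: UIViewController, ARSCNViewDelegate {
    // Hypothetical setup: an ARSCNView created in code and added full-screen.
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Track the device's position and look for flat surfaces.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        sceneView.session.run(config)
    }

    // Called when ARKit detects a new anchor; attach a virtual box to planes.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                           length: 0.1, chamferRadius: 0))
        box.position = SCNVector3(0, 0.05, 0) // rest on top of the plane
        node.addChildNode(box)
    }
}
```

The same anchor-to-the-world idea, scaled up with more sensors and processing power, is what a spatial computer does continuously across your entire room.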
The Vision Pro goes further, filling your view with multiple browser windows, a giant TV screen, and friends or coworkers in a group chat. In some cases, the experience extends beyond the screen, wrapping an immersive, themed environment around you. Apple’s Vision Pro can operate within reality or completely transform it. That’s impressive, but it isn’t entirely new.
Any VR headset with a passthrough view is a type of spatial computer, matching your movement to the displayed image. Headsets from Meta, HTC, Pico, and others have similar capabilities, though their tracking and mapping aren’t as accurate as Apple’s.
For example, Meta’s Quest Pro can overlay 3D graphics on your room, display multiple virtual screens, then switch to total immersion to show a 360-degree 3D video. It can identify where the floor is, but it lacks a depth sensor, so you have to mark furniture manually. That limits how well graphics can interact with your surroundings.
High-end AR headsets, like Microsoft HoloLens 2 and Magic Leap 2, include depth mapping hardware so virtual objects can interact with the environment. However, the small field of view in the see-through displays spoils immersion and intuitive interaction. The edges become a constant reminder that this isn’t real, just like looking at AR effects through a smartphone.
Apple Vision Pro is a beginning
Apple’s Vision Pro could be the first device to get spatial computing right. However, it’s too expensive for most consumers, and the full extent of its capabilities is unclear. The Vision Pro probably isn’t the ultimate spatial computer. It’s the beginning of the AR future we’ve marveled at in science fiction movies for a couple of decades.
Apple’s Vision Pro is bulky, so it won’t be as convenient as the translucent computer interface in Minority Report or as powerful as Tony Stark’s Jarvis, which intuitively displays relevant data with minimal input. However, it’s revolutionary in many ways.
The Vision Pro knows where you are in space and where you’re looking, notices the smallest movement of your fingers, and detects when people are nearby. Two powerful processors, the M2 and a dedicated R1 chip for sensor processing, provide the performance to make all of that responsive in real time.
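For developers, those inputs surface as high-level events rather than raw sensor streams. As a rough sketch (assuming a visionOS RealityKit scene; the view and entity names here are hypothetical), eye gaze plus a finger pinch arrives as a simple tap gesture on whatever entity you were looking at:

```swift
import SwiftUI
import RealityKit

// Hypothetical visionOS view: the system fuses eye tracking (what you look at)
// with a finger pinch into a single tap gesture on a targeted entity.
struct TapToLiftView: View {
    var body: some View {
        RealityView { content in
            // A simple sphere the user can target with their eyes.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            sphere.components.set(InputTargetComponent())    // make it tappable
            sphere.generateCollisionShapes(recursive: false) // needed for hit testing
            content.add(sphere)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Nudge whichever entity the user looked at and pinched.
                    value.entity.position.y += 0.05
                }
        )
    }
}
```

Notably, the app never receives raw eye-tracking data; the system resolves your gaze and pinch into a tap on the targeted entity.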
Apple barely scratched the surface of what’s possible when it announced the Vision Pro. As Meta learned, overhyping is a costly mistake in the VR industry, and Apple carefully avoided any mention of the metaverse or even VR gaming.
The Vision Pro will be more than just a wearable computer with FaceTime and immersive cinema. It’s only a matter of time before we learn that the Vision Pro is the basis for Apple’s version of an augmented and virtual layer over reality. That’s when the Vision Pro will come closer to the potential of the spatial computer of the future.