Vision Pro

Posted on May 14, 2025

A little over a year ago, Apple was about to release the Vision Pro, their first major product launch in a decade. I had recently shut down Fido, my dog training venture, and I decided to make a little toy app for the Vision Pro as a way to keep busy and stimulate new ideas. I decided to call the project DogVision.

The concept for the app was really simple - it would use the device’s camera feed and apply a custom red-green colorblindness filter in real time. The intent was to show humans what the world looks like to their dogs. Dogs are red-green colorblind and, although my research didn’t turn up exactly how the world appears to them, we do know that they see it in blues, yellows, and greys.

The project was one of my first forays into the wonderful world of open source software, since understanding color-blindness filtering algorithms - both from a technical standpoint and as they apply specifically to dogs - was a niche topic on which I struggled to find much literature. I was fortunate to stumble upon this guide by Loren Petrich exploring exactly that. His work was quite dated - as I recall he had written his algorithm in the early 2000s - but through the wonders of the internet and mathematics eternal I had a jumping-off point to do my own work and write my Swift-based version. I’m just realizing this now, but perhaps my fascination with creation began right here, with my first serendipitous experience of being unblocked by the miracle that is open source. Newton’s line about standing on the shoulders of giants took on a new meaning as a result of my DogVision project.
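For the curious, the transform has a fairly standard shape: convert each pixel from RGB into LMS cone space, collapse the red-green axis the way a dichromat's eye would, then convert back. Below is a minimal sketch of that idea in Swift - the matrix values are the commonly circulated Viénot/Brettel-style deuteranopia approximations, included purely for illustration; they are not Petrich's exact numbers and not necessarily what shipped in DogVision.

```swift
import simd

// Linear RGB -> LMS cone response (illustrative values).
let rgbToLMS = simd_float3x3(rows: [
    simd_float3(17.8824,    43.5161,   4.11935),
    simd_float3( 3.45565,   27.1554,   3.86714),
    simd_float3( 0.0299566,  0.184309, 1.46709)
])

// Collapse the red-green (L vs. M) axis: the M response is reconstructed
// from L and S, so reds and greens fall together.
let deuteranopia = simd_float3x3(rows: [
    simd_float3(1.0,      0.0, 0.0),
    simd_float3(0.494207, 0.0, 1.24827),
    simd_float3(0.0,      0.0, 1.0)
])

// LMS -> linear RGB (inverse of the first matrix, illustrative values).
let lmsToRGB = simd_float3x3(rows: [
    simd_float3( 0.0809444479,   -0.130504409,    0.116721066),
    simd_float3(-0.0102485335,    0.0540193266,  -0.113614708),
    simd_float3(-0.000365296938, -0.00412161469,  0.693511405)
])

/// Approximate how a red-green colorblind observer (roughly, a dog)
/// might see a single linear-RGB pixel.
func dogVision(_ rgb: simd_float3) -> simd_float3 {
    let lms = rgbToLMS * rgb
    let filtered = deuteranopia * lms
    let out = lmsToRGB * filtered
    return simd_clamp(out, simd_float3(repeating: 0), simd_float3(repeating: 1))
}
```

In the real app this math wouldn't run in plain Swift, of course - it would live in a Metal or Core Image kernel applied to every pixel of every camera frame - but the per-pixel idea is exactly this small.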

But that’s not what this post is about. This post is about the insanity that is the Vision Pro.

Through the building of DogVision I discovered a fact that took my breath away: Vision Pro developers do not have access to the camera in the SDK.

What. Let me say that again: the Vision Pro, the revolutionary new computing platform created by the largest company in the world, the platform which literally has Vision in the name, does not give developers access to vision.

WHAT??

Excuse me, but what is the point of the Vision Pro if you cannot Vision your way out of the dang ski goggles? The device is, from a reductive viewpoint, a screen projector with a camera on the other end of it. It was introduced and hailed as the device that would open the doors to our augmented reality future. How is that possible if developers don’t have access to a reality to augment?

I have so many questions.

(1) What did Apple think developers were going to do with the device, if they couldn’t access the bridge between their user and the world around them?

(1a) As a corollary to (1), how many floating calculator apps did Apple think the world really needed? Is a developer’s only purpose to create ever more floating screens with no relevance or context to the reality around them?

(2) Was Apple’s vision for the Vision Pro that users would navigate their own personal Distraction Palaces: eternal digital projections of screens of all shapes and sizes and orientations and purposes, to walk through and between and revel in?

(3) WHY? The immediate speculation I got from friends was the knee-jerk response to any question about Apple: “to protect user privacy.” But this makes no sense! How does preventing access to the camera protect user privacy, exactly? Apple has already solved this problem on the iPhone: apps must request the user’s permission to access the camera, and apps must also pass App Store review. Were it not for those safeguards, I could just as easily violate user privacy using the iPhone camera - and yet the iPhone camera is open to developers. (Indeed, on discovering this blocker on the Vision Pro I went ahead and built DogVision for iOS. What a shame DogVision couldn’t be nearly as magical when it was no more than a screen held in your hand.)
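For reference, the entire iOS gate for camera access is tiny - the sketch below assumes an NSCameraUsageDescription purpose string in Info.plist, which is what triggers the system prompt. Nothing about this model seems like it couldn’t have existed on visionOS.

```swift
import AVFoundation

// The user must explicitly grant camera access before an app sees a single
// frame. The system prompt only appears if the app declares a purpose string
// (NSCameraUsageDescription) in its Info.plist, and App Review checks that
// the stated purpose is honest.
AVCaptureDevice.requestAccess(for: .video) { granted in
    guard granted else {
        // No consent, no camera - the feed never reaches the app.
        return
    }
    // Only now is it safe to set up an AVCaptureSession and process frames.
}
```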

(4) Was this decision driven by fear? Fear that developers would build too powerful, or otherwise unbounded, an AR experience if they could access the camera directly rather than through the higher-level, more limited abstraction exposed by ARKit? I don’t see the argument here; it makes no sense to me, and yet I can’t think of another reason.

(5) What did Apple think was going to happen? Did they really think developers would build for a platform that was priced out of reach for 99% of Americans, that faced real drawbacks due to physical constraints, that was at best a beta product, and that, on top of all that, tied both of developers’ hands behind their backs?

(6) Did they expect that developers reduced to creating notification centers, dashboards, and wrappers around existing two-dimensional, screen-based applications would produce a thriving ecosystem? That it would lead to delightful user experiences? That the primordial chaos of enthused early creators building floating screen wrappers would inspire a generation of developers and open the doors to a new computing paradigm?

No wonder the Vision Pro was a total dud. I wanted to believe. I bought a Vision Pro on the day it came out, with the goal of forming my own independent viewpoint on what computing might look like in the future. My conclusion ended up being that the Vision Pro was dead in the water - and I’d reached it before the device was even delivered, in the window between ordering and delivery, while building DogVision.

On the bright side, at least I was able to return my four-thousand-dollar device well within the 30-day return period.