Unauthoritative Pronouncements


Joshing Ya

Joshua trees at dusk.

One feature of the Vision Pro that reviewers and buyers alike seem to really appreciate is the ability to dial in an environment. Casey Liss joked on Mastodon:

How long until a YouTuber goes to Haleakala, Joshua Tree, Yosemite, and Mount Hood, and finds the exact points the Vision Pro environments were captured?

Bonus points for the moon, obviously.

I’m sure someone is going to do it (definitely not the moon part), but the environments aren’t purely photographic things. That’s not a judgement, just that if you got as close as you could to what was depicted you’d still never get it exact, so don’t sweat it.

It does make me think about how much I love the places on the environments list that I have been to. When I did my demo I didn’t really feel like I was in those environments, but that’s only because I’m viewing each environment in the context of my own experience and faulty memories. I’ve been to Haleakalā, Joshua Tree, and Yosemite, but I didn’t feel as direct a connection to those places as I would have liked. The exception was Joshua Tree, which was the closest to matching my feelings about the place, even if I didn’t feel like I ever stood in that exact spot.

It’s like looking at a good publicity photo of places you’ve vacationed as opposed to a personal photo you took, which has attached meaning but isn’t as cleanly executed or specific.

Maybe we should all go to Joshua Tree? I’ve been a lot, but you’re probably some East Coast bum, like Casey, and haven’t really thought about going. Let me give you some travel tips.

Getting to Joshua Tree

The best months to go to Joshua Tree are between October and May. The summers are simply too hot. It can get very, very cold in the winter, especially at night, but it’s much more comfortable than the summer. It can, occasionally, snow, but you really have to race to catch a glimpse because it won’t stick.

Snow! December 2008

Joshua Tree National Park, and the adjacent Yucca Valley, sit above Palm Springs and the rest of the Coachella Valley. You can fly in to Palm Springs and rent a car for the short drive up to Joshua Tree, or you can travel from Ontario (California, not Canada) International Airport, or one of the further Los Angeles-area airports, and sit in a snarl of traffic for hours.

Even if you’re local enough that you’ll want to drive to Joshua Tree, you’re better off not going there and back in the same day. Definitely plan to spend the night in Yucca Valley. There’s no great, big lodge inside of the National Park, like some of the other parks have, but you can camp, if you’re the weird type of person who wants to camp in a desert and use chemical toilets.

A little wooden building with a sign that says Pioneertown Motel.
Pioneertown Motel check-in. January 2023.

Pioneertown Motel is a great place to crash for those funky desert vibes, and it even has Joshua trees on the property. It’s great for stargazing at night because it’s beyond the light pollution (and pollution-pollution) of the LA Metro area.

Night time. A Joshua tree is in the center of frame, with a low motel dotted with orange light sitting in the distance. The sky is dotted with stars and a faint orange glow on the horizon.
Pioneertown Motel at night. January 2022.

Pioneertown itself is a collection of old west facades from when the area was used to make Westerns. There are stores selling various tchotchkes, but it’s not a grandiose theme park.

For food you have The Red Dog Saloon, and Pappy and Harriet’s. Both are in Pioneertown itself, and easy walks. Taking the road back to 29 Palms Highway lands you at Frontier Cafe, which is a good (but potentially busy!) spot to get coffee, breakfast, and sandwiches to take with you to the park, or just stroll around the shops at the highway intersection.

The Park

Arrive at Joshua Tree National Park early and use your National Parks pass to enter. There are three entrances, and only the most popular West Entrance on Quail Springs Road has an entry booth. They usually have one lane set aside for passholders, but when the non-passholder lane backs up, it backs up down the road that leads to the park and you’re just stuck.

The other two park entrances don’t have booths monitoring entry because they’re far less popular, but you must display your pass or pay for entry. Sometimes we’ll start at Cottonwood Springs Road off the 10 and wind through to Quail Springs Road, or vice versa. Usually we’ll just go in and out of the Quail Springs Road entrance since it’s the closest to where we’d be staying, and other services. Those other entrances do have visitor centers.

Cellphone service in the park is almost non-existent except for the area by the West Entrance, and the Quail Springs picnic area, so get your offline maps downloaded.


My boyfriend’s favorite (and punishing) hike is Ryan Mountain Trail. You are completely exposed to the elements, so remember your sunscreen, and windbreakers in the winter. The view from the top is fantastic, but it’ll really wipe you out to get up there. Unless you’re an avid hiker you’ll probably be done for the day after that. This is also a popular trail so you’ll be stopping to let people pass, or listening to assholes lugging Bluetooth speakers on the trail.

A view down into the Park from a rocky, desert peak, looking at the scabrous valley floor below. The sun beats down on it all.
The peak of Ryan Mountain looking back towards the west. You can't even see the Joshua trees from this height, they're just dots. January 2023.
A view down into the Park from a rocky, desert peak, looking at the scabrous valley floor below. The sun beats down on it all. A Joshua tree is in the foreground to the left.
Panorama from most of the way to the peak of Ryan Mountain. November 2020.

My favorite stuff is all the stuff that’s not really a hike. Which is a cop out, I know, but it’s nice to just experience the place without hitting cardio goals. That includes the Cap Rock Nature Trail where you meander around a winding trail that has little placards explaining what various plants are, and there are huge boulders up around you.

The Cap Rock Nature Trail.

The Baker Dam Nature Trail is similar, if a little larger, and includes an old, not-that-impressive dam.

Minerva Hoyt Trail (but mostly just its parking lot) is probably the largest expanse of Joshua trees you’re going to see. You’re just surrounded by the towering things.

Joshua trees at sunset with large boulders behind.
Minerva Hoyt parking lot. January 2020.

Keys View is far out of the way, and really only offers a vantage point looking down at the Coachella Valley and San Jacinto Mountains. That’s not a bad thing, it’s just a long drive and then you’ll get back in the car and drive back the way you came, so you might not be satisfied if you have limited time.

Another quiet spot for a snack, or just a rest, is where Live Oak Road comes off of Park Boulevard (Quail Springs Road turns into Park Boulevard inside the park). There’s nothing impressive here but it’s quiet because most people don’t have a reason to stop here. You might also see some ground squirrels.

My favorite spot in the whole park is the Cholla Cactus Garden. It’s closer to the Cottonwood Springs entrance than the West Entrance so it’s quite a drive, but worth it, especially in the late afternoon when the sun starts to get low, or dips behind the mountains. This spot in the park is covered with cholla cactus (and bees, a lot of bees), which gives you a very different vibe from the towering Joshua trees. I’d love it if Apple captured the Cholla Cactus Garden as an environment for the Vision Pro, but only if you could walk around its little path.

Cholla cactus fill the lower half of frame and spread toward the horizon.
Cholla Cactus Garden. November 2020.

Maybe let us change viewpoints along the path? Like Myst, or the real estate web site version of Myst: Matterport. Anyway, walking through something more human-scale is part of the experience.

Hike. Die. Repeat.

You can spend a few days in Joshua Tree going in and out of the park and vibing to the funky desert. Have a spiritual awakening, or whatever, I don’t care. When you finish you can pop down to Palm Springs for tiki cocktails and a swim before you head home.

I’m not sure it will increase the connection you have when you look at the Joshua Tree environment in visionOS, but it’ll certainly give you a more three-dimensional view of the place.

2024-02-22 10:15:00

Category: text

My Apple Vision Pro Demo Experience

When I was running errands I stopped into an Apple Store and did the Vision Pro demo. I doubt my impressions of a product released weeks ago, and thoroughly reviewed by experienced professionals, are of any interest, but I provide them free of charge for those that have nothing better to read.

As per usual, the flow in the store is really unintuitive. I arrived with my QR code and the greeter sent me to Angela. Angela scanned the QR code and then told me to wait by the MacBook Pros along the wall. That immediate wave of anxiety about being put in an invisible DMV line overcame me, but my wait was really brief. Eric was there to do the demo with me, but Angela was around supporting Eric. Seemed like he was being trained on the process.

Eric handed me an iPhone to do a face scan, and for some reason it didn’t want to read my face when I turned my head to the left unless I moved the iPhone. I’m not sure but I think the backlit store display behind my head might have caused a problem, because every other direction worked both times I did the scan. It did not instill confidence.

Before my appointment I had selected in the app that I wear eyeglasses, and so I presented them to be scanned. My prescription is not especially strong, and I can see fine without my glasses but experience eyestrain over time, so I might as well make sure things are as sharp as sharp can be for the demo. The machine that scanned my eyes produced a QR code. Angela was standing to the side during this, and mentioned that it was so Apple didn’t save or know my prescription. Which I guess is comforting, but seems kind of surface-level privacy if I buy prescription inserts? Honestly, I’m just in awe of the QR codes to get more QR codes to get more QR codes.

Then back to the stools where we sat. Eric made idle chit chat about what made me want to do the demo, then the device was brought out on a tray with the inserts. The insert pairing didn’t happen until after calibration, which I thought was weird, but whatever.

Part of the reason I didn’t want to do this was because the thought of a shared face-product creeped me out, but the device appeared to be clean, and not like one of those abused demo products tethered to a display at a Best Buy where the only concern is theft. I’ll update this if I get pink eye.

Eric told me, and then demonstrated, how to pick up the device with my thumb under the nose bridge, and then four fingers on the top and then you flip it. Which… this is just too precious a gesture for a device with this heft, but when I went to pick it up a finger brushed the light seal too hard and it popped off so I guess those magnets really are as weak as everyone says. We can’t have clips? Tabs? A little space-age velcro? It has to be weaker than those flat, business-card fridge magnets?

I’m not convinced that the face scan picked the right light seal for me (it felt like all the weight was on my brow, and the nose and cheeks kept having light poke through if I raised or furrowed my eyebrows). I tried moving it up and down on my face and tightening the head strap knob. It was never comfortable, but also never felt rock solid on my face. When it was higher on my face things seemed to vibrate because of how insecure the lower portion was; if it was too low on my face it was impossible to see clearly.

The inserts needed to be paired, and there was the calibration stuff, and that was all fine. I had expected the passthrough to be dim, compared to the brightness of the inside of an Apple Store, and it sure was. I didn’t see any video artifacts, but it felt like I had sunglasses on that were tamping down the luminance.

What I thought I was prepared for, and I wasn’t, was the intense tunnel effect. In the video reviews I watched, attempts were made to create an approximation of the binocular-like effect of having one’s field of view constrained, but in my opinion the simulated views have been too generous. What really added to it was the chromatic aberration and how strongly the sharpness fell off from the center. How much of that falloff was from the foveated rendering, and how much was softness from my own vision, or these particular Zeiss inserts, I can’t say. It just wasn’t sharp outside of this cone in the center of my vision.

This was most notable when we got to the part of the demo where Eric had me open Safari and he relayed the scripted response about how sharp and readable the text was, and how it could be used for work … and I did not share these feelings. The text I was looking directly at was clear enough, but three lines up or down was fuzzy, and likewise side-to-side. I never felt like everything was blurry, but it definitely made my view feel smaller than if I were looking at an equally large real-world billboard of text.

The chromatic aberration was very distracting when watching media, but I can’t say how much of that is my sensitivity to such visual phenomena, or if maybe I would get used to any of it.

The Spatial Videos also bothered me, but I know they’ve had rave reviews from almost everyone. I see the shimmer and sizzle in stuff that doesn’t appear to be matching, and it’s not “ghostly” like I’ve seen some describe. That video where the kids are crouched in the grass and we see the underside of the kid’s sneakers had a glow on the sole of the shoes and the grass. According to Eric that was shot with a Vision Pro, which means matching cameras, so I have no good explanation for the artifacts I saw, just that my eye was drawn there. Without applying a rigorous round of tests, I don’t know if the artifacts I felt like I was seeing were in the recorded content, or maybe something else. My first step would be to compare left and right eye views, but alternating blinks weren’t enough in the demo.

Similarly the video shot on the iPhone 15 had a lot of the expected shimmer in bits and pieces throughout. I know that this is unobjectionable to every other human that’s looked through these lenses, so I’m just cursed to notice it.

Mount Hood was impressive, but that ever-present green dot for the Control Center really ruins the mood. Also, unlike the static panoramas, everything that’s moving has an air of artificiality to it where it seems a little too perfect. Better in appearance than a video game, but the kind of thing that underlies a video game environment with looping stuff. I wish that when I moved my head around there was something closer to me that would parallax, so it felt less like a nodal camera move. No matter how much I moved my head it felt like everything was on a dome and all I was doing was rotating at the center of the dome, never moving from that infinite point.

The Super Mario Bros. movie clip in stereo wasn’t a good showcase of stereo or of the cinema viewing experience. It felt like a very large TV 5 feet away from me. I didn’t feel like I was in a theater, and nothing in the clip wowed me. I’m not sure why this particular movie was selected for the demos, as it doesn’t really shine.

My demo did not include the dinosaur. I don’t know why, and didn’t want to derail the scripted experience by asking. Maybe the dinosaur was down for maintenance?

The sizzle reel of immersive content was as impressive as everyone has said, in particular the sharks. I wonder how realistic it is to invest in shooting material like this just for these headsets, but that’s not my problem, and hopefully not a deterrent to Apple recording more of this.

Eric wrapped the demo and started to tell me how to take it off, but I asked if it would be OK to see some of the other environments beforehand. He graciously let me (no one was waiting for a demo Wednesday morning and we’re well beyond the crush of launch day) so I pulled up Haleakalā, Joshua Tree, and Yosemite.

Haleakalā is visually very impressive, but interestingly didn’t make me feel like I felt when I was at Haleakalā many years ago. I had a similar experience with the Yosemite one, but that might have been more seasonal. There was snow in parts of Yosemite on my trip to it, but it wasn’t snow-covered like this. I just didn’t associate it at all with my memory of the place.

Joshua Tree was the most evocative of the real place, and I’ve been there many times, but I couldn’t really get my bearings. It felt like somewhere in the park, for sure, but one of the things I always think about is the road cutting through the park, which is always very visible (Joshua trees do not have lush, view-blocking canopies). It rang the most true, so if I were to pick one to use a minimalist markdown editor in, it would be Joshua Tree.

I took the Vision Pro off like how I had been instructed to put it on, and Eric asked if I was considering buying one or if I was just looking. He didn’t do it in a car salesman kind of way, but the way in which he needed to know if he had to have someone fetch one from the back. I declined, and he offered me the QR code that had the configuration from my demo in case I ever wanted to come back and do another demo, or buy a Vision Pro later. I accepted this final exchange in the QR code ballet and thanked him once more before I headed out.

It’s a very interesting product, and I’m glad that I experienced it first-hand instead of just speculating endlessly about why it wasn’t for me. I don’t envision myself ever purchasing this iteration of the product, but I don’t think anyone’s a fool for buying one if they have the disposable income. Perhaps if Apple adds just one more step in the Apple Store process that uses a QR code I’ll reconsider.

2024-02-21 14:15:00

Category: text

No Simple Answers In Stereo

There was some continued back and forth on Mastodon about stereo conversions. Mac Stories contributor Jonathan Reed asked a couple questions:

What’s your view on the best converted movies? Do you think they hold up just as well vs (non-bad) native 3D movies?

I am not picking on Jonathan, but crediting him with a question that seems very reasonable. It seems logical to ask for an example of what’s working, but that’s much more difficult to do than it sounds. It’s kind of like proving a negative (even though this is a positive?).

If there is a best conversion, you’re unlikely to be aware of it at all, because the audience usually only remembers technical errors, or discomfort. There’s nothing outwardly impressive about a good conversion, or good native stereo, and anything that was held up as a good conversion would be picked apart with intense scrutiny to prove that it’s not actually good.

Add on top of that the point Todd Vaziri and I were trying to make in that thread, and in our feedback to the Accidental Tech Podcast, that there are a variety of methods employed in various shots in various movies. It is not as homogeneous as it appears to be in untrustworthy marketing, or that silly “real vs. fake” site.

There’s no binary bit on the movie that flips if one shot in a native 3D movie is a post-conversion shot, and no threshold for what percentage of shots need to be rendered fully in 3D in an animated feature, or in a blockbuster with 2,000+ effects-heavy shots.

I know that is deeply unsatisfying as an answer, and the follow-up question would be for movies that don’t work well. For professional reasons I wouldn’t ever spell that out.

Ultimately, I know that just saying that it’s nuanced and complicated is not very helpful or informative to people that want to understand stereo. For those that want a quick answer on whether a movie is worth watching in 3D on their Apple Vision Pro, there’s nothing so simple as a list.

The best I can do is talk through common problems in stereo. To do that we need to talk about some terms. To do that I’m going to need to bore the fuck out of you.

Native Stereo Photography

This is shot with two cameras. Usually this involves a cumbersome rig where the cameras have to be exactly aligned, have matching apertures, matching focal distances, and are slightly physically offset. To get them close enough together they’re often arranged with one camera pointing straight up, and it gets its light from a beam splitter: a semi-transparent pane of glass that lets light pass through to the main camera, but also reflects down to the vertical camera. This is an enormous pain in the ass, and it’s very easy to have something be just a little off in a way that won’t be clear until later.

When the stereographer and director finish shooting, they can adjust the convergence by horizontally transforming the photography, which pushes and pulls things into or out of the screen depending on where the left and right eyes converge. However, they can’t adjust the interaxial without throwing away one of the eyes and redoing it with conversion. That means they might be more conservative in all of their choices to reduce the chance that there’s an error.
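
At its core, that convergence adjustment is just a horizontal translation of one eye’s view. Here’s a toy sketch in NumPy on a single-channel image; the function name and the zero-padding at the edges are my illustrative assumptions, not how a finishing pipeline actually handles it:

```python
import numpy as np

def adjust_convergence(eye_image, shift_px):
    """Toy convergence adjustment: translate one eye's image horizontally
    by whole pixels. Shifting the views toward each other pulls the scene
    out of the screen; shifting them apart pushes it back. Edge columns
    are padded with zeros here (in practice they'd be cropped or filled)."""
    out = np.zeros_like(eye_image)
    if shift_px > 0:
        out[:, shift_px:] = eye_image[:, :-shift_px]
    elif shift_px < 0:
        out[:, :shift_px] = eye_image[:, -shift_px:]
    else:
        out[:] = eye_image
    return out
```

Note that this only slides everything uniformly; it can’t change the relative depth between objects, which is why the interaxial is locked in at the shoot.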

Things to look for are misalignments: the left and right eyes having an angular difference between them, or skewing slightly. Your eyes are looking for horizontal disparity, so vertical shifts mess it up a little. This is abundant in iPhone 15 Pro Spatial Videos because of Apple’s attempts to compensate for the mismatched lenses.

Another big thing is color shifts from the beam splitter. Sometimes that could manifest as a constant shift, or it could be transient if the camera rig is moving and the light catches differently. It’s possible to color correct the views to get a closer match but uncorrected differences might appear to shimmer when your brain processes the slightly different hues and values.

Specular reflections. Think of bright pings of light on glass or chrome, often from a distant, but bright light source. One eye might get the ping of light and the other eye doesn’t. A mismatch like that can appear to glow, or shimmer, and could be uncomfortable to look at. To correct for this in native stereo the ping might be artificially copied and offset to the other eye, or the value of the ping might be knocked down so it doesn’t draw attention.

If you have a visual effects shot where native stereo plate photography is combined with rendered assets, you might see issues that you wouldn’t get if it was a post conversion. Like a bluescreen or greenscreen shot where the work done to extract the photographic element from the screen color doesn’t match exactly between the eyes. A common issue is flyaway hairs, those thin wisps of hair that are always difficult; they could be in one eye but not the other, or trimmed in an odd way.

Flyaway hairs in non-VFX native stereo shots should always look pretty good, but depending on how deep the background is behind them you might be surprised to notice them more than you would in a 2D movie.

This doubling of work - and the need for it to match - is also what makes something like wire removal paint much more difficult. It’s easy to make each single view of paint be internally consistent and work, but then to make sure those paint adjustments match between both photographic plates is a pain that you don’t have to deal with in conversion.

It used to be very difficult to get 3D matchmove solves that were rock solid for both eyes, meaning something could appear to float away where native stereo photography and CG met. Very rare, but maybe if you’re watching something old things might seem to drift or breathe.

Another thing 2D VFX artists take for granted is being able to use masks/mattes/rotosplines - and only having to do it once without thinking about where the matte sits in depth. The matte could be used to grade the background, or it could be to help extract a person. Those rotosplines need to be done for two eyes, and they need to match the plate photography, and their companion spline, including motion blur. A soft mask extending back along an angled surface will need to have depth that matches that angle in the other eye. So you end up doing the post conversion kinds of steps on the mattes applied to your native stereo left and right images to make them match and sit in depth, but are constrained to the native stereo plates as well.

Native Stereo Renders

Native stereo renders in animated movies, or for shots in a VFX heavy movie that don’t have photography, have their own pros and cons. Even those “all CG” shots are not always fully rendered in stereo for left and right eyes. The flat version of the movie will be done while the stereo version of the movie lags behind a little bit. That means that rendering the offset eye might reveal issues where an old version of a shader was used, or an asset changed since the original shot completed. It can be much more of a puzzle.

People can also do anything they want with their cameras because they are no longer constrained by physics. That means you either get mind-bending stuff, like things sticking out of the screen that would really be a considerable distance away from the audience, requiring enormous interaxial camera offsets, or sometimes they’ll just make it really flat, even though they have the ability to do whatever they want.

You also still have some of the same issues presented by bright specular pings being in one eye but not the other, plus they might sizzle because bright, distant light sources need more raytracing samples (a tiny thing far away gets more missed rays than hits).

You might be like, so what? Just turn up the samples, right? That’s easier said than done in some cases, especially if the 2D version of the movie is done already, or the rendering engine just can’t resolve some very bright, distant point of light with enough samples that won’t take 3 months to render. The sample noise will sizzle differently between the two eyes and appear to glow. There are ways to cut out pixels from the other eye, or median filter it, or what have you, but if it’s uncorrected you’ll see sizzling pixels.

Native stereo renders do have one fun trick and that’s the depth map (Z) channel that is normally used for depth of field focus effects. It is an image where every pixel corresponds to how far away something is from camera. It can be used to create an exact offset based on the stereo camera pair. This makes it kind of like post conversion, where fake depth is used to offset 2D data from one camera view to the other. That means you can offset things like rotosplines, or other 2D elements, to match the depth of your 3D exactly. I do mean exactly, since it will be at exactly the depth from the depth channel. Effectively like using a projector from the location of your left eye camera, and then viewing it from the location of your right eye camera.
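
A toy version of that depth-driven offset, assuming a simplified parallel-camera relation where disparity is interaxial × focal ÷ depth (the function name, that formula, and the lack of convergence handling and hole-filling are all illustrative assumptions, not any studio’s actual pipeline):

```python
import numpy as np

def offset_by_depth(left, depth, interaxial, focal):
    """Toy sketch: build a right-eye view by offsetting left-eye pixels
    using a depth (Z) map. Nearer pixels (smaller depth) get a larger
    horizontal disparity, so they shift further between the eyes."""
    h, w = depth.shape
    right = np.zeros_like(left)
    disparity = np.round(interaxial * focal / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                # Pixels nothing lands on stay empty: the occluded areas
                # that paint/patching has to fill in a real conversion.
                right[y, nx] = left[y, x]
    return right
```

The empty pixels this leaves behind are exactly the occlusion holes the post conversion paint work exists to fill.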

This also means that parts of the render from the left or the right eye can be offset by the depth data to patch or supplement renders from the other eye. Think of it like sneaking in a little conversion. This can save render time, and help with various problems matching the eyes.

It can be as specific as using a render for parts of a character (eyes, fur, screen-right edges), or parts of lighting components (just the specular, just the reflection, refraction, or just the diffuse).

To a purist, it might sound like anathema to mix and match, because a purist would assume that the highest quality comes from matching renders. Really, most people would fail a Pepsi challenge on fully rendered shots vs. hybridized shots. The philosophical concerns don’t matter as much as the final set of images being coherent.

For this reason it is absolute bunk to call all animated movies “real” 3D, or to claim from your seat in the audience what’s rendered from scratch and what’s not.

Post Conversion

Conversions are popular because they require less time on set, use more flexible camera setups, and cause fewer problems for the crew that’s mainly concerned with the 2D version of the movie. That also means they can adjust the depth of everything ad infinitum. That can mean a more creative and thoughtful use of stereo, because they can evaluate the results and change them in a way they can’t do easily on set, where they are more likely to be conservative, or stuck with what they shot.

Conversions are also associated with people looking to make a quick buck on ticket sales, and reducing labor costs on the conversion to get as much profit as possible.

That means it’s likely you’ll see the places where conversions fall apart because of time and budget constraints, which was very common in earlier post conversions when studio execs felt like they needed to rush. You might recall movies where only part of the film was in stereo, and they wanted you to put on or take off your glasses in the theater.

There was also the quality issue from the assumption that people were going to watch these in theaters, where they couldn’t hit pause. The home video part never panned out - but maybe it will with products like the Apple Vision Pro.

Major issues stem from the approach a conversion house takes when presented with 2D footage. Most places will create a 3D space in the computer, and a camera, in order to “accurately” produce the offset eye. I’ve heard of some places where people just cut out and move stuff around to wherever it feels right for them, or use image-based algorithms to create a fake depth map to drive the stereo offset, but the map might have holes and errors where the algorithm guessed wrong. I’ve never worked at a place that did these things, so I don’t have insight to share about their thought process; let’s move on to placed-in-3D stuff.

To do that, matchmove needs to be done where the camera is solved for, and elements in the shot get rough geometry. Since this often needs to happen for the 2D VFX portion of a movie, this is considered some synergistic cost savings. The plate can be projected onto that rough geometry in a 3D software package, and then an offset camera can be dialed for the interaxial and convergence values that feel right for the shot, and in the context of the sequence. That projection onto the hard geo is really just to dial things; the geo won’t be used raw, it’ll be cut by rotos, blurred in the depth channel, etc. to make something that’s softer than the hard facets of rough geo.

The photography does get rotoed; however, only one eye needs to be rotoed, not a complicated matching pair. The photography also needs some degree of paint work to be done to it to clean up the area that was occluded by the foreground. This can be as simple as painting out a sliver, or halo, around where a character occludes the background, or it can be a more extensive affair.

That means the same paint needs to be used in both eyes to account for any minor variance between the paint and the original plate. While the audience could never tell that the background was painted (really, I absolutely promise you can’t tell because shit is painted all the time in regular-old-vanilla shots and people truly don’t have an inkling) the audience can tell if there’s only paint in one eye’s view for the same reason as it would be a problem to have mismatched paint in native stereo.

Paint work includes things like flyaway hairs, which will be painted out and then rotoed, or luminance keyed, to bring them back. They will match exactly between the two eyes, unlike mismatched keys of native stereo, but they will need to be placed in depth.

If you want to tell anything about the quality of a conversion, look for those flyaway hairs. They should be there, and they should also be at a sensible depth relative to the rest of the hair, not way behind or in front of the actor.

The actor should have internal depth, which is usually derived from the rough matchmove geometry. They should have a nose past their eyes, and their ears and neck should be back. They should never feel like a cardboard cut-out unless they are far away from the camera, like background actors, or a really wide shot of them in an environment.

Speaking of environments, the two biggest problems there are highly reflective and refractive surfaces. If there’s a shop window, with reflections, and the name of the shop painted on the glass, the reflections should not be at the depth we see through the window or they will look like they’re at the depth of the walls and surfaces inside the shop. They need to be at the depth of whatever they’re reflecting. That means the reflections must be painted out, along with the lettering on the glass. The lettering needs to then be placed over the shop interior at the depth of where the glass plane is on the facade, and then the reflections need to be added (reflections are additive, but that is a rant for another day). Then that reconstructed window is used for both the left and right eye. No, you won’t be able to tell that work was done because you, in the audience, don’t have the 2D version of the movie to look at and compare it to, and the work will all be so internally consistent that it wouldn’t register for you to check without the knowledge that this kind of work needed to happen. In the abstract this knowledge might cause philosophical conflict — unclean! Impure! But I assure you the director isn’t anywhere near as precious about this as you might think.
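The reconstructed-window recipe above boils down to a standard "over" composite plus an additive pass. Here's a toy numpy sketch of that math; the function and parameter names are my own for illustration, not anything from a real compositing package:

```python
import numpy as np

def rebuild_window(interior, lettering, letter_matte, reflection):
    """Reconstruct a shop window so the same element works in both eyes.

    The painted-out lettering is comped back over the interior at the
    depth of the glass plane (a standard matte-weighted 'over'), and the
    painted-out reflections are then *added* on top, since reflections
    are additive light, not an occluding surface.
    """
    base = lettering * letter_matte + interior * (1.0 - letter_matte)
    return base + reflection
```

With a zero matte the lettering drops out entirely and you're left with the interior plus the additive reflection pass, which is why the reflection can be placed at whatever depth it needs independently of the glass.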

If a conversion house omits that level of work, and just lets the shop window be flat to the depth of the building facade, or lets everything in the window go deeper, including the reflections and lettering, then it’s going to look wrong to any casual viewer.

This can also be applied to things like shiny cars, or reflective bodies of water.

As for refraction, that will be most obvious with things like thick, curved glass, and glassware filled with liquid: bottles, wine glasses, thick reading glasses, etc. The edges of the glass, where the index of refraction creates that defined shape that's almost solid, should be at the depth of the glass in 3D space. The interior core of the glass, where you see the bent light of the objects behind it, should be closer to the depth of those objects (accounting for any magnification). Then there should be an artful blend from that edge depth to that core depth in whatever fake depth channel is being used. Anything like reflections should be painted out and added on top, like the shop window.

What you do not want is for the glass object to feel like everything inside of it is at the depth of the glass surface. It will look painted on, not like you’re seeing through the glass.
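That "artful blend" from edge depth to core depth can be sketched as a simple falloff in the fake depth channel. This is a minimal illustration under my own assumptions (the names, the smoothstep falloff, and the pixel-distance input are all mine), not how any particular conversion house builds its depth maps:

```python
import numpy as np

def glass_depth(edge_dist, glass_depth_m, bg_depth_m, blend_px=12.0):
    """Fake depth channel for a refractive object.

    edge_dist: per-pixel distance (in pixels) from the glass silhouette.
    Pixels on the edge get the glass's own depth; pixels deeper into the
    core ease toward the depth of whatever is seen through the glass.
    """
    t = np.clip(edge_dist / blend_px, 0.0, 1.0)  # 0 at the edge, 1 in the core
    t = t * t * (3.0 - 2.0 * t)                  # smoothstep, so the blend has no hard seam
    return (1.0 - t) * glass_depth_m + t * bg_depth_m
```

The point of the smoothstep is exactly the failure mode described above: a hard step between edge depth and core depth reads as a painted-on surface, while a soft ramp reads as looking through glass.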

This also goes for lens flares, which are reflections and refractions from light hitting the lens element at certain angles and then the filmback. The lens flare needs to be painted out and reconstructed exactly, then the source of the flare needs to be offset to match the location of the light source, and then the little bits of the lens flare need to be offset based on where the light source moved to in relation to the center of frame, which would be the center of the lens. Oftentimes a lens flare plugin in a compositing package will be used to help replicate the original flare, or at least used as a guide for placement.
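The geometry of that flare offset is simple: the ghost elements of a flare sit along the line from the light source through the optical center of frame, so when the source moves for the offset eye, every ghost slides along that line. A toy sketch of the placement math (the function is hypothetical, just to show the relationship):

```python
def flare_ghost_positions(light_xy, center_xy, fractions):
    """Place lens-flare ghosts along the line from the light source
    through the optical center (center of frame). Each ghost sits at a
    fixed signed fraction of the source-to-center vector, so moving the
    source for the offset eye repositions every ghost consistently."""
    lx, ly = light_xy
    cx, cy = center_xy
    return [(cx + f * (lx - cx), cy + f * (ly - cy)) for f in fractions]
```

A fraction of 1.0 is the source itself, and negative fractions land on the far side of frame center, which is why flare ghosts appear to mirror the light across the middle of the image.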

This leaves other camera-based effects, like grain and heavy vignetting. The entire plate needs to be degrained as step 0 in this process, and then regrained, taking into account any extra reconstruction work, and also offsetting the grain timing for the offset eye. You should never have a stereo offset in your grain (meaning the same pattern reproduced and moved in X) because that puts the grain in depth. If you leave the original grain on your left and right eyes, and do your offsets, then your grain will be painted onto the depth of all the surfaces you reconstructed. That's extremely bad, and extremely obvious when it happens.

Grain should be offset in time (effectively a randomized noise seed) so there is never a matching pattern your brain will try to place in depth. The result is a fuzz that exists around screen depth. Your eye doesn't really perceive it as having any depth, unless the grain is heavy and almost takes on the quality of an atmosphere that flattens things, in which case the decision may be made to reduce the grain for both eyes.
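The "randomized seed per eye" idea can be sketched in a few lines of numpy. This is a toy, not pipeline code; the seeding scheme and names are my own. The key is that each eye gets an uncorrelated pattern, rather than one pattern shifted in X:

```python
import numpy as np

def regrain(frame, eye, frame_num, strength=0.03):
    """Regrain one eye with a per-eye, per-frame noise seed.

    Independent seeds keep the two eyes' grain patterns uncorrelated, so
    the brain can't fuse them into a depth. The wrong approach is to
    reuse one pattern and slide it in X, which places the grain at a
    specific depth like any other stereo offset.
    """
    seed = frame_num * 2 + (0 if eye == "left" else 1)
    rng = np.random.default_rng(seed)
    grain = rng.normal(0.0, strength, size=frame.shape)
    return frame + grain
```

Because the seed is deterministic per eye and frame, the grain is reproducible across renders, but the left and right patterns never match, which is exactly the screen-depth "fuzz" described above.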

Usually people can get away without treating vignetting, unless it's heavy (the real artsy stuff). Then the conversion house needs to remove the original vignetting and add it back at screen depth (no offset), with everything else in stereo placed behind it. You don't want something popping out through vignetting; that doesn't make any sense.

The really good news is that because the same greenscreen and bluescreen keys are used for both eyes, the edges don't get screwy, and any combination with CG can be matched exactly, because everything can be placed together in the same shared space. When done well, it helps the director shoot how they're comfortable shooting, and get the results they want for both 2D and 3D.


Really, it doesn't do any good to make sweeping statements about quality based on method, especially after taking into account that films will blend various methods in ways that are often invisible to you.

Ideological purity really doesn't exist in either the realm of home video or stereo video, so try not to get too wound up about it. Always try to watch the best version of something that you can, and that suits your current situation, but don't get yourself upset about something in the abstract.

If you really want to understand the quality of the 3D work, inspect those common problem spots I mentioned. Pause your movie and alternately open and close your left and right eyes. Look at the refractions, the reflections, the flyaway hairs.

Separately, judge whether it was worth seeing it in 3D at all. Did that add to your experience for this particular film? Was anything about it essential, or memorable? People talk about the 3D of the Avatar movies because James Cameron made it a part of the experience, not just because the native stereo checkbox was ticked.

No one is under any obligation to like 3D movies whatsoever, but it’s important that we don’t justify or define that dislike based on a simple binary that isn’t true.

2024-02-19 17:50:00

Category: text

Vision Pro and the Challenge of 3D Movies ►

Last week the Accidental Tech Podcast talked about 3D movies in episode 573, and I sent in some feedback, which was mentioned in episode 574 this week. Part of that feedback was a member’s post I wrote for Six Colors last Summer when people were very excited about 3D movies on the Apple Vision Pro. That post is unlocked now, so I encourage anyone interested in posting opinions about 3D movies to read it and it’ll (hopefully) help people craft some more informed opinions.

I still see people use "real" or "fake" 3D to preemptively decide on the quality of a 3D movie. It feels very comfortable, in a First Principles kind of way. That's a false comfort, though, because nothing is that neat and tidy. If you won't listen to me, listen to Academy of Motion Picture Arts and Sciences member Todd Vaziri's feedback in that same ATP 574 episode.

I also have a related post dissecting some Spatial Videos, with a video walkthrough, that is still behind the Six Colors membership paywall. I encourage people to subscribe for all the member benefits that have nothing to do with me, but I help out too.

2024-02-15 12:00:00

Category: text

Apple News You Can’t Use

a screenshot of an Apple survey about News+ asking the person taking the survey to be as specific as possible.

I received an email survey last week for Apple News+ and decided to throw Apple a bone by filling it out as best I could. Apple wanted my opinion, as a former Apple News+ subscriber, but I'm uncertain how receptive Apple is to my complaints, as I have filled out similar surveys before. Also, the survey was weirdly focused on making sure I knew about audio versions of articles, and on asking why I wasn't using them. The answer was that I have no personal interest in such a thing. I don't begrudge anyone that, especially for increased accessibility, but I would appreciate more care in the news-reading portion of the news-reading app. Beware if you're deeply invested in audio articles, because the whole survey might just be a pretense to scale them back rather than actually fix anything.

I might as well repurpose my essay to the void on my essay-to-the-void blog, and include some of the screenshots I’ve collected.

Apple News+'s problems start with Apple News as an app. The page layout is both cramped and light on all the relevant details. Headlines get awkwardly cropped, and the first impression of the app is always the front page of a newspaper where an editorial team has selected relevant stories for a mass market from a range of national publications, severing the pieces from any wider context within those publications, while placing them next to each other to show that Apple News is impartial.

This pseudo-objective viewpoint includes articles from outlets that I have blocked because I don’t consider the outlet impartial or worthy of my time or monetized clicks. The gray round rect telling me I blocked something, but still spelling out the headline, is entirely irrelevant to me, as an individual. That the round rect exists as a placeholder to satisfy the human doing the layout is utterly irrelevant and personally insulting.

I know that it's possible to live in an information bubble by only consuming news from sources I tend to ideologically agree with, but there isn't a pretend middle ground where I am going to read Fox News. This is not a show-me-less, a thumbs down, or a sad-face emoji; this is blocking. If I banish something, never show it to me.

This happens in all the editorial, human-curated sections. Apple News, and thus News+’s, inability to properly surface things of interest to me, or speak with a genuine editorial voice (like my local newspaper) makes for a pretty shitty browsing experience. The LA Times in the LA Times app is better than the LA Times in Apple News, and anyone who thinks they’re getting the fullness of a publication from their single Apple News+ subscription is absolutely not. That includes things like layouts, special reporting, recipes, etc.

Additionally, ads are also part of the Apple News app layout, and the quality of the advertising is lowest common denominator bullshit that might as well be Taboola ads.

an ad for an obvious scam.
At least no one put an obvious scam right on the 'front page' of this newspaper.
a Yahoo ad for senior living.
Did Don Draper write that?

There's a magazine reading experience buried in here, from the origins of Apple News+, that isn't bad, but the app does so little to surface anything buried in those periodicals. I know because my most recent dalliance with Apple News+ was to subscribe for one month to read Tim Alberta letting Chris Licht destroy himself. It was worth every penny, but then there was no foothold from that article to anything else, despite my best efforts to sift through the round rects.

The main interface relies on the internet-published articles from these magazines, which are more about securing clicks than actually being as interesting as the published magazine pieces.

To an untrained eye this might look like a mess, but clearly the enforcement of a rigid round-rect system makes the whole thing readable, and turns all the articles and advertorials from The Penny Saver into tasteful treasures.

There are no real sections to navigate (other than “Sports” because just like an underfunded public high school, or anything else in American life, we always need to highlight sports first and foremost). You can just scroll forever in the app until maybe you wind up in a pool of articles that might be grouped together a certain way. I can choose to follow those sections by hitting “+”, but have no ability to navigate quickly to them, unlike a real newspaper, or my local newspaper’s app. The exception being favorites, which typically show up toward the top. In the “Following” tab you can find any of those sections you hit “+” on, but they’re separate from the favorites.

There’s no RSS support to follow topics from writers with blogs that I appreciate to allow me to aggregate what I want to read. Apple made a terrible tool for importing RSS feeds - I know, I used it for this very blog! And it is marvelously unloved by the Apple News team. Their preference is for a totally different markup that someone with a blog really isn’t going to spend time supporting.

That includes things like small-scale, subscriber-oriented publications that would never be included in News+, like Six Colors, which I contribute to, or the wave of newsletters that have proliferated in the last few years.

Perversely, the larger publications with the staff to make the effort to be in News and News+ seem to spend less effort on their articles. Apple News doesn't appear to expend effort to suppress listicles and other lazy writing. Publishers are seemingly rewarded for it, based on the sheer volume of it in the interface. Take Merlin Mann's struggle to find quality fashion articles.

If the thumbs up/down attached to articles does anything it is completely opaque to me. If the solution to me seeing too much lazy writing is to vote and see less of it, then consider showing me less of it! Someone should make sure those buttons are actually connected to something.

Occasionally, the entire interface is eclipsed by a single suggested article that’s nominally for you and you’ll wonder what miscommunication occurred to invite this into your life. Sometimes the suggestions are even from Siri, but not always, which raises questions around how the suggestions are derived, and why it’s necessary to occasionally brand them.

Siri "suggesting" I read an article from Fortune about AI's power and nuance is a little on the nose.
I don't know what I've done to give Apple News the impression that US Weekly is appealing, and I don't know if anyone human bothered to check if the article had a picture of Scarlet Johansson instead of Millie Bobby Brown.

Assuming I survive running the gauntlet that is the Apple News app, and I want to share an article quickly and easily with other people, I can't even do that. Apple wants to hijack that relationship and would prefer I send News/News+ links. For Apple, the important thing is to grow the number of people in the News app, and grow News+ subscribers, but for me the important thing is that someone can read what I sent them to read.

The only real “advantage” of subscribing to News+ is that Apple stops bothering me to subscribe to News+. Suppressing solicitation is not a selling point.

2024-02-14 14:20:00

Category: text

Pick Your Poisonous App Store

The stuff going on with Apple’s iOS App Store and the European Union is going to be in the news for a good, long while, and I don’t think there’s any need to rush to have some hardline opinion about whether any of this is good, bad, goes too far, doesn’t go far enough, etc. What I do want to examine in this post are certain concerns that I have, not about safety, but about customer experience.

Apple will continue to employ a full throated campaign about how this is less safe, and introduces more risk. Scare sheets will continue to strike fear into the hearts of mortals. This app may kill you. All of which range from necessary, to numbing, to nuisance when they’re thrown in front of customers. Ironically, not so different from the Mac.

We have a wide variety of options when it comes to how we choose to pay for and download applications on the Mac. In fact, I would wager that we wouldn’t be celebrating the 40th anniversary of the Mac if people weren’t able to install apps, and run their own software, on top of an ever-changing set of system software that prioritized being able to transition old software to new Mac operating systems.

We’ve never had that in iOS, as customers, but the iPhone’s still been around for ~17 years, and ~16 years for the App Store. We’re all more than used to our walled garden, even though every customer has some opinions on what could be done to jazz up the ol’ garden.

The App Store is not a wholly negative construct. It does offer some degree of protection, and trust. I’m more likely to buy a small utility app when I know the transaction is safeguarded by Apple, for example. It even makes downloading, installing, and updating, pretty easy. Recurring subscriptions can be easily managed, which is useful when toggling on and off various streaming services without having to call a person that tries to talk you out of cancelling, or god forbid mail a letter to cancel.

The policies surrounding how Apple would like to have a cut of everything have made the experience of using the App Store worse for customers. Companies don't want to pay Apple a large percentage of their revenue, which means the way users interact with those apps sucks, and it's degraded over time.

I’m a little uncertain how Apple’s changes to the EU iOS App Store will ultimately improve the customer experience, and maybe some of that was a conscious choice by Apple.

For example, there are going to be custom app stores, with proven lines of credit, which means only big players can enter the market. That’s good for customers because they’re less likely to be hit by fly-by-night operations.

However, that means the likes of Meta, Amazon, and Google, at the very least. Customers trade interacting with one tech giant for another. Cuts on certain app stores will be reduced, and certain things will be cheaper either across the board or in certain stores, but the tech giants will use their own apps, and web sites, to steer customers. They already do it in iOS.

Ever click on a link in the YouTube app and you get a modal pop-up asking if you want to open the link in Chrome for iOS? Or visit the Google home page and have it throw up a modal overlay asking you to try the Google iOS app? How about links in Meta products like Instagram, or Threads, that want to keep you in the Meta products to track those taps? Ever play that fun game where you try to buy a Kindle eBook and you’re bounced from the Amazon website to the app and then back to the website because they can’t sell you the eBook in the app? What if that just told you to use the Amazon App Store, oh and here’s a discount on your first X number of purchases?

A screenshot of Safari on iOS showing the Google homepage with a modal overlay asking the user to try the iOS Google app.
Just imagine this, but for app stores.

There's going to be a huge land grab by every one of these already monstrous companies. Sure, there's no other way to have competition without it, but it doesn't foster the kind of competition that distributes power and wealth to a wider array of smaller developers. We'll just get to pick our poison — or, more accurately, people in the EU will get to pick their poison. We get to envy their poison-picking from Freedom Fry Land.

It seems pretty obvious that Apple's attempts to contain the concessions they're making in various markets to just those markets are not in any way a permanent solution. I do hope they have a better plan than this in mind for the rest of the globe, or are at least planning to learn some lessons from this rollout in the EU, because I don't want everything I do on iOS to be all the big tech companies shouting MAKE US YOUR DEFAULT APP STORE in my face in all the apps and websites that are, for a variety of reasons, indispensable.

2024-01-26 12:00:00

Category: text

Upgrade 496: 40th Anniversary of the Mac Draft ►

Obviously, this is a very entertaining podcast episode. I really enjoyed listening to it as a fun way to traverse the history of the Mac without being a dry encyclopedia entry. For fun, I’ve “played along” with my own answers below.

The First Mac We (I) Ever Bought

My mom’s first Mac was a used Mac Plus she got from my grandfather, with an HD20 external SCSI Hard Drive that had, you guessed it, 20 MB of disk space. It ran System 6.0.8 that was installed via 1 million floppy disks. I was a kid, so one of the few activities I had available to me was to mess up something and reinstall the OS.

It was my first Mac, but not the first one she bought or the first one I bought. It was underpowered for her work, so she bought a Macintosh Quadra (audience leans forward) 605 (audience is disappointed). The Quadra 605 was her computer, and the Mac Plus became our family computer. We would work on school papers in ClarisWorks 2.0 (or sometimes I would just spend time making various gradients because the black and white dithering was fascinating). Then the Quadra 605 became the family computer, and it limped along with a SupraExpress 33.6 modem, and AOL 2.5, into the internet age before being discarded for a Compaq Presario mini-tower thingy, and an old 486DX that my grandfather gave us. I bought an old Quadra 700 on eBay at one point thinking that I would… do something with it, and then a Performa 6-something because it was my first Mac with a CD-ROM drive, but it was ultimately a useless waste of money. Regrettably, I told my mom to donate them or something when she moved, which was incredibly short-sighted of me if you see the prices Quadras go for on eBay these days.

My real first Mac that I bought with my own adult money, to do real stuff was a brand-new MacBook Pro 15-inch in July of 2007, and I loved it. I still have that one, and fired it up this summer to write about Shake.

Favorite/Best Mac Model Ever

This is a tough category because I had far fewer Macs in my possession than any of the panelists on Upgrade. There were Macs that I wanted a lot — especially in the late 90s. The one that I really wanted was that very first G4 tower. The styling introduced with the G3 tower was great, and the first G4 was a pretty refined take on that. I thought that the later G4s got progressively sillier in appearance, even though they were more powerful, and the cheese grater G5s, and later Mac Pros were a little too serious compared to the flowing lines and materials of the G4. We even had a couple of those G4s in the yearbook computer lab, so I knew first-hand that they were pretty great, and thus, my favorite.

For best Mac, I would have to go with the brand-new M3 MacBook Pro 16", which can just do everything. If price were no object, I would have one right now. It's tough to pick between favorite and best here, but I'll stick with best.

Favorite/Best Mac Software

This is where I was sniped by Gruber with Safari, and Dan with Terminal. I won’t cop out and pick one favorite, and one best, like I did above. Instead, I’m picking QuickTime 7 Pro. No, not QuickTime non-Pro, or QuickTime X, but genuine QuickTime 7 Pro. Accept no substitutes (because Apple never made one!)

It was a transformative media framework, application, editor —everything. QuickTime X never matched it, even though it has some nice stuff too, and a lot less brushed metal.

Favorite/Best Mac Accessory or Hardware Feature

AirPort Extreme! The flat fifth generation one (not the ugly, tall one) was a transformative product for me. Every piece of Wi-Fi equipment I’ve ever owned before, or since, has been a flaky piece of shit. It was so friendly, and easy to use. I could just plug in my printer and leave it very far away from me, where it belonged.

Hall of Shame

Obviously, I would have picked the butterfly keyboard, as Dan did, but I’ll go with my second choice: Apple buying and killing Shake. That is some very niche shame, especially compared to the keyboard, but it was software that I used for my actual job. They didn’t know what to do with it, and botched that, permanently changing the software landscape for the VFX industry. (Jim Gaffigan Voice: Wow, he really talks a lot about Shake. Is he OK?)

2024-01-24 18:15:00

Category: text

I Made This ►

John Siracusa has a good post on his blog summarizing some of the concerns regarding generative AI. There’s a lot of other kinds of “AI” in the news these days, but this is specifically about material wholly generated from text based prompts, using computer models trained on other images.

This question is at the emotional, ethical (and possibly legal) heart of the generative AI debate. I’m reminded of the well-known web comic in which one person hands something to another and says, “I made this.” The recipient accepts the item, saying “You made this?” The recipient then holds the item silently for a moment while the person who gave them the item departs. In the final frame of the comic, the recipient stands alone holding the item and says, “I made this.”

This comic resonates with people for many reasons. To me, the key is the second frame in which the recipient holds the item alone. It’s in that moment that possession of the item convinces the person that they own it. After all, they’re holding it. It’s theirs! And if they own it, and no one else is around, then they must have created it!

The act of creation is a tricky thing to pin down with people, because someone may not realize the ways in which they were influenced by something they saw before. It ought to, theoretically, be easier to pin down sources (the training data) a model uses, but the people making the popular models right now would really prefer if you didn’t.

If you feed a prompt into a model, post the result as your own, and then get a cease and desist letter in the mail, you suddenly flip-flop on who's responsible for the image. Instead of you creating AI art with your carefully crafted prompt, the infringing work is the fault of an opaque data set you couldn't possibly be held liable for.

It’s also not just copyright you have to worry about, but using bits and pieces of images from a dataset including CSAM, or other morally repugnant things you would not consciously include in your work. There are people who are looking at those things, for $2.20 an hour.

Do all the people that used Stable Diffusion during the time it had a tainted dataset need to post a disclaimer on the image they’re claiming ownership of? No, no one’s going to do that, because that’s not their fault.

The Gray Goo

The thing that really raises my alarms is how companies are training people to use generative AI to fill their social networks, publications, and sites with untraceable, generated images. In fact, most of this post was originally part of a draft titled 'Gray Goo' that I started in December, when companies were bragging about their end-of-year AI progress, such as this boastful post from Meta.

In science fiction, the concept of gray goo exists to describe a hypothetical scenario in which self-replicating machinery outcompetes and replaces organic life. Generative AI models are not currently self-replicating, but we are feeding generative AI output into other algorithms, like image search results, or social network feeds.

If you look at the Explore tab on Instagram you have an algorithmic feed of thumbnails containing images. If you tap through into those you might notice that a few of the accounts posting the images are aggregators. They say things like “DM for credit or removal” as if the person who posted it didn’t know how they got their hands on the image in question.

These are usually topic based, like certain dog or cat breeds, desk setups, architecture, film photography, or whatever. The aggregators can also have links in their profile to shirts, or other merchandise or services, that they are selling to collect money for their hard work in reposting other people’s stuff.

Spend some time looking around and you’ll also see some Midjourney or Dall-E-looking images that may or may not have #AI on the post, or have already been laundered through one of those aggregators and contain no disclaimer or sourcing. (A word of warning, the #AI hashtag on Instagram is seemingly unmoderated and contains some anatomically questionable material. (No, I’m not just talking about fingers.))

The ultimate goal of these platforms is to have stuff, and not just any stuff, but filler between ads that they serve. People can see and mimic popular posts, and memes, on social platforms, but that still requires labor. Reduce the labor and you can increase the amount of new filler. That includes different variations on filler to fit new formats, like image tools to make horizontal aspect images vertical to increase Stories, or add animation to increase the amount of video filler, which means more video ads.

This also increases the number of people that wouldn’t feel guilty for uploading some untraceable generated stuff, because they are not knowingly copying anyone, like those aggregators are. They get the serotonin without the fuss, or the guilt.

So, it’s not just “I made this” of the user and their generated image, but “I made this” of the companies who want to sell ads against the endless stream of “user generated” images that they have safe harbor protections from. After all, if there is a problem with an image, there are existing moderation tools (that mostly don’t work, but whatever!) to deal with moderating things individual humans need to actively file objections to.

Consider why companies want you to use generative AI tools. It’s not altruism about democratizing expensive software.

Unique vs Ubiquitous

You can’t really put the genie back in the bottle here, but you can regulate the fuck out of that genie’s datasets, which will make it much less attractive to exploit people. Back to John:

In its current state, generative AI breaks the value chain between creators and consumers. We don’t have to reconnect it in exactly the same way it was connected before, but we also can’t just leave it dangling. The historical practice of conferring ownership based on the act of creation still seems sound, but that means we must be able to unambiguously identify that act.

There is also the possibility that people will lose interest in ubiquitous prompt-based tools because the work produced may be less special, or unique. That might also lead to a shift away from prompts that generate the whole enchilada to more of the tools that assist in editing and image manipulation that’s more human-centric.

Art is often a reaction to current trends in art. Everyone is painting realistic stuff? Let’s paint surrealistic stuff! Photography is clean, and crisp, and digital? Here’s the grainiest film you’ve ever seen in your life!

If the datasets can’t absorb current trends, or changing tastes, then they can’t easily be used for sole authorship of unique works that ultimately need to appeal to humans, not models.

Art will, uh, find a way.

2024-01-11 16:45:00

Category: text

I Love Murderbot

Like I mentioned in a previous post, I’ve been trying to read more books instead of doomscrolling, and one of those books was the lauded All Systems Red, the first novella in The Murderbot Diaries book series by Martha Wells.

I was instantly hooked from the first paragraph and had trouble putting the book down to go to sleep. The protagonist, Murderbot, feels like a character that was written specifically to appeal to me. Its point of view is drenched in sarcasm, and it's bitter but completely resigned to doing its job, no matter how much it says it doesn't want to actually do it.

“Can I ask you a question?” I never know how to answer this. Should I go with my first impulse, which is always “no” or just give in to the inevitable?

I know, how could something like that appeal to me?

The first novella is a complete story, so there’s no worry about a cliffhanger, or things being held for the next book. I was a little reluctant to commit to a series that was going to dole out stuff over many books, but that’s not what this is, it’s much more episodic. Which makes sense because there will be a TV series based on the books.

The novellas do give way to novels, but they’re still just as interesting, and don’t slow their pace to stretch out and fill the format.

Anyway, I read the full series in nine days, which isn’t a record for a voracious reader, but is a record for me. Then I wept, for I had no more Murderbot books to read.

2024-01-10 11:30:00

Category: text

Give Me Back the Watch Control Center

When Apple announced that swiping up in watchOS would bring up a new Smart Stack interface instead of the Control Center, I shrugged. I really didn’t think it would be a big deal. People complained about it in the beta, and it just seemed like the kind of thing that requires a certain period to adjust to and then we’d forget it used to be different. Boy was I ever wrong about that.

The hold-down-to-change-watch-faces thing lasted from September 10th to November 15th, so clearly that one didn’t go over well. Moving the Control Center to the side button persisted, without any preference or option to revert. Maybe fewer people were pissed off about it? Maybe management just believed in the usefulness of Smart Stack so much that they were unwilling to give up as easily?

However, last night, when I was on a plane flying to California from Florida, everything went into Sleep mode because my devices were still on EDT, not the PDT we were barreling toward. My Face-ID-With-Mask-With-Watch-Assist stopped working, and the phone told me that I would need to take the Watch out of Sleep mode. I did the thing that I have done for years, and years, and swiped up to … oh right, this is the Smart Stack.

Then there's the delay to dismiss it before pressing the side button, which requires the application of actual force, not just a swipe, and is more comfortable to do with thumb and index finger.

I was agitated by something that was arbitrarily made worse. I didn’t feel like a fool for doing the wrong thing, or blame myself in any way, because even if I pushed the button first I still would have been irked.

The Not-So-Smart Stack

Perhaps, what’s so frustrating to me is that the Smart Stack is so useless to me, specifically. I can’t speak for everyone else, but all of the times I’ve accidentally opened this treasure trove of irrelevance it’s displayed the day of the week, the month, the date, the goddamn time, and then a card that’s either a snippet of my almost entirely empty calendar, or the weather. All of this information (except my mostly empty calendar) is better laid out in my Modular watch face, using complications.

A screenshot of the Apple Watch Siri card stack. It shows the date, the time, and “Mon, Jan 8 No events today. Your day is clear”
The time and date on a watch? Groundbreaking.

I can add and remove the widgets/cards from the Smart Stack, just like the significantly more useful Smart Stack in iOS, but I can't replace the useless date and time stuff. Another thing that makes its iOS counterpart more useful is that it doesn't reshuffle the cards, so I can remember which way to swipe on an iOS Smart Stack to get to other widgets.

Most importantly, the widgets in the Smart Stack on my iOS Home Screen display ambient information, akin to how Watch complications do. It's in the interface I'm glancing at, not some other destination.

Naturally, this means I have no reason to open the thing. Ever. Under any circumstances. And yet, I somehow get both the swipe up, and the upwards swipe on the Digital Crown to get to it? It deserves two special gestures? Is it that beloved and adored?

I’m sure some people do find this useful if they prefer to use one of the watch faces with fewer complications, or smaller complications. It’s a computer watch but if you want it to feel like a Swatch Watch, go for it. The stuff in the Smart Stack will always be reorganized in some baffling way when you want to get to it, but you do you.

Having said that, give me the swipe back.

We did it for Natural Scrolling, we can do it for this.

Or hell, go ahead and make us sorry we wished for it back by making a Smart Stack widget that opens the Control Center. Make single serving Smart Stack widgets for Wi-Fi, Airplane Mode, Sleep — really make us sorry we complained!

I just don’t want to need to do something, and either accidentally swipe, or remember to press the button. Even a successful interaction is as annoying as a failure.

2024-01-08 19:30:00

Category: text