Tim Cook on set at Apple Park. Image courtesy of Apple.
I wasn’t going to say anything about the “Shot on iPhone” tag at the end of the Apple event yesterday on this blog. There’s nothing to really write about that isn’t covered by Apple’s own words. I made my snarky Apple Event jokes, like they should have shot it on a MacBook’s 1080p FaceTime camera, but that isn’t a complaint about using iPhones to shoot things, and I didn’t think there was anything to say.
Was I ever wrong!
The most popular post on The Verge, two days after the event, is one insinuating that Apple wasn’t being genuine when it shared behind-the-scenes images and videos of the event. The writer seemed to think they caught Apple:
It’s a neat way to promote the recording quality of iPhone cameras, but it’s not like everyday folks can recreate these kinds of results at home unless they happen to own a shedload of ludicrously expensive equipment. The gear shown in the “Scary Fast” behind-the-scenes footage is fairly standard for big studio productions, but Apple’s implication with these so-called “shot on iPhone” promotions is that anyone can do it if only they buy the newest iPhone.
If Apple had said the video was shot on an Alexa Mini, or a Sony FX-3, would the writer’s criticism be the same? I’d hazard that no, it wouldn’t be. That the iPhone can be inserted into the same production workflow Apple would otherwise use for this event is absolutely a normal thing to laud.
Sure, there’s more to post production that I would love to find out about from Apple, but it’s the same as any of their other polished Apple Event videos. They didn’t deepfake Tim Cook and say the iPhone 15 Pro shot that.
People think lights and crews are cheating? Because there’s VFX in places, editing, color timing? Of course there is. This is a commercial video. It’s not a benchmark of what you, a total novice, can shoot in your backyard. It’s a camera in lieu of another camera. It is not replacing the entire production pipeline of shooting video.
Some people see “Shot on iPhone” and think it’s just some bumblefuck named Frank standing there all by himself, holding the iPhone in his hand and shouting “action!”
In a way this seems so bizarrely evident that it needs no explanation.
Stu’s video completely explains log, and what it’s used for. I assume people check out his stuff? I don’t know why I need to repeat it, which is why I don’t write about it. You follow Stu, right?
The ability to record to external media is why they can shoot for a significant amount of time, because the files are huge. You don’t shoot your home movies that way. That’s not for the bad videos of concerts you upload to Instagram. It is, however, not a trick Apple has pulled on you.
Just to be extremely clear: the iPhone footage is still going to have the sensor and lens properties of an iPhone, so if someone wants a certain look, they might want to consider other options with a more flexible array of lenses. Again, not a trick; it’s literally all right there in your hands. If you needed to “run and gun” in low light without the ability to set up lights, you’d want something with better low-light sensitivity. If you wanted dreamy out-of-focus backgrounds, you’d want a big sensor and big glass. The iPhone won’t replace IMAX, etc.
It’s not up to Apple to make the iPhone 15 Pro serve every situation and every need. Confusing Apple shooting this event on the iPhone with theoretical scenarios not depicted in the video is unhelpful.
This whole kerfuffle is similar to something from only a couple months ago, when people got all worked up about The Creator being shot on the Sony FX-3. The camera, in and of itself, didn’t shoot that movie. The workflows enabled by having a smaller camera were complemented by the nimble, resourceful team shooting the project. If someone ran out and bought an FX-3 they wouldn’t have The Creator, any more than running out and buying an iPhone 15 Pro means you’re going to make an Apple video presentation by yourself.
It should, however, encourage people to think resourcefully about tools at their disposal, and the iPhone 15 Pro is another tool to consider, and potentially reject, based on your specific production needs, along with all other cameras. This is why people do camera tests, for crying out loud.
If you’re not someone creatively inclined to care about any of this, then I invite you to stop arguing on the internet about it like it’s an abstract, unknowable concept, or rooting for various sides like they’re sports teams.
The range of default configurations for 15” and 16” Apple MacBooks.
Apple silicon is fantastic. Performance per watt is off the charts (actually, it’s very much on Johny Srouji’s charts). However, I keep putting off upgrading from Intel to an M chip because the cost is eye-watering, and the Apple laptop line-up hasn’t seemed particularly predictable in its release cadence or feature set.
The MacBook Pro has been refreshed three times so far, twice this year alone. The 13” MacBook Pro finally kicked the bucket, but prices didn’t really come down on any of the models with Pro chips; they still start at $1999, and $2499 for a 16” screen. We finally got the 15” MacBook Air this spring, but it’s only a low-end config. There’s no way to spec a MacBook Air with an M2 Pro or M3 Pro chip.
Why do I care about the “Pro” chip so much? Despite the name, the Pro is really the middle chip, but there’s no middle laptop for it. The base M2 and M3 can be configured with more RAM (to a point), but they can’t be configured with extra ports, or even drive more than one external display. They’re not like pokey Centrino chips — they do have the ability to perform — but they are inflexible for certain workflows that require additional connectivity, like dual displays.
It’s pretty easy to argue that dual-display support is a high-end feature, and thus demands a $1999-or-more computer, but that wasn’t true of Apple’s Intel-based laptops. It has felt like a regression to me ever since the introduction of the first M1 chips, and it’s not something Apple wanted to correct in the M2 or M3.
I freely admit that my sense of how important dual external displays are is colored by my need to use them, but then again, this is my blog, so deal with it or close the tab.
I need those displays for my work, and no, my employer doesn’t pay for my machine or subsidize it in any way. I connect to a workstation with remote desktop software and do the actual performance crunching on that computer. My machine just needs to display things. If the base M chips could drive dual displays in clamshell mode it would be a no-brainer and I would get the 15” MacBook Air.
I have a 2018 15” MacBook Pro with a 2.6 GHz 6-core Intel Core i7, crap Intel UHD Graphics 630, 16 GB of 2400 MHz DDR4 memory, and a 512 GB SSD. I bought it in 2018 for $2799. I have held on to this machine for an eternity in Apple years because I paid so much money for it. The 15” MacBook Pro I had before this was $1699 (I think; I can’t find the exact receipt) and was similarly held for four years.
The battery performance on the 2018 is shot to hell, but the prospect of going into a store and spending money on the battery isn’t appealing. The revised butterfly keyboard has been mostly fine because I don’t generally use it as a laptop, but now the “o” key sometimes inserts “oo” which is also not a lot of fun.
The price-equivalent model, the middle config, is the $2899 M3 Pro with a 12-core CPU, an 18-core GPU, 36 GB of unified memory, and a 512 GB SSD. The one down is $2499 and has half the RAM.
Doesn’t seem like we’ve made much headway with storage in five years! The build-to-order bump to 1 TB is $200 for either model, which would give me a $2699 or $3099 machine. I’m hovering at 153 GB of free space on my current drive, so despite my efforts at optimizing, it does seem likely that I would cross the 512 GB barrier, or dedicate more of my time to file management, offline storage, and fighting cloud services to really please keep that file I used seven days ago.
While a lot has been written and said about how Apple’s chips are more efficient with RAM and storage than Intel’s, people don’t expect to buy exactly the same storage as their current laptop, or less. It’s not user-upgradable, so it’s not the kind of thing one can try out and adjust later if they guessed 18 GB instead of 36 GB. You’re stuck.
Someone might be asking why I don’t consider the 14” MacBook Pro, because it can be had for less than the 16” models. Its display is too small. I really do need a larger display for the times I take my computer away from my desk. I do not, however, need it to be as beautiful and luxurious as the Liquid Retina XDR display. While that is definitely a wonderful screen to have, it is not essential.
What I had been really jazzed about, before it was announced, was the 15” MacBook Air. There were rumors it would ship with the M3 chip, so maybe that one chip revision would drive two displays? Naive, of course, for multiple reasons.
The 15” MacBook Air, configured with 16 GB of RAM and a 1 TB SSD, is $1899, $800 less than a 16” MacBook Pro with similar storage. Without an M2 Pro or M3 Pro chip it’s not much of a discount, because I can’t use it for my work.
Maybe Apple would sell more 15” MacBook Airs if they had a more capable one?
Let’s ignore the 13” and 14” laptop models, because they muddy the pricing, and if someone is shopping for a large screen, they aren’t considering those. We really only have one configuration of the 15” MacBook Air. Apple does sell two at the Apple Store, but the only difference is 256 GB (shame!) or 512 GB of storage. There is no other M-chip configuration at all.
That means there’s a price umbrella between $1499 (15” M2 MacBook Air, 8 GB/512 GB) and $2499 (16” M3 Pro MacBook Pro, 18 GB/512 GB). A thousand dollars where the only thing that can fill the gap is custom RAM and SSD sizes, with no chip variation at all. That money gets you a larger battery, a fan for cooling, a much better display, and more connectivity with HDMI, an SD card slot, and an additional Thunderbolt port. It’s not like those cost $0, but if they’re less important to a shopper, they might gravitate towards the Air, if only it had at least one config with a more capable chip.
Honestly, I understand that the Pro chips run hotter than the base M chips, so they need a fan, and there isn’t a fan in the current 15” MacBook Air, but can we under-clock it? Throttle it? Something? Can we turn off some more cores on the Pro chip? Look, I’m not Johny Srouji, but there has to be some way to get an additional display driver into a laptop under $2499.
Changing the design to disable the internal display in clamshell mode, and use both display drivers for external monitors, doesn’t seem to have any appeal to Apple, so why not disable a bunch of cores and charge a bunch of money? You guys like money?
What the 15” MacBook Air is, as it exists, is a good low-end laptop with a middle of the pack price tag. The 16” M3 Pro MacBook Pro is a very good high-end laptop with a high-end price tag, and the 16” M3 Max MacBook Pro is a ludicrous laptop. There’s no real middle, and that price gap shows it.
Some people think the gap is covered by the 14” M3 Pro MacBook Pro, but again, I assert that people are not cross shopping screen sizes like this. We are unlikely to ever get a cheaper 16” MacBook Pro, unless they put the vanilla M3 in it, and I don’t see that solving anyone’s problems at all.
Eventually, my machine will no longer be able to serve its function, either from hardware failing over time, the OS not supporting it, or some accidental damage or loss. Entropy isn’t going to wait for me because I’m cheap.
If I absolutely needed to buy a Mac right now, it would probably be the 16” MacBook Pro with 18 GB of RAM and the upgraded 1 TB of storage. Then I would hold on to it for 5-6 years to eke out every penny. If I don’t have to upgrade, though, then I won’t. As beneficial as it would be, perhaps there’s some configuration around the corner that will be better balanced for my needs, or at the very least, maybe Apple will bump the base storage above 512 GB.
Look, I don’t want to be a jerk, but I am going to be a little spirited in my criticism of the new Apple TV app. I don’t want to hurt the feelings of anyone working on the new app, but I doubt they read what I write about Apple TV. If they did, they wouldn’t be shipping this. What I’ve wanted from the TV app is pretty simple:
Unify media and apps into one interface, with the ability to pin favorite apps.
Reduce the amount of Apple TV+ promotion in the interface, particularly for non-subscribers.
Properly personalized recommendations based on viewing habits.
Handle live TV through a unified programming guide, like Amazon does, instead of pretending the only live TV is live sports.
The new interface miraculously resolves none of these things.
Upon upgrading to 17.2 and opening the TV app for the first time, you’re still treated to the same behavior I’ve bemoaned in the past, where the app defaults to the Apple TV+ view, asking me to “Come back for new Apple Originals” with Jason Sudeikis’ mustachioed face right next to it. There’s no more Ted Lasso, fellas. Let it go.
The slide carousel thing quickly flips through the veritable cornucopia of Apple TV+ originals. Under it is the Up Next on Apple TV+ row, with the last show I was in the process of watching before I let my subscription lapse. That’s all that’s visible, so you might be forgiven for not knowing that you can swipe/tap down in the interface to reveal more pages of round rect tiles.
Apple should do what it does elsewhere and show the cropped off top edges of those round rects so people know they can scroll down. This is not just for Apple TV+ but applies to other Channels & Apps which share this same styling.
After being accosted by Apple TV+, you can get to the sidebar either by tapping the “<” or “Menu” button, or by swiping all the way to the left. The bar shows the current user profile, but it can’t be changed there; that needs to be done in Control Center.
The sidebar’s main contents are:
Search
Watch Now
Apple TV+
MLS Season Pass
Sports
Store
Library
They’re all pretty self-explanatory, analogous to what we’re used to from before 17.2, but in a vertical, auto-hiding bar instead of a horizontal pill thing. The big exception is the carve-out for MLS Season Pass, which didn’t have its own spot before, but because Apple would like people to pay them money, they elevated it. A cynical person might suspect that a primary motivation for the vertical sidebar is to fit more top-level Apple content than the horizontal pill thing could hold.
None of those elements can be hidden, moved, or otherwise edited.
The lower portion of the bar was very interesting when I first saw the screenshot posted by Sigmund Judge, but then I saw it in action and said, “Oh no.”
Apple had a problem with overlap between Channels and Apps for many years. Mostly, the TV app interface promoted a row of Channels that would contain services you were already subscribed to as apps, like Paramount+ and the rest. There were other anomalies too, because services that had Channels often also had apps that integrated with the TV app. That seems to be consolidated now. Channels & Apps shows the apps I subscribe to that integrate their video-on-demand offerings with the TV app. For me, that happens to be:
Disney+
Hulu
Max
Paramount+
PBS Video
Prime Video
This is not a comprehensive list of the apps and services that I use, of course, because Apple is still insisting on full TV app integration to earn a spot here. Even apps that integrate with the Apple TV app for Up Next (DirecTV) don’t appear.
The sidebar should work as a launcher for any app that doesn’t integrate with the TV app; this punitive measure of sending people to the Home Screen is not going to make Netflix budge, and it will never make any sense for YouTube. Particularly when developers and users find out how limited this integration is.
It will also get less useful in my household as several of those supported channels will have their subscriptions lapse next month. Your mileage may vary depending on what you’re subscribed to.
If there is a Channels & Apps element that you don’t want to see, you can long-press on it to hide the element, a courtesy we don’t get for MLS Season Pass.
There is also no way to organize Channels & Apps in the side bar. Hopefully the services that you subscribe to are of alphabetical importance to you.
I’ll discuss my thoughts on Channels and Apps a little more, and then we’ll pivot back to “Up Next”.
Amazon Prime Video without the cruft, and also without most of Amazon Prime Video.
Let’s kick it off with Prime Video, specifically, because a lot of people have it. I would venture that a lot of people are also pretty unhappy with the Prime Video app’s interface. What if I told you that the C&A view of Prime Video doesn’t really help a lot?
There’s the carousel of promoted stuff, and then under it the Up Next row, specific to the last titles the Apple TV app is aware that I watched in Prime Video (this is important if you watch Prime Video on non-Apple hardware, like an Amazon Fire TV device, or in a browser). If there’s nothing in Up Next, it shows the next row under it, Amazon Originals: Highlights. Then several rows of Originals for Movies, Series, and Kids, before getting to “Movies” with licensed content, etc. At the bottom is “Sports” with tiles for live games.
Curiously, the popularity-based rows are missing, and if you’re at all familiar with the Prime Video app this will seem positively spartan.
None of this is personalized for me in any way like it is in the Amazon app. These rows are for some generic human that could be anyone else using the service in the United States.
Similar de-personalized blocks of round rects can be found under the other Channels & Apps, but some do have the popularity rows and some don’t. They’re all very utilitarian, and depersonalized.
Since all of the companies with a presence in C&A also build their own apps, and all of those apps are much more personally relevant to users, and also more relevant to the companies’ own interests, I fail to see how a spreadsheet of general audience promos is going to steer consumer behavior, or budge any corporation into putting in effort here.
While a lot of people decry the buggy, laggy third-party apps, and their un-Apple-like interfaces, this is the total opposite: shoehorning generic stuff into generic rows of generic boxes. In the abstract it sounds simpler, but it comes at the expense of usefulness.
Something like this interface is quietly buried in the current version of the TV app. On the Watch Now page scroll all the way down to “Streaming Apps” and click on any of those app tiles for an app you have installed, like Prime Video, and you’ll see what I’m talking about without having to install a beta.
The Watch Now interface is largely unchanged in the current dev beta from what’s shipping. That’s not encouraging, because Watch Now is so overrun with Apple TV+ promotion that it’s virtually useless to someone who isn’t an active Apple TV+ subscriber. The most useful part of the whole interface is the Up Next row, which still shows what you were in the process of watching, and displays new episodes of a series you were watching.
Other than checking in on the TV app to see if Apple’s changed anything, I haven’t used the TV app much at all over the last year. Instead, I’ve optimized things so I can get to Up Next when I want to without having to open the app at all. Fortunately, those settings still work just fine in the developer beta. The settings for anyone interested:
Move the TV app to the Top Shelf (top row) of the Home Screen interface, if it isn’t there already.
Settings -> Apps -> App Settings -> TV -> Up Next Display -> Poster Art (helps to avoid spoilers in preview images)
Settings -> Apps -> App Settings -> TV -> Top Shelf -> Up Next (What to Watch is just a video billboard for Apple TV+)
Back on the home screen, when hovering over the TV app icon in the Top Shelf, you’ll now have direct access to your Up Next shows.
To remove auto-playing videos as well, you have to go to Settings -> Accessibility -> Motion -> Auto-Play Video Previews.
I suppose I should thank someone on the TV dev team for leaving these workarounds in place, but I’m frustrated that I still feel compelled to use them. That What to Watch is so optimized to juice conversions and boost hours watched in Apple TV+ undercuts its use for anything else.
It’s great if there’s someone that only wants to watch exactly what’s in that carousel, but even those people probably don’t want to see the same shows repeatedly advertised to them because there’s no personalization in what’s recommended.
Recommendations still live 20,000 leagues under the Apple TV+ shows. Long-pressing for options still only reveals “Go to Show” and “Add to Up Next” with no option to mark a show, or movie, as watched, or to tell Apple that you never want Real Time With Bill Maher recommended to you.
Frankly, given the lack of personalization everywhere else in this interface, it’s baffling to take what little of it there is and hide it waaaaaaay down here. Especially if Apple ever does anything with user profiles, where content recommendations could visibly change based on the profile being used. No one has any hope of noticing that something way down there changed when the profile was toggled. Is someone supposed to use this or not?
I know Apple has a research team, and they’ve surely conducted focus groups, and surveys about the Apple TV, and the Apple TV app. I also know that I’m not the only person complaining about this stuff. I might be pickier than most, but I don’t hear people clamoring for more carousels of generic feature films and TV shows.
As much as I would like Apple to unify this interface, I’m content to leave it separate, because I’d rather hopscotch around all my streaming apps than go bobbing in and out of the TV app for some things and the Home Screen for others. It is preposterous that Apple thinks this problem was solved by switching from a horizontal pill to a vertical sidebar so they could show the barest handful of apps, with the barest handful of those apps’ films and shows, not directed at the user in any way at all.
Apple can’t ignore Netflix. It is not going anywhere. It can’t ignore YouTube, or non-sports live TV. It can’t pretend that the universe revolves around Apple TV+ and consider all other streamers as ancillary add-ons. These were problems before the app was redesigned, and they all remain problems in this developer beta. What are we trying to fix here?
Jason Snell and Julia Alexander had another great discussion on their great podcast, and it just so happens to be about the same stuff I was talking about in my post yesterday.
This morning, two emails came in from Apple. The first was about the upcoming renewal of my yearly subscription to Disney+, and the second was about the price increase of my Disney+ subscription from $79.99 to $109.99. It took me all of 10 seconds to think about it before I canceled my subscription. It’s not really about the $30 price increase, which is outrageous, but mostly because there won’t be 12 months of TV shows and movies to watch, in no small part thanks to the actions of Disney’s CEO.
I’m affected by the ongoing strikes, where the studios have once again walked away from the negotiating table. Almost all entertainment production has ground to a halt. The recent resolution of the WGA strike was celebrated, but it was a long, and painful process. All of our hopes, as workers in this industry, were pinned on the SAG-AFTRA dispute coming to a close. Instead, the studios decided to do what they did months ago in the WGA strike and take their case to the Hollywood trade publications owned by Penske Media in the hopes that they could gin up dissent in the union to push the union negotiators. It didn’t work with the WGA membership, and I don’t think anyone other than studio brass expects it to work on SAG-AFTRA either. There can be no resolution external to negotiation, so this just delays everything.
Films and shows that were slated to come out this year have been spread out, some pushing to next year. You can only spread things out so far, and that is particularly true of all the studio streamers that got into this game late.
When Disney+ first premiered it was criticized for only having The Mandalorian and WandaVision which were both over in the blink of an eye. Marvel, for me, has had diminishing returns since then, and I would not subscribe to Disney+ to exclusively watch anything from Marvel at this point.
Star Wars, unfortunately, is also on the bubble after several disappointments, including the most recent, Star Wars: Ahsoka, where I watched 50 minutes of space whales doing space whale stuff. Stretching Star Wars: Skeleton Crew and Star Wars: The Acolyte out into next year, with both unlikely to be as inspired as Star Wars: Andor, doesn’t convince me that $109.99 needs to leave my bank account right this very minute. If anything, I can always dip in if a show is good, or wait until it’s over and binge. A year of basically nothing isn’t a year I need to pay for.
Disney, and everyone else, sold streaming licensing rights to Netflix for many years. It was like selling the broadcast rights for a film to appear on an old-fashioned TV channel. Audiences were trained to expect streaming media. Bob Iger, infamously, compared this to arms dealing, in a metaphor that no one really thought he should be making. Then Disney’s contract with Netflix ended and they went direct to consumer with Disney+.
I still remember the live-streamed shareholder meeting where Disney+ was first announced, and Christine McCarthy (former CFO) outlined how the company’s low introductory price, and small number of territories, meant that it would be unprofitable for many, many years. Disney expected the stock market to treat Disney like it treated Netflix, which at the time was operating on a growth model. Everything was upended, first by the pandemic halting production and starving new services of new entertainment, and then by the Netflix Correction, when interest rates were up and shareholders wanted to see revenue off those subscribers, not just growth. That was not something Disney was prepared for, at any level of the company.
Bob Iger’s plans for Disney+ blew up spectacularly, and his hand-picked successor was ousted (by Iger himself) so he could fix his own plans. Unfortunately, he’s been one of the people chiefly responsible for the WGA and SAG-AFTRA disputes this year. A lot of the financial assumptions of streaming have to do with taking advantage of labor contracts that didn’t cover streaming well. The AI likeness issue is a huge deal for Disney, and not just for the money; they would very much like to treat actors like ageless property that can be applied to their sprawling connected universes.
Neither the writers nor the actors called for boycotts of the streaming services. They want to work, and they want to be paid, which means people need to pay the studios. That’s how the system is supposed to work.
Hell, I have absolutely nothing to gain by people canceling their subscriptions. The only way I get paid is by the studios hiring VFX houses. Actors and writers don’t pay me.
Having said that, I absolutely do not need to pay studios a yearly subscription for 2024. They haven’t earned that from me on a personal level. Also, none of these streaming price hikes translate into any benefit for me; the pressure is still on cutting the costs of producing content.
Which services each person chooses to pay for is deeply personal, based on all kinds of factors, so there will be people for whom Disney+ is still essential, even without new content. Or they might be willing to step down to an ad-supported plan (I’m not). Disney+ does have an extensive library.
Same goes for Warner Brothers Discovery’s Max, which has been rightfully criticized along the way because of David Zaslav’s bumbling, and attempts to stuff the service with lower-cost unscripted fare from Discovery.
Smaller streamers are faced with these same problems, because they did what the big studios did, but they don’t have the libraries, IP, cash, or really anything to compete with the big studios. That’s why they’re small studios. It’s not that hard to understand, but it’s why Paramount+ following along has really hurt Paramount Global. While Comcast is huge, the part that’s a studio (NBCUniversal) isn’t.
However, things get a little more interesting with Netflix and Prime Video. My opinion of these services is not especially high. In the past month, Prime Video announced it was going to include ads in its previously ad-free entertainment, which was already intermingled with ad-supported Freevee content. An extra recurring fee can be paid to watch without ads. Amazon keeps tacking all these fees onto a product that was supposed to be an all-inclusive affair. Like, that’s supposed to be their whole thing, you know? Their shows are generally expensive, and seemingly don’t do much to move any meaningful internal metrics for Amazon. It still pays to license some high-quality TV shows and movies from others. That’s really its core value, which I’m not sure Amazon entirely understands.
Netflix’s mediocre filler piles up more and more every day, like a machine that only knows how to spit out half-packaged products. Its biggest asset is its size. I wouldn’t say it’s too big to fail, but it’s too ubiquitous to ever fail quickly. Which is why they can continue to generally fuck up a lot, and do stupid shit. Shareholders have calmed down since the Netflix Correction, and they’re generally content. There are no concerns about Netflix ratcheting its fees up too high; the concerns are about the slow growth of the ad-supported tier meant to bring in more price-conscious consumers.
The most recent stumble for Netflix has been in kid-focused animated entertainment. Their history was mostly buying up stuff from outside companies (The Mitchells vs. the Machines is frequently cited as a Netflix film, but it’s a Sony Pictures Animation film that Sony sold off in the pandemic), until they started their own internal division. Then last week they announced they were killing projects and cutting staff. Then this week they announced they were taking over Apple’s end of its deal with Skydance Animation, the animation company owned by nepo baby David Ellison and headed up by the disgraced John Lasseter. David proved to the world that it was worth it to hire John by releasing Luck, a film that was, at best, fine, and not a critical or financial success for Apple.
Netflix’s flailing at making its own films and TV shows kind of doesn’t matter, because it’s so big that it can turn licensed library material into a phenomenon. The most recent example is the craze over Suits, which originally aired on USA Network, owned by NBCUniversal. NBCUniversal can’t turn anything into a success because it entered the market late with Peacock, and no one could think of a reason to subscribe to it if they weren’t already a Comcast customer.
This ability to turn something that could be a small success and a large financial loss into a big hit, by virtue of its ubiquity, is what Netflix does best. To circle back to animation, this is also why the plug was pulled on the Paramount+ and Nickelodeon show Star Trek: Prodigy and it found a home on Netflix. Paramount Global will get money for something that wasn’t working for them, because Paramount+ isn’t ubiquitous.
Netflix has also licensed a bunch of HBO shows, which is hugely damaging to Max, because it trains customers to expect HBO shows to eventually come to Netflix, but it does help Netflix with its quality problem.
So Netflix increasing their prices to nearly cable-package prices? It’s not something I like at all, but it’s working for them because they’re basically a cable package of licensed stuff again, like they were before the studios panicked and tried to start their own streamers.
They can weather anything 2024 throws at them because there will always be studios desperate enough to sell shows to them, even if that excludes Disney. The SAG-AFTRA dispute could go on for another year and they’d buy their way through it. As repulsive as that thought is, it’s true.
There is, unfortunately, the very real possibility that nothing gets resolved before the winter holidays — or even if it does, productions won’t start ramping up until next January or February. When it resumes, it’s not like turning on a light switch. The entertainment industry works with overlapping schedules. This production starts while this one wraps, this is shooting while this is in post. You just can’t do everything in parallel. That means we’re also going to get a trickle of finished TV shows and films. Even lowering standards and “putting a rush” on things doesn’t bypass everyone else doing exactly the same thing.
That means that even if people aren’t anticipating the drop in stuff to watch, they will eventually realize that there’s not much new. It’s going to be much harder to get those people to resubscribe, at high prices, than it was to get them to subscribe the first time.
I’m not sure what’s going to get me to reactivate Disney+, maybe it’ll be whenever the second season of Andor comes out towards the end of 2024 or early 2025, but it’ll just be the two months the show runs, and then I’m out again like the Falcon leaving the Death Star. Disney has lost yearly subscription rights in this household. A brand is a promise and the only promise I’m interested in from Disney is a single TV show, on a calendar far, far away.
Jason Snell writing for Macworld (but also on Six Colors if you’re a subscriber) writes about recently having to set up all his stuff from scratch on the Mac, and it’s a total pain in the ass.
Every time I opened an app on my Mac after starting from scratch or migrating or installing a major OS update, I was barraged with security warnings. This is because Mac apps can’t do much of anything (outside a very constrained sandbox) unless they ask the user for permission. So, if an app wants to read files on my Desktop, there’s a permission request. Documents folder? Another permission request? Use my microphone or video camera? Permission request. Reading random files and folders? Reading the disk? Using accessibility features? Using automation? Yep, yep, yep.
I don’t want to do what Jason’s had to do. I use very few applications, but I can’t even remember what the hell I’ve agreed to over the years. I have to deal with enough of this bologna just doing stuff on a system that’s not a scratch setup. Just yesterday, I downloaded NetNewsWire to test the RSS feed of this site and got the usual warning:
I paused, and I read that a few times to make sure I was comprehending the warning. I was warned that the application was downloaded from the internet (I downloaded it) and asked “Are you sure you want to open it?” because I had double-clicked on it to open it. Both of those things were definitely true, so what does the little gray text mean? Oh, it wants to tell me the time it was downloaded by Safari, which I guess I could put in my personal journal, but most importantly that Apple checked it for malicious software and none was detected.
Are you sure you wanted to do the thing that you told the computer to do even though it’s safe?!
I’m sure that there are circumstances where someone could be tricked into doing all of these things, but even if I was tricked into doing it, doesn’t the fact that there’s no malicious code negate the warning? Shouldn’t I be prompted only if something seems suspicious, and not just because I’m, you know, using a computer? If I agree to open it and it has malicious code Apple didn’t detect then I guess that would be bad, but wouldn’t we all have bigger problems if that were the case?
It’s such a small thing, especially when compared to the volume of dialogs Jason’s dealing with, but it’s the kind of thing that just happens out of nowhere when I’m using a computer and derails my thought process for a minute. Minor inconveniences do add up over time, especially when you have to repeat all those inconveniences because you have to rebuild your digital life in a matter of days.
If you haven’t been following my constant complaining on Mastodon you might be wondering why your RSS feed reset the read count on all the posts. I recently undertook a lot of technical work on this site that I had been putting off, the side effect being whatever happened to the RSS. It’s fine now, I think? Probably?
This blog was set up when I had a lot of free time on my hands between jobs, as the film VFX work in Los Angeles really dried up, and before the push for film-quality VFX work in TV. While I was doing all that TV VFX work, the technical underpinnings of this site just kind of sat there. I never could figure out how to configure certificates to get https working, but I excused it as unimportant because it’s not like people were doing banking on this site.
For the past nine years I would just write when I had time, and if the site needed maintenance then I couldn’t write. It just needed to keep barely being sufficient so it wouldn’t be in my way.
The site was also written in Python 2.7, which I was familiar with from its use as a scripting language in VFX software packages. Migrating to Python 3 hasn’t been easy for anyone, and the rewards for migrating are … well. Someone can let me know what they are later. The biggest issue was that all the third-party modules I relied on to generate this static site required different arguments and produced different results than they did in the 2.7 branch of the same software.
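For anyone who hasn’t had the pleasure, the single biggest 2-to-3 tripwire is the bytes/str split. This is a toy illustration of the general problem, not the actual site code or the actual modules involved — code that silently coerced byte strings in 2.7 tends to blow up (or emit different output) under Python 3:

```python
# Toy illustration (not the actual site code): in Python 2, byte strings and
# text strings coerced silently; in Python 3, mixing them raises TypeError,
# so anything read from disk has to be decoded explicitly.
def render_title(raw):
    if isinstance(raw, bytes):
        # Explicit decode — the step Python 2 let you skip by accident.
        raw = raw.decode("utf-8")
    return "<h1>{}</h1>".format(raw)

print(render_title(b"Hello"))     # bytes, e.g. from open(..., "rb")
print(render_title("Caf\u00e9"))  # already-decoded text
```

Multiply that by every template library and markdown processor in the pipeline and you get “different arguments, different results.”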
Lastly, the virtual server this site was hosted on used a 32-bit install of Ubuntu, which Ubuntu stopped updating, and there is no migration path from a 32-bit install of Ubuntu to a 64-bit one. It would be unsafe to keep it running and hosting stuff connected to the internet, especially as time went on.
That’s why I started up a new server instance last week, configured Caddy (which wouldn’t even install on the old 32-bit server) to serve https:// crap, and then proceeded to rewrite the static blogging script until it produced almost identical output (if you diff the html pages there are some formatting anomalies, like html entities vs unicode entities, but the rendered result is the same). I also tweaked some CSS stuff. I’m a fan of Solarized, which gave that yellowish background color to the site, but my friend Dan whined about it so it’s white now. Happy, Dan? Of course you aren’t.
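The Caddy part is less work than it sounds, which is most of why I picked it: automatic HTTPS is basically the whole pitch. A minimal Caddyfile for a static site looks something like this (domain and path here are placeholders, not this site’s actual config):

```
example.com {
    root * /var/www/site
    file_server
    encode gzip
}
```

With just that, Caddy obtains and renews the certificates on its own, which is exactly the part I never managed to wire up by hand on the old server.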
I’ve also added template logic for Open Graph images, so if I provide a path to an image in the multimarkdown metadata on a post, it will display the image when shared with anything that uses Open Graph to render previews. Old posts will still show that shrug thing.
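The template logic boils down to emitting a meta tag when the metadata key is present and falling back otherwise. A minimal sketch in Python — the key name, function name, and fallback path are all made up for illustration, not the site’s actual code:

```python
# Hypothetical sketch of per-post Open Graph logic; the "og_image" key and
# the fallback path are assumptions, not the site's real template code.
def og_meta(metadata, fallback="/images/shrug.png"):
    # Use the per-post image from the multimarkdown metadata if present,
    # otherwise fall back to the default (old posts get the shrug).
    image = metadata.get("og_image", fallback)
    return '<meta property="og:image" content="{}">'.format(image)

print(og_meta({"og_image": "/images/some-post.jpg"}))
print(og_meta({}))  # an old post with no metadata key
```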
I’m hopeful that I’ve reduced the friction to post on the site, and friction for people to read the site (I know the scary security warnings in the web browser title bar certainly alarmed people visiting the old server, even though it was harmless). If you do experience a problem after today, please let me know, as I might not be aware of how your particular feed reader has parsed my site. I am deeply uninterested in moving anything for another nine years.
I truly don’t know what’s going on over at Amazon these days. In the past couple of weeks they announced a barrage of hardware and software that spanned everything from updated FireTV sticks that only have a $10 price difference between models (why?), to security camera products for both of Amazon’s security camera product lines, to a new Hub that can do Show stuff but Shows can’t do Hub stuff, to a $600 router, and stealing Panos Panay from Microsoft. Alexa hasn’t had a meaningful new feature in a while, but now it’ll have a newer, more natural voice, and will use everyone’s favorite buzzword, AI, in the form of a large language model that will slowly roll out to make sure it doesn’t hallucinate too much.
On the services side, Amazon is also increasing the additional monthly fees they can tack on to Amazon Prime, and devaluing Prime Video further. Previously, they caused confusion and diluted the Prime Video “brand” and app with the insertion of IMDb TV, which was renamed FreeVee. Selecting one of those titles showed ads, even though Prime Video titles did not (other than self-promo pre-roll ads that all VODs show). Now advertising comes for us all, unless Prime subscribers pay an additional $2.99 a month. Alexa Guard, a feature where Amazon listens for disturbances in your home while you are away (smoke detectors, the sound of breaking glass, etc.) has been retired so that its features could be added to Alexa Emergency Assist, which will be an additional $5.99 a month fee.
In many ways, Amazon is the anti-Apple. Instead of steady (if glacial) progress, there are these bursts of activity that don’t feel especially focused or cohesive. Like the aforementioned FireTV sticks, which compete with each other ($10 price difference!) in addition to competing with Roku. Also they get software features for a new Up Next row that won’t launch until some later point, and the more expensive of the two can turn your TV into an ambient smart display that only the “high end” Amazon-branded TV sets could do before.
A large part of my skepticism about the products is from my negative experiences using Amazon products in recent years. They used to be inexpensive and not particularly beautiful, while still being functional, often offering features that Apple charged a premium for, like 4K HDR video under $100, with a voice assistant and universal search. They even do things Apple can’t do, like aggregate live TV programming guides from disparate apps into a single OS-level interface for navigating “what’s on”.
They have had Echo Show, and tablets with an Echo Show mode, for years while Apple just launched their Stand By mode only for iPhones in a horizontal position on a MagSafe dock as their first foray into ambient (if pretty limited) computing.
However, the Echo Show tablet I have will not stay connected to the Eero WiFi, and not a single FireOS update has fixed that. All the updates have done is push more junk into the thing when it is connected to the network. The writhing mass of growth-hacking bottom-feeders (the product managers) at Amazon just keep adding new categories to the Echo Show that need to be opted out of. Like every brainless marketing bro who just adds a new opt-out newsletter to hit the email inboxes of everyone who opted out of the last newsletter, and the one before that. Lazy, invasive ads that you can’t pay extra to remove.
There is a distinction, of course, between paid advertisements (where a third party has paid Amazon to promote something), first party advertisements (enticing someone to purchase or subscribe to something through Amazon), and feature announcements (Alexa can now do handstands, just ask Alexa to find out how). To most people those are unwanted ads, even in cases where a product manager argues they’re helping by surfacing products that might be relevant, or increasing awareness of functionality, or TV content. I certainly haven’t found that to be the case recently.
Last month I gave up on trying to reconnect it to the WiFi because what’s the point? Seeing the weather and time is no longer worth the frustration of this spam, nor is talking to Alexa in its current state. Here’s Dan Seifert, of The Verge, on Mastodon:
There’s nothing enticing about the prospect of buying any of those newly announced ambient displays precisely for that reason. It’ll just be paying for that same experience (although hopefully with a more reliable WiFi stack) and that’s not a “deal”.
I stopped using my FireTV Stick altogether, only occasionally plugging it in to check on the live TV updates, which are genuinely interesting in the field of smart TVs. The day-to-day experience of using the device has gone from functional to just bad. It really came down to the changes Amazon made to their home screen to, once again, shove garbage there. While Apple TV’s TV app has the same issue with sacrificing user experience to growth-hack Apple TV+ subs, the TV app can be wholly bypassed in a way the FireTV’s home screen can’t. A new quad core chip for the FireTV 4K Stick isn’t going to lure me back. Faster trash? No thanks!
The changes to Alexa are, once again, something I’m deeply skeptical of. I just want to set timers and ask for the weather. The simple things in life. The move to push “Did you know I can…” and “Would you like me to…?” responses after every query has made both me and my boyfriend use Siri on our iPhones instead. Not because we deeply love and respect Siri, but just because we don’t want to have to yell “NOOOO” at Alexa after every request for a simple task.
I have never been particularly alarmed by how invasive the devices are, but the inconvenience — the abject shittiness of it — that’s the issue for me.
There’s a pretty invasive feature to map your smart home, which would be a useful, and novel thing if Apple did it. And not something that made me feel like I was just scanning my house to help Amazon.
Even during these actual announcements, Janko Rottgers published a piece about rumored projection-mapping hardware from a startup Amazon acquired that could leverage your mapped house for some neat projection stuff. However, it just made me think of College football ads projected on the walls, or suggestions to buy some piece of e-waste that’s a combination cable adapter and unregulated battery pack.
There’s never been any taste at Amazon, but there’s also never been this much thirstiness either. Treating all the customers like we’re Clockwork-Oranged to anything they put in front of us is really assuming a lot about what people will put up with.
And again, to contrast that with Apple, it’s not like Apple has radically improved their approach to smart home stuff over time. They built that whole life-size dollhouse set to show off connectivity in a home, and still no one really lives like that. There are no real control hubs, or first-party devices beyond expensive speakers with humidity sensors inside. Errors in the Home app are still frustratingly cryptic and managing the whole thing is nearly opaque.
However, does Apple really need to try harder or do better as Amazon self-sabotages? Will there ever be a turning point where the low-low prices aren’t of value? Will Panos Panay be able to shift any of this, or just make more attractive packaging for it? Will the people buying these devices on deep-discount and shoving them in the back of their closet, or pawning them off as burdensome gifts, change to something where there’s a desire for the new-ness over the deep-discount?
We might get a true smart hub from Apple in a couple years at this pace, and maybe there will be a generation of HomePods that isn’t just great sound, but a great Siri experience. God knows when we’ll ever get a great Siri experience, but that’s gotta happen some day, right? Maybe in another millennium?
That probably won’t ever replace Amazon because of the sky-high prices, but at least it can be a more viable alternative. And hopefully, there are people inside of Apple that recognize how encrusted with distraction these Amazon devices are and can lobby for less of that (services ads in the system, App Store search bullshit, etc.) in their own products.
For now, I’m not investing in any additional smart garbage. I waited for Matter, and nothing about that lived up to the sales pitch. Investing in anything else feels like throwing away more money. I’ll let people tinker away with their Raspberry Pi’s and Home Bridge, and all that crap. I’ll limp along with this rag-tag fleet of smart plugs, bulbs, thermometers, and cameras until we’re on the other side of whatever this current era is. Where I trust both intention and function.
On the most recent episode of Accidental Tech Podcast, Marco Arment described a recent ordeal with a moving truck. He recounted his list of grievances from using a rented box truck in New York, and mentioned the issue of roads that trucks aren’t allowed on, or won’t fit on. New York parkways (park on the drive way, yeah, yeah we all know the joke) apparently don’t allow trucks. The major mapping applications he mentions - Google Maps, Apple Maps, Waze - don’t have a feature to avoid roadways that don’t allow trucks, as they do for avoiding tolls.
He used two crappy apps that offered truck routing, but both were poor quality and he lamented that this isn’t just a feature of Apple Maps, Google Maps, or Waze.
Then John Siracusa joked about how this is because Apple’s based in California. “I don’t even know if they have toll roads in California. If you live in California please don’t tell me you don’t have toll roads, I don’t want to know.”
We have freeways which have free in the name because the West Coast was very pro-car after WWII. There are very few toll roads, because then they wouldn’t be free ways, and the unfortunate use of free influences a lot of our infrastructure. With only High Occupancy Vehicle lanes, and HOV stickers for some EVs to try and shape all that free, bumper to bumper traffic.
A quirk of this is the conversion of some of those HOV lanes into toll lanes, referred to in some places as High Occupancy Toll (HOT) lanes, express lanes, or managed lanes, all of which are great ways to dodge the word toll. There are different regional agencies that operate under the toll collection brand of FasTrak.
FasTrak lets everyone have their cake and eat it too. The freeway stays “free” but the HOV lane is usually expanded and repurposed to allow for vehicles with transponders to use the lane, and be charged a dynamic, demand-based fee. Depending on the time of day, and occupancy of the vehicle, it could be $0. Buses also use these lanes, and even have very bus-rider-hostile transportation centers shoved in the middle with their own entrances and exits that cars should not enter or exit.
However, this means there are now lanes of fast moving traffic on the inside of a freeway that need to enter and exit past all the people not using these lanes. Also some of these older freeways have a lot built up around them so they can’t always expand their lanes, which means some sections are elevated above the regular traffic. To enter, or exit, the lanes you must either be in a section of the freeway where all lanes are at the same elevation, and a break in the lanes allows for merging, or there must be a grade-separated flyover to allow direct entry and exit from the lanes.
All of this stuff is visible in Apple Maps, and Google Maps. You can even see these complex interchanges in 3D in Apple Maps, or tap right along the route in Google Street View. Marvel at the volume of data collected about, but not applied to, this problem.
That is the 110-105 interchange in Los Angeles, which is a good example of every kind of lane connection possible. It is a monstrosity, but we’re not debating whether or not it should exist, it’s there, and it’s been there since the 90s, with the HOV/busway converted to FasTrak in 2012.
Now it’s time to engage in every Californian’s favorite pastime, talking about roads that connect to other roads. If someone is heading south on the 110 to travel from Pasadena, East Los Angeles, or Downtown Los Angeles they can enter FasTrak lanes just south of Downtown. Those two left lanes then travel in several elevated sections above the rest of the 110 traffic without any flyovers, and only a couple spots where the elevation changes to allow for merging in or out. When the 110 meets the 105 the FasTrak lane splits in two, with one lane for the 105 heading toward LAX, and the other continuing to other overpasses to go east to Norwalk, or continue south towards San Pedro.
Let’s say, theoretically, that you’re by yourself, in a car with a FasTrak transponder heading from Downtown LA to LAX, which means taking the 110 S FasTrak lane. You enter the FasTrak lane, and receive no objections from Google. Then it tells you to exit right to get on the 105 W, but the signs say to stay in the leftmost lane for the 105 W and LAX. You dutifully ignore your navigation app, and it tries to urgently reroute you as you arc across every other possible lane of transportation layered in that spot. The apps assume that you have logically fallen hundreds of feet through the air to one of the many other roads, Blues Brothers style, and then fallen over, and over again.
It eventually stops freaking out once the flyover has fully merged with the 105 W, because you’re oriented with the 105 W, but that lane is a HOV 2+ lane, because the 105 is not FasTrak. There are signs that “FasTrak must exit” so your single-occupancy vehicle needs to do that immediately. That’s not a big deal, because you should be reading traffic signs, but it should be something a navigation app can remind you of, just as it reminds you of any other lane change.
If you are traveling in the reverse direction, eastbound from LAX and northbound to Downtown LA, then your single occupancy vehicle will be in the flow of regular traffic and you will be directed to the right lane to exit for the 110 N. Unless you ignore the navigation, read the signs, and enter the HOV 2+ lanes to use the flyover from the leftmost lane. Of course attempts will be made to reroute you as you get a lovely view of the Downtown skyline, and eventually settle into the FasTrak lane of the 110 N.
If you were in that same 105 E HOV 2+ lane, with two passengers, and no transponder, heading from LAX to Downtown LA you would need to exit the HOV lane and merge on to the 110 N at the right. If you stayed in that HOV 2+ left lane because you didn’t understand the signage, you would be dumped into the FasTrak lane that requires a transponder.
I’ll leave it at just those examples, because you get the idea. These particular lanes have been like this for over a decade. Other lanes like this exist elsewhere throughout the state, and more are being completed right this very minute.
Existing Metro ExpressLanes in Los Angeles County, and proposed expansion.
Apple and Google are totally clueless about these lanes, which is bizarre when you can see them represented in maps, satellite views, Street View — everything.
Waze, which is owned by Google, added a “HOV 2+” toggle when it shows you a suggested route with HOV 2+ lanes. This is a mixed bag, because they don’t differentiate between FasTrak and HOV 2+, so no information is presented about transponders, or other rules. Remember that a vehicle could be crossing into and out of various requirements along a route, and that HOV 2+ is an oversimplification that could lead to bad directions.
The lack of proper routing extends beyond the directions alone, because it means there’s no accurate representation of the flow of traffic. You’ll notice in some of the screenshots above that Traffic data is drawn on the elevated sections of the road, but that traffic data is from the part of the freeway below.
Generally, when my boyfriend and I take certain FasTrak lanes, we know it can shave at least 10 minutes off a route estimate, or more. However, if the traffic is extremely heavy the route may not be suggested at all, because there’s no understanding that we’ll be bypassing the flow of the stopped traffic. We can trick it sometimes by changing our start, end, or adding a middle position that puts us specifically on a route to see why it’s not recommended, but that still relies on our knowledge, and hunches about things. You’re also assuming the FasTrak lane is completely unaffected by the traffic, which is not always true. That’s not really what you want in the mapping applications from two of the most powerful companies in the world, who just so happen to be in the same state as you.
I know that from a user interface perspective anything added to the interface makes it less clean, and straight-forward to use, so there’s no simple suggestion of “just put toggles in for everything.” However, the reality of different kinds of traffic regulation — whether that’s occupancy, transponders, box trucks, or other forms of traffic control — isn’t going away. Our phones are integral to transportation, so maybe someday, in the far future, we’ll have ways to manage electronic toll collection through them, and then it could all become a part of the navigation process itself.
Both Apple and Google need to tackle this problem in a way that helps their navigation apps perform better, especially when Apple is working on whatever that misguided car thing is, and Google is working on being the infotainment backbone of the automotive industry.
At the very least, they can do better for The Californians.
I felt, well in advance of Tim Cook unveiling the Apple Vision Pro at WWDC, that it was nearly impossible it would be a product that appealed to me. Other people knew well in advance that they absolutely wanted whatever it was. It could have been an Apple-branded ViewMaster and they’d want it. I don’t seek to tell those enthusiastic people that they’re wrong, far from it, but I’ll explain why I don’t want anything on my face, and I feel like my explanation might be applicable to other people as well.
After working on stereoscopic “3D” movies for many years I know this well. We would sit at our desks, with our active shutter glasses, and work for hours. We would go to the small screening room, equipped with a projector and polarized glasses, and we would try to get final approval there because it was better than the active shutter glasses. It’s not fun to wear stuff for extended periods of time, even the uncomfortable active shutter glasses that are featherweight compared to the world-building power of the Vision Pro. Thus I was unable to envision anything I’d wear on my face when the rumors were circulating. It’s not for me, and I suspect it’s not for some other people either. There are ways to shave down the device here or there over time, or redistribute the weight — like the top strap only visible in one shot of the WWDC keynote. But it will always require pressing something to your face because that’s how it has to function. Even the electrostatic paper masks we’ve all used leave unpleasant creases on our face, or pinch in the wrong spot (and I gladly masked the fuck up out of necessity).
What I was absolutely enamored by though was visionOS. Not from the WWDC keynote presentation, which just made it seem like a movie computer interface, but from the WWDC developer videos released after. I highly recommend watching those regardless of your level of skepticism about the hardware. Functionally, it seems like such a natural and organic extension of interaction metaphors we’re already using, while at the same time being adapted to inputs in space. What was unclear in the Keynote was that your eyes are your “cursor,” which is natural because they are your focus. Your hands are at your side like you’d have them for a touchpad or mouse. The array of cameras and sensors monitoring your hands and eyes make this all possible.
It made me want to use visionOS … just not with a Vision Pro.
I know that might sound a little contradictory, and silly, but I’d rather sit at my two UHD monitors with a camera pointed at my face, and move windows around inside the confines of those monitors, than wear anything. With all the complaining people have done about not having touchscreen MacBooks, imagine not pressing on anything on a MacBook just to scroll a web page. Hell what if — not to go all Gene Munster — what if they shipped a HomePod/AppleTV that had a series of projectors and those hated passive glasses for people to use exclusively in dark rooms?
I mean, that’s not going to happen, but that’s where my mind went. Apple does apply their effort on one platform on to their other platforms, in some scenarios, so some cross pollination might be possible, but that really is wishcasting.
With the focus on building a headset empire, I guess I’ll return to critiquing that product and how Apple currently pitches using it.
2D and 2.5D windows arranged in space to do office work and web browsing.
Teleconferencing.
2D and stereoscopic theatrical experiences.
3D family photos
Immersive locations.
Notably absent was gaming. Everyone expected Hideo Kojima’s presence in Cupertino to be tied to this headset but it was for porting old games. At this point we should really all know better than to expect anything significant with gaming. It’s for the best, because Apple doesn’t believe in making game controllers. There still isn’t one for the Apple TV, and it doesn’t matter how many times Apple says you can bring your own controller to use, it’s not the same thing. Controllers are a shortcoming of competing VR headsets because you have to use them, but the benefit of having them is physical feedback, and nothing like that is present here. Functionally, every interaction people have seems to be at more than arm’s length.
Windows in virtual spaces are nothing new, and honestly I’m happy Apple didn’t try to do some bizarre 3D application interface. Emails should just look like emails, and spreadsheets should just look like spreadsheets.
That’s not to say that I have any idea why anyone would want to work on their email or spreadsheets with a headset on. That seems like something for ardent futurists and not practical for large groups — let alone office environments. Even virtualizing screen real estate doesn’t seem to be a tremendous boon if you’re going to get eyestrain from lower text resolution (WWDC videos note that body text weight should be increased to be legible in the headset if that helps visualize how the displays in the headset aren’t exactly like having multiple real monitors).
Safari seems like a better use case, because that is laid-back on the couch stuff. You’re shopping, or reading sub-par blogs like this one, and it’s more about consumption than work.
A poorly received part of the WWDC keynote was the teleconferencing story. Apple doesn’t want you to feel cut-off from people, which is why they have the creepy eyes on the outside, but making a full creepy avatar of a person to have calls with isn’t helping. That persona, as they call it, has more in common with a Sim than a human being. Not just in terms of performance (seriously look at that mouth move) but in terms of the qualities we expect in a video conference call.
99% of people are bad at video calls, but they’re still fundamentally people. We see their messy rooms, not particle-cloud voids. We see their cats, their dogs, their kids, what they’re drinking, what they’re wearing. For people that don’t want to participate in that we have this amazing technology where the camera doesn’t turn on.
It’s also pretty telling that Apple doesn’t offer up Continuity Camera so the people you’re on the call with can see you, with your Daft Punk headset and creepy eyes, because Apple considers that to be a real-world solution for talking to flesh and blood.
The real teleconferencing solution is that you just take off your headset.
Apple didn’t invent virtual movie theaters in VR either. They seem to have made it very nice though, by virtue of their displays being better than competitors. It doesn’t seem like it’s a great social experience though. I know that there’s SharePlay, but I mean social in terms of in your own home. This is designed for an audience of one. Which is a valid movie watching experience to have, some of the time, for some people.
What’s particularly interesting is an emphasis on stereoscopic media — which is almost entirely movies from that window of time when “3D” was being used as a way to charge more for tickets, and get people into theaters for experiences they couldn’t have with their HD TVs or projectors. Then the HD TVs and projectors started to build it in whether you wanted it or not.
Companies realized that it was very expensive to make these stereoscopic movies, so they tried to reduce labor costs, and the quality of the stereo movies notoriously went down. Most filmmakers had very little to do with stereo; it was an afterthought handled by someone else. A requirement of making the thing that had nothing to do with them.
This is notably why James Cameron’s most recent Avatar movie was used in demonstrations for the press: he spans that entire period, from the original Avatar until now, as an ardent proponent of stereoscopic movies.
So, as you might imagine, that makes it a little bit of a niche use case and people will mostly be watching good ol’ fashioned 2D on a really big virtual screen.
Also if I hear one more person say that Apple TV+ shows might all be “3D” now because of machine learning to generate depth I will jettison them from this planet.
This is a really interesting use case, just not with the headset. The most chilling moment in the whole keynote was when that guy took a photo of the kids with the headset. There are no “we just aren’t used to it yet” arguments I will accept, nor does the analogy to the VHS camcorders of yore work. This is an inhuman scenario and I’m perplexed that no one working on this presentation had a similar reaction.
What I will accept is some kind of volumetric capture coming to iPhones. The demonstration seemed to indicate that everything dithered into a stochastically pleasant point cloud as it got further away from the subject, so it doesn’t seem like it’s going to be stereoscopic capture of two images. Some combination of the depth data being captured, along with more than one camera, might get somewhere in the ballpark.
Why would people with iPhones take 3D photos if they don’t have a headset and aren’t planning on buying one? I would imagine there would be a shadowbox-like tray view of 3D, where the parallax in the image shifts as you move your iPhone, as if there’s depth. It’s one of the Apple Design team’s favorite iOS interface gags. Or perhaps just good ol’ fashioned wiggle-grams, where stereographic images toggle back and forth. There would be plenty of ways to execute it. It just seems likely they’ll do at least one of them, because people who own iPhones are the likely buyers of the headset, and giving them some material in their libraries that they can look at would be more compelling than starting them with nothing and setting them loose on children’s birthday parties.
The immersive location stuff is interesting, except it’s never depicted as an immersive location with motion. It all seems very QuickTimeVR, where you have a nodal view of a place. It works well as a backdrop to other interface activities, but you don’t seem to do much with the location itself. That’s fine, I happen to love QuickTimeVR. I downloaded the crappy Babylon 5 qtvr files from AOL back in the day, and I had Star Trek “Captain’s Chair” where you could see all the Star Trek bridges. Technology that we use today for selling homes on Redfin.
I don’t object to it, but it’s interesting how it’s just a “desktop background” of sorts.
I wholly reject this particular hardware, but I’m absolutely fascinated by the software and what it could mean when it’s applied in different contexts. I wonder what the developer story will ultimately wind up being, because adding another dimension doesn’t reduce labor costs, and it doesn’t reduce the cut Apple wants to take from developers who expend all this effort on a niche platform. Those sorts of financial things are beyond the scope of my analysis, but they are a very real issue. Meta, led by Mark Zuckerberg, has been having big financial problems because this whole metaverse thing isn’t working out for him. While what Apple is doing is different from what Meta is doing, it’s not so different when it comes to development costs for 3D experiences.
The best part of the Apple Vision Pro might very well be Mark Zuckerberg freaking out and announcing his Meta Quest 3 early, along with a Meta Quest 2 price cut, ahead of WWDC, only for no one to care. Kudos to Tim Cook for that.