The Chickens Have Come Home To Roost

Apple’s statement to John Gruber last week that they are delaying “More Personalized Siri” has certainly sparked some conversation, with Gruber himself going on to write a pretty scathing post about the whole debacle. People certainly took note. A key part of what John wrote was that he had believed the company’s claims, because it would be unlike Present Day Apple to promote something it couldn’t ship.

Readers of this blog, and my writing for Six Colors, might recall that I’ve been skeptical of Apple’s promises all along. I don’t get to take a victory lap for that skepticism. I’m certainly not cheering myself on for not getting useful software. Woo-hoo, look at me not getting the good stuff!

The announcement about “More Personalized Siri” fit perfectly with my expectations, which possibly makes me the person least rattled by this news. My expectations were so low not just because of my initial wariness, but because that wariness was confirmed by what Apple has actually been shipping.

Gruber details some of those features, and how there were demos, which meant there was some reality to them. I look at it from the other side: I evaluate the quality of what’s available to poke and prod at in public releases, to gauge not just those features but Apple’s ability to deliver on future ones.

Apple very rarely has the time to refine anything it ships. Version one of a thing tends to stick around for a long time with only extras bolted on, or omitted, because the people involved are simply too busy for a second pass. Because the bar for shipping quality software is so low, and the urgency to revise is nearly nonexistent, there was never any chance they’d meet expectations for the robust features Apple was promising.

Let’s review what Apple actually shipped as Apple Intelligence.

Text-Based Tools

The scandal that received the most attention before this one was the notification summary debacle. Apple tried to deflect criticism by hiding behind the beta label on Apple Intelligence. Jason Snell wrote:

Beta software contains an implicit promise that the developer will actively work to squash bugs and make the product better before it goes final. Adding a warning label in the interim is an easy band-aid, but it doesn’t address the underlying problem. Apple needs to do much more work here, and if it can’t, it needs to turn this feature off until it can release a version it can stand behind.

You’ll never believe it, but the only thing Apple could do was turn the feature off for News & Entertainment apps. There’s no way to refine this to produce the result Apple had promised.

In iOS 18.1, Apple added Priority Messages to Mail, which would often prioritize scams because it registered anything mentioning money or a date as a priority message. I saw someone complain about it again just today, in March, as a matter of fact.
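To illustrate why that goes wrong, here’s a deliberately naive caricature of the failure mode in Swift (my sketch of the behavior described above, not Apple’s actual code): scam emails almost always mention money and deadlines, so any classifier that treats “mentions money or a date” as the whole priority signal will promote exactly the messages you want buried.

import Foundation

// A deliberately naive caricature of the failure mode; not Apple's actual logic.
func looksPriority(_ body: String) -> Bool {
    // "Mentions money or a date" as the entire priority signal.
    let moneyPattern = #"\$\s?[\d,]+"#
    let datePattern = #"\b(today|tomorrow|\d{1,2}/\d{1,2})\b"#
    let mentionsMoney = body.range(of: moneyPattern, options: .regularExpression) != nil
    let mentionsDate = body.range(of: datePattern, options: [.regularExpression, .caseInsensitive]) != nil
    return mentionsMoney || mentionsDate
}

// A classic scam trips both signals; a real note from a friend trips neither.
print(looksPriority("Your account will be charged $499 today unless you act now!")) // true
print(looksPriority("Loved the lunch spot, let's catch up again soon."))            // false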

In 18.2 Apple added categories to Mail, which were not really about Apple Intelligence, but kinda sorta? Cumulatively, all the changes to Mail have been pretty bad, and none of it has been improved since. It is a feature that I have turned off. It only recently shipped on the Mac and iPad, and they aren’t any better off for it. But hey, the promise to ship it is fulfilled, amirite?

Then there are Writing Tools, which is something I never think about until I go to use a context menu on iOS to copy or translate text, and Writing Tools is always there. I’ve tried to use it to proofread my writing, but it just spits my input back out and doesn’t explain why it didn’t change anything. It’s also slow, the UI is weird, it’s buried in a context menu, and it’s only on my iPhone because my Mac can’t run Apple Intelligence. Apple doesn’t use Private Cloud Compute to run this, yet any person on earth can open a web browser and use an LLM to do the same thing Apple says you need a thousand-dollar phone for.
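For what it’s worth, here’s roughly what that browser alternative amounts to: a minimal sketch against OpenAI’s public chat-completions HTTP API (the endpoint and model name are the publicly documented ones; the API key is a placeholder), runnable from any machine, no thousand-dollar phone required.

import Foundation

// Minimal sketch: proofread text with a hosted LLM over plain HTTPS.
// Placeholder key; endpoint and payload shape follow OpenAI's public API docs.
let apiKey = "YOUR_API_KEY"
var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
request.httpMethod = "POST"
request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
request.setValue("application/json", forHTTPHeaderField: "Content-Type")

let payload: [String: Any] = [
    "model": "gpt-4o-mini",
    "messages": [
        ["role": "system",
         "content": "Proofread the user's text. Return the corrected text, then briefly explain each change."],
        ["role": "user",
         "content": "Their going to the the store tomorow."],
    ],
]
request.httpBody = try! JSONSerialization.data(withJSONObject: payload)

let task = URLSession.shared.dataTask(with: request) { data, _, _ in
    if let data, let body = String(data: data, encoding: .utf8) {
        print(body) // raw JSON; the corrected text is in choices[0].message.content
    }
}
task.resume()
RunLoop.main.run(until: Date().addingTimeInterval(30)) // keep the script alive for the reply

Note that the system prompt explicitly asks for an explanation of each change, which is the exact thing Writing Tools won’t give me.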

It’s ironic that the thing LLMs are best suited for (mushing up some words) is pegged to hardware, stuffed into a menu, and saddled with an awkward UI. But hey, that shipped!

Lastly, we have Swift autocompletion (which did ship) and Swift Assist (which did not). That it hasn’t shipped isn’t a huge, public-facing issue for Apple like the others, but it is another thing that’s damaging Apple’s relationship with developers. OpenAI shipped a ChatGPT integration with Xcode, which should be even more embarrassing. As someone who opens Xcode on occasion, and has taken stabs at writing Swift, the app is so byzantine and strange, and the language so bloated and overly decorated, that assistance isn’t the worst idea in the world.
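To be concrete about “overly decorated,” here’s a contrived but valid toy example of how quickly the attributes and property wrappers pile up in even trivial SwiftUI code; this is exactly the sort of ceremony that completion and assist tooling is supposed to absorb.

import SwiftUI

// A contrived, minimal SwiftUI counter. Even at this size it needs a property
// wrapper (@State), a key-path environment lookup (@Environment), and an
// opaque result type (some View) before it does anything at all.
struct CounterView: View {
    @State private var count = 0
    @Environment(\.colorScheme) private var colorScheme

    var body: some View {
        Button("Count: \(count)") { count += 1 }
            .padding()
            .foregroundStyle(colorScheme == .dark ? Color.white : Color.black)
    }
}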

Image-Based Tools

Image Playground has received a lot of negative press, and deservedly so. The images it produces are quite bad, and because the interface prioritizes selecting photos of you, or someone you know, you get the added benefit of insulting yourself, your loved ones, and your friends. In fairness to Apple, this feature produces results very much like what they demoed, because those images were just as wince-inducing. Remember Super Mom? There have been no improvements to the output in any of the styles Apple shipped. You can have “animation,” which is a medium, not a style, or “illustration,” which always makes me look like an angry Willem Dafoe.

Joe 'eating tacos' in both Image Playground styles. The animated one seems alarmingly manic and is holding mangled Old-El-Paso-taco-like objects. The illustration-style one is menacing and angry.
WHO WANTS SOME TACOS?

The same can be said of Image Wand, which has a “sketch” style absent from Image Playground. I don’t know why you can’t use sketch in Image Playground, and I don’t know why you can’t draw in Image Playground itself. Whatever! It shipped! It does things and went to customers! Can’t wait to see what you do with it, blah blah blah!

In the grand scheme of things, Image Playground and Image Wand are actually pretty insignificant, because they are so deeply uncool that no one wants to use them. If you see someone post an image that looks like it came from Image Playground, you will judge them for it.

Genmoji, on the other hand, is what Apple has decided to lean heavily on. They can’t lean on Siri, notifications, or even the dorky awkwardness of Writing Tools. It also hasn’t improved in any noticeable way, and you can still get mangled, Cronenberg-esque images out of it, but that doesn’t matter to Apple. It shipped; put the dancing hippo on the billboard!

A photo of the Chelsea Apple Store in Manhattan with a huge billboard showcasing a dancing hippo Genmoji
Finally, something Apple shipped and can brag about.

The most sound feature is Clean Up. It’s a decent effort from Apple; I evaluated it in a video for Six Colors. It needs more work, but it’s acceptable. As far as I know it hasn’t been improved since it shipped, or if it has, it’s been in a way too subtle for me to notice. It will still smear edges, produce polygonal hashes, etc. It’s good enough, and it probably won’t be touched again for years.

Lastly, we have Visual Intelligence, which doesn’t generate images but will tell you about one. This is so poorly conceived that I don’t even understand how the concept pitch for Visual Intelligence was green-lit. It relies on the Camera Control (you know, that mini button/trackpad that does 48 other things you accidentally bump; what’s one more overloaded function between friends?), unless you have an older phone or an iPhone 16e, in which case it uses the Action Button.

It doesn’t make it obvious what will happen when you hit the “shutter” button, which captures an image for processing rather than taking a picture. If there’s text, it can summarize it, but the summarize button only appears after you tap the “shutter”; nothing indicates that ability beforehand.

The same is true for translation, which is still a better experience in Apple’s own Translate app, because that app shows a live text-translation overlay as you move the camera.

Without hitting the “shutter,” it shows you buttons for Ask (which will only ever ask ChatGPT, and requires agreeing to their privacy permissions) and Search (which will only ever do a Google image search). Both are inferior to using those products independently, and neither provides any of the privacy Apple has been promising about its products.

Craig Federighi has leaned heavily on Private Cloud Compute in the marketing since WWDC, yet no one even knows what it does on a practical, applied level. It doesn’t run private image-search models, or private instances of ChatGPT.

Apple did add plant and animal recognition, which was something the Photos app could do but Visual Intelligence could not. However, I have been unable to get it to appear at all, and it also killed my iPhone Photos app’s ability to do plant and animal recognition; I don’t know why.

I can take a photo on my iPhone and sync it to my Mac, which doesn’t have Apple Intelligence at all, and it will show the plant-recognition leaf symbol as it has done for years. So it’s not like I’m taking photos of unrecognizable plants.

That takes us to another thing about Visual Intelligence: you can’t run it on a photo you already took. Unlike a Google image search, or similar tools, it will only accept your fake-shutter non-photos as input. Again, this is worse than existing products.

Promises, Promises

So, just from that little run-through, you can hopefully see what I see. The problem isn’t just “More Personalized Siri” not shipping; the problem is what did ship, and what that portends for all future releases. Software quality is out the window, so for “More Personalized Siri” to not even meet the low bar of something like Visual Intelligence…

The thing about “AI” (chatbots) is that it really did take Apple by surprise, but chatbots are merely a tool. The most logical place to use that tool is in Siri, the thing you already chat with, but people are only clamoring for that because nothing Apple has done to improve Siri has been sufficient. Let’s go back to Apple’s statement, delivered through Jacqueline Roy to Gruber, where they left him holding their bag of empty promises:

Siri helps our users find what they need and get things done quickly, and in just the past six months, we’ve made Siri more conversational, introduced new features like type to Siri and product knowledge, and added an integration with ChatGPT.

I said on Mastodon after quoting that part:

I don’t want to minimize this effort but these have not been transformative, and I frequently see these criticized - like the ChatGPT integration getting things wrong that the ChatGPT app gets right (for those who care about ChatGPT). This hasn’t been six months of success.

What they did ship for Siri was as damning as what they didn’t. None of those things matter, or do anything significant. The only significant thing they did with Siri in the past year was adding the new visual language. I believe every Apple pundit under the sun agrees that was a huge mistake, because it signaled change where there was no meaningful change.

Last week, I asked my boyfriend what time he wanted to eat lunch. My wrist must have been elevated, and apparently Raise to Speak was enabled, even though I don’t remember turning that back on. Siri responded on my Watch with “I don’t eat or drink. But I always have an appetite for a good conversation.”

That’s conversational Siri. That’s the result of years and years of effort on Siri. That’s the writers’ rooms generating canned responses to questions. That’s years and years of shipping updates to Siri. That’s the full power of a fully operational Cupertino brought to bear on misunderstanding what was happening and doing what it shouldn’t.

There was never any world in which “More Personalized Siri” was going to ship. Even if they had a demo, I wouldn’t have any faith it would survive real-world use, much like I don’t have any faith in Amazon’s Alexa+, which was very carefully announced and demoed.

I know that Apple has made many mistakes in the past (the one this is most similar to is Apple Maps, not AirPower), and it’s true that they have runway to continue to work on the execution.

When third-party solutions fall short on Apple’s platforms, it’s not a problem. When first-party solutions fall short on Apple’s platforms, it is a very big problem. When Copilot barfs on some code, oh well, you tweak it and run it again; it’s not like it’s built into the platform. Copilot duplicating import statements also isn’t mission-critical, unlike what time your appointment is, or when your mother’s flight arrives. There is a difference between working on a task and living your life 24/7 with an assistant.

Threats and Partners

I don’t think that OpenAI’s ChatGPT and Microsoft’s Copilot are a threat to Apple’s revenue. They’re going after reducing labor in the workplace, which means shrinking the workforce so they can collect money that would have gone to employee salaries. Their consumer angle exists only to reinforce their lead in the workplace, with people asking to use those tools, which will cost more and more over time.

Neither of them would do well making a smartphone (Microsoft, especially, has learned this lesson). They can, however, position themselves to sit in the place Google occupies for providing services to consumers. This is why ChatGPT is integrated with Siri.

Meta, likewise, is not going to have another fiasco making a phone. They’re angling for synthetic “content” and synthetic accounts that people follow, which can be tailored for engagement and advertising purposes. That will also be very lucrative for them without having to ship a phone.

Google’s Gemini is actually the one place where Apple needs to be concerned, because Google is putting Gemini into phones, which was part of the reason it was bizarre to hear Apple executives openly discuss Gemini as a possible candidate for their Siri integration.

That takes us full circle to Google supplying the Maps data in iOS: a dependency that could be leveraged to get information about Apple customers, and to fortify Android as a competitor that offered the same mapping abilities for less than Apple.

How much of this is Apple hoping to shift its lucrative search deal with Google to a Siri-integration deal, where it can capture sales revenue from compromised customer privacy? How much does Apple want to make an App Store for Siri integrations, where it can one day financially benefit from being the middleman?

I’d bet there’s something to that, but it’s probably mostly that they know they could ship it, and it would look like Apple was doing visible things with Siri that were going out the door.

While no one is really going to take Apple at their word for any Apple Intelligence features they announce at WWDC 2025, they’ll be able to believe in work from third parties that already have products. The cupboards are pretty bare for anything Apple can ship, and they clearly can’t go back and improve anything.

Maybe Apple is hoping that their much-feared visual refresh of each OS will be enticing, just like the Siri glow was dangled in front of customers without Siri improvements to back it up. This turd is still a turd, but now it has an emphasis on transparency and depth.

What I’m mostly anticipating is a continued drop in quality and standards, as evidenced by what’s shipped this year. On multiple occasions people have reported Apple Intelligence being re-enabled for them (even in macOS 15.3.2), and there was that time Image Playground was advertised in Settings.

What a transformative year, where everything got just a little worse in Apple Land. Where, if you complained about Apple Intelligence online, some drive-by commenter was as likely to tell you it would get better as to tell you to turn it off. Nothing’s getting better, and turning it off won’t make a problem of this magnitude go away.

2025-03-13 14:55:00

Category: text