Obviously, this is a very entertaining podcast episode. I really enjoyed listening to it as a fun way to traverse the history of the Mac without being a dry encyclopedia entry. For fun, I’ve “played along” with my own answers below.
My mom’s first Mac was a used Mac Plus she got from my grandfather, with an HD20 external SCSI hard drive that had, you guessed it, 20 MB of disk space. It ran System 6.0.8, installed via 1 million floppy disks. I was a kid, so one of the few activities available to me was to mess something up and reinstall the OS.
It was my first Mac, but not the first one she bought or the first one I bought. It was underpowered for her work, so she bought a Macintosh Quadra (audience leans forward) 605 (audience is disappointed). The Quadra 605 was her computer, and the Mac Plus became our family computer. We would work on school papers in ClarisWorks 2.0 (or sometimes I would just spend time making various gradients because the black and white dithering was fascinating). Then the Quadra 605 became the family computer, and it limped along with a SupraExpress 33.6 modem, and AOL 2.5, into the internet age before being discarded for a Compaq Presario mini-tower thingy, and an old 486DX that my grandfather gave us. I bought an old Quadra 700 on eBay at one point thinking that I would… do something with it, and then a Performa 6-something because it was my first Mac with a CD-ROM drive, but it was ultimately a useless waste of money. Regrettably, I told my mom to donate them or something when she moved, which was incredibly short-sighted of me given the prices Quadras go for on eBay these days.
My real first Mac, the one I bought with my own adult money to do real stuff, was a brand-new MacBook Pro 15-inch in July of 2007, and I loved it. I still have that one, and fired it up this summer to write about Shake.
This is a tough category because I had far fewer Macs in my possession than any of the panelists on Upgrade. There were Macs that I wanted a lot — especially in the late 90s. The one that I really wanted was that very first G4 tower. The styling introduced with the G3 tower was great, and the first G4 was a pretty refined take on that. I thought that the later G4s got progressively sillier in appearance, even though they were more powerful, and the cheese grater G5s, and later Mac Pros were a little too serious compared to the flowing lines and materials of the G4. We even had a couple of those G4s in the yearbook computer lab, so I knew first-hand that they were pretty great, and thus, my favorite.
For the best Mac, I would have to go with the brand-new M3 MacBook Pro 16”, which can just do everything. If price were no object, I would have one right now. It’s tough to pick between favorite and best here, but I’ll stick with best.
This is where I was sniped by Gruber with Safari, and Dan with Terminal. I won’t cop out and pick one favorite, and one best, like I did above. Instead, I’m picking QuickTime 7 Pro. No, not QuickTime non-Pro, or QuickTime X, but genuine QuickTime 7 Pro. Accept no substitutes (because Apple never made one!)
It was a transformative media framework, application, editor — everything. QuickTime X never matched it, even though it has some nice stuff too, and a lot less brushed metal.
AirPort Extreme! The flat fifth generation one (not the ugly, tall one) was a transformative product for me. Every piece of Wi-Fi equipment I’ve ever owned before, or since, has been a flaky piece of shit. It was so friendly, and easy to use. I could just plug in my printer and leave it very far away from me, where it belonged.
Obviously, I would have picked the butterfly keyboard, as Dan did, but I’ll go with my second choice: Apple buying and killing Shake. That is some very niche shame, especially compared to the keyboard, but it was software that I used for my actual job. They didn’t know what to do with it, and botched that, permanently changing the software landscape for the VFX industry. (Jim Gaffigan Voice: Wow, he really talks a lot about Shake. Is he OK?)
John Siracusa has a good post on his blog summarizing some of the concerns regarding generative AI. There are a lot of other kinds of “AI” in the news these days, but this is specifically about material wholly generated from text-based prompts, using computer models trained on other images.
This question is at the emotional, ethical (and possibly legal) heart of the generative AI debate. I’m reminded of the well-known web comic in which one person hands something to another and says, “I made this.” The recipient accepts the item, saying “You made this?” The recipient then holds the item silently for a moment while the person who gave them the item departs. In the final frame of the comic, the recipient stands alone holding the item and says, “I made this.”
This comic resonates with people for many reasons. To me, the key is the second frame in which the recipient holds the item alone. It’s in that moment that possession of the item convinces the person that they own it. After all, they’re holding it. It’s theirs! And if they own it, and no one else is around, then they must have created it!
The act of creation is a tricky thing to pin down with people, because someone may not realize the ways in which they were influenced by something they saw before. It ought to, theoretically, be easier to pin down sources (the training data) a model uses, but the people making the popular models right now would really prefer if you didn’t.
If you feed a prompt into a model, post the result as your own, and get a cease-and-desist letter in the mail, then you suddenly flip-flop on who’s responsible for the image. Instead of you creating AI art with your carefully crafted prompt, the infringing work is the fault of an opaque data set you couldn’t possibly be held liable for.
It’s also not just copyright you have to worry about, but using bits and pieces of images from a dataset that includes CSAM, or other morally repugnant things you would not consciously include in your work. There are people whose job is to look at those things, for $2.20 an hour.
Do all the people that used Stable Diffusion during the time it had a tainted dataset need to post a disclaimer on the image they’re claiming ownership of? No, no one’s going to do that, because that’s not their fault.
The thing that really raises my alarms is how companies are training people to use generative AI to fill their social networks, publications, and sites with untraceable generative AI images. In fact, most of this post was originally part of a draft titled ‘Gray Goo’ that I started in December, when companies were bragging about their end-of-year AI progress, such as this boastful post from Meta.
In science fiction, the concept of gray goo exists to describe a hypothetical scenario in which self-replicating machinery outcompetes and replaces organic life. Generative AI models are not currently self-replicating, but we are feeding generative AI output into other algorithms, like image search results, or social network feeds.
If you look at the Explore tab on Instagram you have an algorithmic feed of thumbnails containing images. If you tap through into those you might notice that a few of the accounts posting the images are aggregators. They say things like “DM for credit or removal” as if the person who posted it didn’t know how they got their hands on the image in question.
These are usually topic based, like certain dog or cat breeds, desk setups, architecture, film photography, or whatever. The aggregators can also have links in their profile to shirts, or other merchandise or services, that they are selling to collect money for their hard work in reposting other people’s stuff.
Spend some time looking around and you’ll also see some Midjourney or Dall-E-looking images that may or may not have #AI on the post, or have already been laundered through one of those aggregators and contain no disclaimer or sourcing. (A word of warning, the #AI hashtag on Instagram is seemingly unmoderated and contains some anatomically questionable material. (No, I’m not just talking about fingers.))
The ultimate goal of these platforms is to have stuff: not just any stuff, but filler between the ads that they serve. People can see and mimic popular posts, and memes, on social platforms, but that still requires labor. Reduce the labor and you can increase the amount of new filler. That includes variations on filler to fit new formats, like image tools that make horizontal images vertical to fill Stories, or that add animation to increase the amount of video filler, which means more video ads.
This also increases the number of people that wouldn’t feel guilty for uploading some untraceable generated stuff, because they are not knowingly copying anyone, like those aggregators are. They get the serotonin without the fuss, or the guilt.
So, it’s not just the “I made this” of the user and their generated image, but the “I made this” of the companies who want to sell ads against the endless stream of “user generated” images that they have safe harbor protections for. After all, if there is a problem with an image, there are existing moderation tools (that mostly don’t work, but whatever!) to deal with the things individual humans actively file objections to.
Consider why companies want you to use generative AI tools. It’s not altruism about democratizing expensive software.
You can’t really put the genie back in the bottle here, but you can regulate the fuck out of that genie’s datasets, which will make it much less attractive to exploit people. Back to John:
In its current state, generative AI breaks the value chain between creators and consumers. We don’t have to reconnect it in exactly the same way it was connected before, but we also can’t just leave it dangling. The historical practice of conferring ownership based on the act of creation still seems sound, but that means we must be able to unambiguously identify that act.
There is also the possibility that people will lose interest in ubiquitous prompt-based tools because the work produced may be less special, or unique. That might also lead to a shift away from prompts that generate the whole enchilada, toward tools that assist with editing and image manipulation in a more human-centric way.
Art is often a reaction to current trends in art. Everyone is painting realistic stuff? Let’s paint surrealistic stuff! Photography is clean, and crisp, and digital? Here’s the grainiest film you’ve ever seen in your life!
If the datasets can’t absorb current trends, or changing tastes, then they can’t easily be used for sole authorship of unique works that ultimately need to appeal to humans, not models.
I was instantly hooked from the first paragraph and had trouble putting the book down to go to sleep. The protagonist, Murderbot, feels like a character written specifically to appeal to me. Its point of view is drenched in sarcasm, and it’s bitter but completely resigned to doing its job — no matter how much it says it doesn’t want to.
“Can I ask you a question?” I never know how to answer this. Should I go with my first impulse, which is always “no” or just give in to the inevitable?
I know, how could something like that appeal to me?
The first novella is a complete story, so there’s no worry about a cliffhanger, or things being held back for the next book. I was a little reluctant to commit to a series that was going to dole out stuff over many books, but that’s not what this is; it’s much more episodic. Which makes sense, because there will be a TV series based on the books.
The novellas do give way to novels, but they’re still just as interesting, and don’t slow their pace to stretch out and fill the format.
Anyway, I read the full series in nine days, which isn’t a record for a voracious reader, but is a record for me. Then I wept, for I had no more Murderbot books to read.
When Apple announced that swiping up in watchOS would bring up a new Smart Stack interface instead of the Control Center, I shrugged. I really didn’t think it would be a big deal. People complained about it in the beta, and it just seemed like the kind of thing that requires a certain period to adjust to and then we’d forget it used to be different. Boy was I ever wrong about that.
The hold-down-to-change-watch-faces thing lasted from September 10th to November 15th, so clearly that one didn’t go over well. Moving the Control Center to the side button persisted, without any preference or option to revert. Maybe fewer people were pissed off about it? Maybe management just believed in the usefulness of Smart Stack so much that they were unwilling to give up as easily?
However, last night, when I was on a plane flying to California from Florida, everything went into Sleep mode because the time was still EDT, not the PDT we were barreling toward. My Face-ID-With-Mask-With-Watch-Assist stopped working, and the phone told me that I would need to take the Watch out of Sleep mode. I did the thing that I have done for years, and years, and swiped up to … oh right, this is the Smart Stack.
Then there’s the delay to dismiss it before hitting the button, which requires the application of force, not just a swipe, and is only really comfortable to do with thumb and index finger.
I was agitated by something that was arbitrarily made worse. I didn’t feel like a fool for doing the wrong thing, or blame myself in any way, because even if I pushed the button first I still would have been irked.
Perhaps, what’s so frustrating to me is that the Smart Stack is so useless to me, specifically. I can’t speak for everyone else, but all of the times I’ve accidentally opened this treasure trove of irrelevance it’s displayed the day of the week, the month, the date, the goddamn time, and then a card that’s either a snippet of my almost entirely empty calendar, or the weather. All of this information (except my mostly empty calendar) is better laid out in my Modular watch face, using complications.
I can add and remove the widgets/cards from the Smart Stack, just like the significantly more useful Smart Stack in iOS, but I can’t replace the useless date and time stuff. Another thing that makes its iOS counterpart more useful is that it doesn’t reshuffle the cards, so I can remember which way to swipe on an iOS Smart Stack to get to other widgets.
Most importantly, the widgets in the Smart Stack on my iOS Home Screen display ambient information the same way Watch complications do. It’s in the interface I’m glancing at, not some other destination.
Naturally, this means I have no reason to open the thing. Ever. Under any circumstances. And yet, I somehow get both the swipe up, and the upwards swipe on the Digital Crown to get to it? It deserves two special gestures? Is it that beloved and adored?
I’m sure some people do find this useful if they prefer to use one of the watch faces with fewer complications, or smaller complications. It’s a computer watch but if you want it to feel like a Swatch Watch, go for it. The stuff in the Smart Stack will always be reorganized in some baffling way when you want to get to it, but you do you.
Having said that, give me the swipe back.
We did it for Natural Scrolling, we can do it for this.
Or hell, go ahead and make us sorry we wished for it back by making a Smart Stack widget that opens the Control Center. Make single serving Smart Stack widgets for Wi-Fi, Airplane Mode, Sleep — really make us sorry we complained!
I just don’t want to need to do something, and either accidentally swipe, or remember to press the button. Even a successful interaction is as annoying as a failure.
It’s a bummer, but what are you really going to do about it?
I’m not on Amazon’s side here at all, but realistically what are you going to do about it?
Nothing!
I know you. I am you. We are one.
We’ve put Amazon in a position where it is integral to how most of us live our lives. Amazon Prime shipping can’t be beat, and there are all the other ancillary benefits.
Prime Video is one of those ancillary benefits. It was a nice-to-have, and it still is. The only move you have to protest Prime Video’s inclusion of ads, is to cancel Prime (you won’t do that) or to stop using Prime Video “to send a message” (they don’t care).
No, this is not merely because The Rings of Power is expensive, or because Citadel flopped. Those shows never had the burden of needing to be profitable because that’s not how Amazon earns money. They didn’t even have to worry about subscriber growth because their shows were never driving subscriptions. Amazon is a retailer.
People are still completely oblivious to why any of the subscription fees are going up. They were too low. They were financed by investors that prioritized growth, when investment money was basically free.
Then investors wanted profit, not growth — The Netflix Correction — now it’s about generating money. So Prime Video, which was run as some “do whatever” business that basically amounted to brand advertising, and did weird stuff like buy MGM for way more than The Rings of Power and Citadel cost, now has to turn a profit.
Ads, ads as far as the eye can see.
The interfaces are already stuffed with them, which was the point of my blog post back in October. Janko Roettgers interviewed Amazon’s director of Fire TV advertising, monetization and engagement, Charlotte Maines, and the ads will basically continue until Amazon sees some kind of financial penalty. For Prime Video, since this is all overhead being traded for ad-supported revenue, that would have to be a big, big shift. Mass cancellations. Dogs and cats living together.
There won’t be mass cancellations of Prime over ads in Prime Video because it is too integral to your life. You can’t shame Amazon into anything.
Also, people seem to have this opinion that other companies don’t have ads, so Amazon is a second-rate streamer (whatever that means), but they all have ad-supported tiers, except Apple TV+, and that’s just a matter of time. Apple is unlikely to switch people enrolled in Apple TV+ to the ad-tier, instead opting to add the ad-supported tier as a new option, but when they do they’ll very likely raise the rate on the ad-free tier, like many other services have done.
Two different approaches are playing out as streamers add advertisers: opt-in and opt-out. Netflix is in the former camp, creating a separate, cheaper tier that’s different from its core subscriber base. The goal isn’t to move its foundational base to an ad-supported tier but rather to continue scaling by bringing in more price-conscious and less invested customers through a cheaper product. This is a strategy focused on customer acquisition.
Amazon has all the leverage here. Your option is to give them the same, or more money, since you realistically won’t cancel. They don’t have to entice you to a lower tier, or drop prices, or raise Prime overall.
Sure, over time, these things can have a cumulative effect that might cause cancellations, but Prime can course correct. To that end, all of these fees are easier to adjust on the fly than the overall Prime rate (like when Amazon Fresh increased their “free delivery” to $150 in February of 2023 and dropped it back to $100 in October of 2023 because the hike seemingly didn’t work).
Remember that Prime is the ultimate bundle, and that not every other company is in this position. Increasing the cost of the overall bundle is going to cause actual cancellations; increasing the cost of the unprofitable parts of the bundle is an effective strategy.
Thanks to how they account for things, we’ll never be able to make it unprofitable for them, because it was always unprofitable. There’s nowhere to go but up, to ads, and profit.
Matt Birchler wrote a nice blog post, and made a YouTube video, about his favorite Mac tips. Honestly, it feels like “I know the Mac,” and then you check out something like this and realize that there’s some overlap, but there’s also stuff you did not know.
In the case of Matt’s tips, it was Command+Option+I, which opens a Get Info window that changes based on your selection, including a selection of multiple items. This is like the info pane in the Finder window I usually use, but so much of that window is taken up by image previews that I usually have to scroll down to see other file info.
A couple of weeks ago, Merlin Mann posted on Mastodon (and talked on RecDiffs) about the long list of keyboard shortcuts that are available to Mac users. What particularly piqued my interest was the keyboard shortcut to get to the Help menu’s Search field, Command+Shift+?.
I periodically use that Search field under the menu to find functions in an application when I can’t remember where they are, or if the app can even do that thing. I think most people assume it searches help documents. I absolutely didn’t know about the keyboard shortcut though because there’s no shortcut listed next to that Search box. It makes it into a kind of Spotlight/Launchbar/Alfred thing where you can, without leaving the keyboard, just hit the shortcut and type whatever command you want to do in the app and hit enter.
We’ve all got these things that we each accumulate into our mushy, little monkey-brains, or even habits that we don’t perceive to be “tricks” or “hacks”. There’s huge overlap in that knowledge too, where we might think someone is just cynically posting about “lifehacks” to score some clicks, but more often than not, even a cynical post might have something in it. We just don’t know what other people don’t know. A bubbling froth of Venn diagrams.
There’s also knowledge that gets lost over time, and new ways of doing things that veteran Mac users don’t know. Jason Snell recently wrote a very good post about Mac defaults being enough for most people. It’s great to rethink how much stuff we’ve accumulated might not be entirely necessary.
Robb Knight brought Hemispheric Views’ Duel of the Defaults to my attention, and the attention of a lot of other people. He’s compiled a running list of peoples’ default apps for things. They’re not tips, exactly, but what apps to use is kind of a tip.
I have no fancy Finder tips. I just have strong preferences for how I use it. Command+J:
Always open in column view.
Browse in column view.
Group by Date Modified
Sort by Date Modified.
Text Size 12.
Show icons.
Show icon previews.
Show preview column.
I’ve used Macs since my mom’s Mac Plus running 6.0.8 so I know all about the classic way to view things in a Finder window, but I frequently find that I need to do something with the hierarchy. That means I also ‘Show Pathbar’ at the bottom, and ‘Show Sidebar’ to get at locations quickly.
Another thing I occasionally need to do is get a path from the Finder into Terminal. All you have to do is drag a file or folder from the Finder window to the Terminal window and it’ll paste it right there. For the reverse, you type open . and then a Finder window will appear with the directory you were in in the terminal open so you can do things that are easier to do from the Finder, or easier to do from the Terminal.
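The Finder-to-Terminal round trip described above can be sketched as a few commands (dragging a file onto a Terminal window types its escaped path for you; going the other way is the `open` command):

```shell
# Open the directory you're in (in the Terminal) as a Finder window.
open .

# Open a specific folder in the Finder without cd-ing there first.
open ~/Documents

# Reveal (select) a file in a Finder window instead of launching it.
open -R somefile.txt
```

These only run on macOS, of course, since `open` is a macOS command.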
Another thing I do is under Finder -> Settings -> Advanced: Show all filename extensions. It’s not as neat and clean as hiding them, but too often I find that I want to just know if something is a png or a jpeg without having to select and interrogate each individual item. I wish there was a way to hide the extension for apps only, because there are plenty of context clues about what’s an app. You might have also done that thing with a text file where you typed an extension, thinking that you changed it, only to realize you now have a file called “post.markdown.txt”.
In those same settings change “When performing a search:” pulldown to “Search the Current Folder”. Usually I know I want to find something in Dropbox, or in a directory with all my blog posts, or podcasts, and I don’t want to search the entire Mac, slowly, and get a lot of noise in my results.
In Finder -> Settings -> Tags I recommend turning off any tags you don’t use (that’s all tags, for me) because they clutter up context menus and the Sidebar.
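If you prefer to set this stuff up from the Terminal, the same Finder preferences are exposed as `defaults` keys. A sketch (these keys are long-standing but not officially documented, so treat them as an assumption, and relaunch the Finder afterward):

```shell
# Always show filename extensions (Finder -> Settings -> Advanced).
defaults write NSGlobalDomain AppleShowAllExtensions -bool true

# Search the current folder by default ("SCcf"); "SCev" is the whole Mac.
defaults write com.apple.finder FXDefaultSearchScope -string "SCcf"

# Prefer column view ("clmv") for new Finder windows.
defaults write com.apple.finder FXPreferredViewStyle -string "clmv"

# Relaunch the Finder so the changes take effect.
killall Finder
```

macOS-only, obviously, and worth double-checking against your OS version before pasting.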
My biggest tip for Safari is to customize your toolbar by right-clicking on it and rearranging that mess. While I like the Sidebar in the Finder, I haaaaaaate the Sidebar in Safari and prefer to use the Favorites Bar. For some reason, even though the Favorites Bar is attached to the same user interface elements you can edit with Customize Toolbar, the option to enable it lives in the View menu.
An option that really helps me everywhere in the OS, but especially Safari is an Accessibility setting. Settings -> Accessibility -> Display. Then turn on “Show toolbar button shapes” so you can, you know, SEE WHAT IS AND IS NOT A BUTTON. Big help in Safari’s toolbar, no matter how you choose to customize it.
Another setting particularly useful in Safari is in Settings -> Appearance -> Show scroll bars. Set it to either ‘Automatically based on mouse or trackpad’ or ‘Always’. For some reason I find it really distracting when the scroll bar element appears and disappears, especially if I’m reading something kind of long, or going back and forth in a reference document. Maybe I’m just old.
A new feature that I’ve really been taking advantage of is Add to Dock. I wrote about it for Six Colors, but the short version is that it makes a little ‘app’ thing that runs a lightweight, sandboxed instance of Safari. The ‘app’ doesn’t even need to live in your Dock once you make it. It’s worked well for certain tasks that I don’t want to bury in tabs, work better at a different screen resolution than my main Safari window, or want to keep separate cookies and profiles in. Experiment with it — it’s fun!
I’m pretty disappointed in Apple Music (as an app, and as a service), so I wanted to try Spotify, without downloading Spotify’s app. Works great with Add to Dock.
The only keyboard shortcuts I use frequently in Safari are Command++ and Command+- to increase the ‘zoom’ of a web page. You’d be surprised how many other apps also accept those keyboard shortcuts.
I love this feature of macOS and have used it for years. It’s in my muscle memory. The lower right corner brings up Mission Control and the lower left corner puts my Mac to sleep. I use these two things multiple times a day, and they never break, or throw errors. Its utter dependability is why it’s a natural gesture.
I miss After Dark’s flying toasters as much as the next person, and I do like all the fancy screen savers that we’ve had over the years, but my current bar is set at a screen saver that won’t spin up my MacBook Pro’s fans. Screen Saver -> Photos -> Options -> Colors. Then I know my Mac isn’t asleep, but also isn’t taking off into low Earth orbit.
I love this app. I use .00001% of its features, but the features I don’t use never get in my way. It’s for notes, snippets, drafts, grocery lists because of TaskPaper stuff, just anything and everything. I don’t use it for writing longer form pieces, but they may start there.
This is my favorite text editor to write in because I can’t fuck around in it. It’s from the era of “distraction free” markdown text editors with iOS and Dropbox support. Kids these days may not appreciate that little sliver of development history, but I’m grateful that Byword keeps chugging along. It has neither bells, nor whistles, and it’s perfect. Occasionally, I will change the monospace font I use in the app, but I can’t waste my time on themes, or complicated outline structures. A nice feature is that it handles MultiMarkdown YAML front matter (which means nothing to you, but it’s how I write for my blog), with Title: being the first line, and whatever I enter as the title is what it offers to save the file as. It’s just a nice feature.
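For the curious, a MultiMarkdown-style metadata block is just plain key: value lines at the top of the file, before the first blank line. A hypothetical example (the filename, title, and date here are all made up):

```shell
# Write a tiny post with a MultiMarkdown-style metadata block up top.
cat > example-post.md <<'EOF'
Title: A Hypothetical Post
Date: 2024-01-01

The body starts after the first blank line, in plain Markdown.
EOF

# The Title: line is the first line of the file.
head -n 1 example-post.md
```

Byword takes whatever follows Title: and offers it as the filename when you save, which is the behavior described above.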
This is my preferred code editor. I used to use TextWrangler, R.I.P., but the free version of BBEdit suffices for most of what I need to do. I’m not a developer, I just write code that breaks for free. It is probably the app where I have to use the Help -> Search function the most.
I would be lost without Transmit. It takes care of all my FTP needs in such a beautiful, delightful way. It does way more than that, but I’m a simple man.
This is essential for my setup. When I dock the laptop, the audio inputs and outputs could wind up set to anything, and the same goes after a restart. Using SoundSource ensures my audio is where I expect it to be, and it has handy per-app overrides, which is useful when I want Zoom, Skype, Slack, and Teams to use the headphones attached to my Elgato XLR, but I want my system audio to go to the speakers attached to my laptop dock. It’s very handy.
I wanted to put the weather, according to my Home temperature and humidity sensor, in my menu bar, and One Thing was perfect for it. You can format whatever you want to and fire a Shortcut that updates the menu bar. Shortcuts is awful, and the Home integration with Shortcuts is fragile and entirely incapable of being debugged, but that’s not One Thing’s fault because it only does … one thing.
I was a longtime Alfred user on my previous MacBook Pro. I think it was Alfred 3? I never got around to installing it on my current MacBook Pro in 2018 because I wanted to see if I could get by with Spotlight. Spotlight’s pretty great.
Recently, mostly because of my unrelated frustrations with Shortcuts, I’ve been getting back into Alfred because it has its own workflow automation system. It uses a nodegraph, and we all know how much I love nodegraphs. I’m currently tinkering with some stuff to improve my day-to-day Mac experience and get things out of what increasingly seems like unmaintained automation software.
I resisted buying Bartender for years because I just didn’t really have a need. Then, as I’ve been working from home, I’ve accumulated a lot of things that I’ve had to install, for work-related connectivity that I otherwise wouldn’t use or want to look at. I finally broke down and banished all my stuff.
Honestly, it’s probably the best app they’ve made since introducing Creative Cloud. Lightroom Classic always felt clunky to me, but Lightroom is nimble, and is just as fully featured when I’m on the road away from my computer. I never have to worry about where a photo is saved. Big endorsement from me. It’s also way better than Photos, which is almost never in sync with anything else, and lacks decent tools.
This isn’t really software — it does come with some heinous Logitech software — but it changes the way I use my computer. The thumb-paddle-shaped part isn’t just comfortable to hold, it does the equivalent of a three-finger swipe on a touch pad to change Spaces and full screen apps (which are spaces). I use a lot of software that needs to take up one, or both, of my monitors and you really need a way to switch through them quickly. It makes a big difference, even over using Hot Corners for Mission Control, because I don’t have to hunt for anything.
Like I said at the start, there are huge areas of overlap with what other people do, but there’s also stuff that people don’t mention because they think it’s either too obvious, or it doesn’t occur to them. Whether anything Matt Birchler, Jason Snell, Merlin Mann, or I said would lead to anyone changing anything about how they use their Macs is sort of beside the point. It can confirm you’re doing exactly what you should be doing, and that people like me are nuts for using column view.
Don’t assume other people already know everything and keep silent, because then someone misses out on some tiny thing that might be useful.
In a rush to get everything done before the holidays, tvOS 17.2 was pulled out of the oven too early. I have strong opinions about tvOS, and obviously hold it to a high standard, but I don’t see how delivering something by this arbitrary deadline qualifies as progress when the thing being delivered is so incomplete and buggy.
Unify media and apps into one interface, with the ability to pin favorite apps.
Reduce the amount of Apple TV+ promotion in the interface, particularly for non-subscribers.
Properly personalized recommendations based on viewing habits.
Handle live TV through a unified programming guide, like Amazon does, instead of pretending the only live TV is live sports.
The new interface miraculously resolves none of these things.
A brief recap of that post: Instead of a horizontal row of pill-shaped buttons in the TV app, it’s a vertical sidebar that auto-hides. The experience still centers Apple TV+, and now MLS Season Pass. None of the core functions can be moved around or hidden. I hope you like those things because they live there forever, above anything else.
It sucked in the beta, and it still sucks now, because nothing about it has improved since its introduction in October. Except, for some reason, “Watch Now” has been renamed “Home” in the TV app. So now you have an Apple TV with a Home Screen that has a TV app that has a Home Screen. Deep, sharp inhale. Long, slow exhale.
“The redesigned Apple TV app makes it easier than ever for users to watch the shows, movies, and sports they love through an intuitive interface that brings content to the forefront,” said Eddy Cue, Apple’s senior vice president of Services. “With so much available to watch, our aim is to ensure users always have their favorites at their fingertips.”
No it isn’t. What’s at my fingertips are shows that I don’t subscribe to. Regardless of the quality or award-winning nature of those shows, I do not have an active subscription, and shoving them under my fingertips does me no favors, because what I want to do is not at the forefront as long as it’s taking up that space.
The new sidebar navigation also introduces Home, a unified guide for all the shows, movies, and sports viewers love. Within Home, the Channels & Apps section allows users to browse each of their subscribed channels or connected apps in depth. And collections — including New Shows & Movies, Top Charts, Trending, and For You — bring forward the best recommendations for viewers to enjoy across what’s new, popular, and tailored just for them.
This is absolutely not how this works in practice. I invite anyone on tvOS 17.2 to scroll down the sidebar to Amazon, Disney+, Max, or another major streamer. You’ll see a very sparse, depersonalized interface that shows you very little media at all, let alone media that’s relevant to you, specifically. It’s utterly barren in stark contrast to what’s displayed inside of those apps.
It utterly fails at being a replacement for even the worst, buggiest third-party app I have, and I’d rather use the treacherous Prime Video app than deal with what the TV app thinks is on Prime Video, because the TV app doesn’t really know.
On living room devices, the sidebar will also feature profiles, allowing households to quickly switch between users for better personalization in Up Next and content recommendations across the app.
This still doesn’t work in practice. Third parties really haven’t bought into Apple’s user profiles. People also don’t always watch movies and TV shows in their own profile (Apple or otherwise). Also, the only thing in the interface that shows personalized recommendations is waaaaaaaaaaaaaaaaaaay down at the bottom of the TV app’s Home Screen. It currently takes me 16 swipes down to get to that personalized “For You” row, and 23 swipes to get to “We Think You’ll Love These Action-Adventures”.
Right under those fingertips!
Do the people working on this product not know what personalization is? It’s an interface where I can’t move anything to even narrow down my interests, or surface the Apps and Channels that matter to me more than the others. Charts, promotions, and editorial content all bury the personalized recommendations, and there’s no consideration for what mood I might be in. Compare this to any major streaming app and you’ll see how weak and pallid this personalization is: it can’t match them, let alone exceed them, and Apple owns this whole platform, not just a singular app.
I don’t have a lot of love for the remnants of the iTunes Store inside of tvOS. The Movies and TV Shows apps never had good navigation, and felt entirely alien compared to other, modern media apps.
I’m not really won over by its replacement in the TV app, or the execution of the transition. Someone at Apple must have, wisely, figured that a lot of users probably had TV Shows and Movies somewhere in their Home Screen, and because the Home Screen is a left-aligned mess, removing two apps was sure to cause a lot of app reshuffling and confusion after upgrading (just as the sixth column did a few months ago). Those apps now just serve as splash screens with redirects to the TV app Store and Library views.
They didn’t do anything to spruce up the experience of navigating your library. It’s a sea of 16:9 tiles that can be filtered by genre, but the genres are still bad. Animation has a cute little Mike from Monsters Inc. style icon, but all my Pixar movies are in “Kids & Family” - whoopsie!
Also, a lot of my Library is incompletely categorized. “TV Shows” contains only the Battlestar Galactica episodes that I’ve purchased. Tapping on it reveals all the seasons as rows, with each row having all the episodes horizontally tiled across, instead of something sane or practical like a season navigation element and an episode list. Also, if you bought a few episodes here or there, and not the complete season, the interface doesn’t show you the missing episodes or how to get to them in the store. Each episode consists of a thumbnail that will play the episode, and a truncated text description under it that can be expanded into the exact same amount of text.
Also BSG is my only “TV Shows” show, all the other TV shows I’ve purchased are strewn about in the genres. “Comedy” has shows like 30 Rock and “Drama” has shows like Mad Men.
The Store fares a bit better, but only because it has rows based on ratings, popularity, sales, and genres, and they all have much more information when you drill down into them because it’s showing you the information you see when you ask Siri for a show (more on that later).
Unfortunately, there’s no cute iconography here, and the genres are these over-sized 16:9 icons that are just a color gradient overlaid with something from a stock photo library. Not really a step backwards for the iTunes Store at all, but not really moving anything forward.
While the TV app’s Home Screen (grumble) has two rows of lightly personalized media recommendations, there’s no personalization in the Store view at all. Nothing based on your purchase history, library, or streaming habits. Everything displayed here is the same as it is for everyone else in your region. An editorial, TV-Guide-magazine-like approach that isn’t modern and doesn’t meet user expectations from the other streaming apps they use.
A huge change that I didn’t write up (well, I started to last week, but then the OS needed to ship, so here we are!) is the change to the Siri button in tvOS 17.2, because that was shoved into the last two release candidate betas.
Like you pulled the underbaked cake out of the oven and sprinkled sugar over the top because you forgot to add it earlier.
Tapping the Siri button in compatible apps will bring up a search box element at the top of the screen, where you then need to hold down the Siri button to dictate a search. It will then whisk you away to the Search app, which will conduct the search and display the results there. OR, if you’re in the TV app, it will do the search inside of the TV app’s Search view, which shows the same results as the Search app but doesn’t have the same filtering. Tapping and holding inside of the Music app gets you a search inside of that app (except on my 4th gen Apple TV).
Holding down the Siri button will still, eventually, bring up that spinning orb, and you can do all the same Siri stuff you’re used to with the orb. It will display its search results in the same ephemeral overlay as before.
So why did I say “compatible” earlier? Well, right now you can only tap to dictate a search in the Home Screen, the TV app, or the Music app.
The wording in the “What’s New” splash screen — “Press [Siri button] to dictate a search from the home screen, or anywhere in supported apps like TV and Music” — hints at future third-party integration, but do you know what I don’t want to have to ever guess at? Which third party apps will and won’t work when I tap-and-hold. I also have no idea how Apple expects to entice third parties to adopt it since they have been unable to get any of them to adopt the default player to support timeline scrubbing, or unified menus.
Apple also only shipped support for TV and Music, not Apple’s other first-party apps like the App Store, or Podcasts.
If you’re in the Amazon app and you want to use the new tap-then-hold dictation search, you need to navigate out of the app you’re in to the Home Screen or the TV app (or really the Search app, but that’s even more taps!). You can still hold to ask Siri to search for something while you’re in an incompatible app, though, without needing to navigate anywhere.
If Amazon adopts the tap-and-hold to bring up their own Search view inside of the Prime Video app and only search Prime Video then that’s even more confusing.
It’s good that instead of an ephemeral search overlay you can drill down and refine your search results in the Search app, but the execution of this is obviously incomplete.
That’s not quite all there is to it though, because the two searches aren’t just different implementations, they also show different results. Tap-and-hold takes the words you speak and puts them in the Search app’s search box for you. Hold-down for the Siri orb searches some other, unknowable way. If you’re searching for a movie title you’ll mostly get expected results, because both are being very literal with the title. But the genre searches that work with Siri don’t work in the Search app.
For instance, “Show me Christmas movies” will yield wildly different results because the Search app tries to search for every word of that like you’re using a search engine in 1998. The first result is a movie called Show Me Your Glory because the first part of my search was “show me”. Deep exhale.
Siri hold-for-orb will show you a vertical overlay with very common Christmas movies. Utterly, and perfectly, unoriginal Christmas classics. Exactly what’s expected.
To recap:
Tap, then hold Siri button on the Home Screen, in the TV app, or in the Music app and get your dictated search term piped to the Search app which can be navigated in, and out of. It can’t understand expected natural language requests that Siri understands, or any command that you would give to Siri. It won’t fall back to Siri, or even understand that you tried to ask it to do a command. It’s just a very literal search box.
Hold the Siri button everywhere and get ephemeral search results that will not be in the Search app, and will disappear when you navigate away, just like it’s always done. This is the only way to do natural language searches or issue commands.
So now this one button does two things, and does them differently, and you need to decide what kind of thing you want to do.
This is like taking two half-baked things and putting them together to make one whole-baked thing. That’s not how that works.
Ideally, you would not have tap-and-hold at all. Just hold. You would ask Siri something that was recognized as a search, as it does now, and the search would be displayed in the Search app, using all the same natural language searching that Siri already does, but the functional filtering and navigation that the Search app has and the ephemeral overlays don’t.
My test Apple TV hardware for tvOS betas is an Apple TV 4th generation, renamed to the Apple TV HD, and introduced in 2015. Apple sold these until 2022. They’re not going anywhere, and they get tvOS 17.2 just like every other Apple TV. My living room Apple TV is a 2nd generation 4K introduced in 2021. It doesn’t go on the betas because that’s the one my boyfriend uses and I don’t want to hear about it if the beta breaks something.
The beta broke something. When the tap-and-hold Siri button shenanigans happened the other week, my Siri orb search overlays stopped working correctly. I filed a Feedback (FB13456609) last week when the second release candidate appeared, and it was still broken, but I heard from other people that it worked fine for them.
I recorded a video today, with the Apple TV running the official tvOS 17.2 release if you really want to see it. Now that it’s official, I installed it on the Apple TV 4K in the living room and that one does not have the bug. Near as I can tell the only thing different between the two is the model. Same user account, same network, same everything.
But it can’t be some widespread issue affecting the 4th generation Apple TV because that would be silly — beyond ridiculous. Maybe it’s some bit of cruft left from a beta install that’ll just disappear in a January point release, but it doesn’t fill me with confidence when I participate in the Apple TV beta process, file a Feedback, and the thing still ships anyway. Why waste my time writing up bugs? The Feedback site even stripped all the newlines out of what I wrote.
Do I bother writing up the Music search bug and generating a sysdiagnose again on my 4th gen Apple TV, or should I just cross my fingers?
I hope that 2024 gives Apple employees more time to refine what they shipped here in 2023, but I don’t feel like it’s helpful to ship tvOS 17.2 in this state. Breaking the Siri button into two functions definitely didn’t need to be published before everyone went to this year’s holiday party. I don’t understand the motivations at play unless it’s about looking good in a demo to a person higher up the ladder.
As I’ve repeatedly said, there’s plenty of work to do in tvOS, and the TV app. It’s not like I object to people working on it, like it’s some perfect thing, but I don’t want to have to explain to my boyfriend that he’s using the Siri button wrong.
I really do want a unified, pragmatic approach to home media. Searching, browsing, live TV guides, unified profiles — everything. Not just a selection of Apple’s interests shoved in front of what I want to do, and certainly not in a haphazard, impersonal way.
People often reply to my critiques of tvOS to say that at least it’s not Fire TV, or Roku. That Apple cares about the user experience. While Apple is nowhere near as bad as Amazon at monetizing those eyeballs, that doesn’t mean Apple is shipping a product that’s beyond reproach. Even if Apple TV+ shows and movies are critical darlings, that’s no justification for degrading the user experience, and it has nothing to do with how Siri and Search work.
Whether the next bake is 17.3, or 18.0, I hope the people at Apple get the resources and time they need, and they’re not just pulling it out of the oven because they’re out of time.
My pal — Academy of Motion Picture Arts and Sciences member Todd Vaziri — posted on his blog about something that really bothers him, and also bothers me.
Folks who follow me on Twitter (currently known as X) are probably aware of my years-old, depressing, frequently updated and repetitive thread pointing out studios and filmmakers downplaying or outright lying about the use of digital visual effects on their projects. “We did it all for real!” is the message given in interviews, production notes and featurettes. The truth is these movies frequently contain hundreds or even thousands of digital visual effects shots, and sometimes the sequences they’re directly referencing are made entirely out of digital effects.
Todd’s most egregious example is Gran Turismo and he goes into detail about it.
Another big one for me, though, was Top Gun: Maverick. Todd graciously invited me to attend the Academy bake-off (where a small group of VFX supervisors for the current year’s films pitch the visual effects branch of the Academy on their films to try and earn a spot on The Academy Awards’ voting ballot) and I got to see excellent work on a wide variety of films, one of which was Top Gun: Maverick.
There was a dazzling reel of all the work that went into making that movie, and an interview process with the VFX supervisors and leads after it was finished. It was fantastic, and it pained me that the forces responsible for marketing that movie chose to suppress this and spin a narrative about real planes.
There were absolutely real planes, and they did amazing work, but creating fully realistic VFX to integrate seamlessly with them is also work worthy of praise. Everyone should get the chance to be proud, instead of enduring public shame for their craft because someone thinks it sells more tickets.
My boyfriend is very fond of travel, and I definitely skew towards being less adventurous. Something I’ve been thinking about on our trips over this past year is how the tech that I take for granted in my everyday life has to be adapted for life on the road. Things are very different from when I took my first international trip to Paris in 2011. Why not write up a few observations about my most recent trip to Paris in 2023?
The one constant in trip planning for many years has been Google Sheets. My boyfriend will start a new Sheets document and put in travel dates, restaurant reservations, flight numbers, train ticket info, whatever. Sure, it would be better to put it all in a calendar ahead of time, but it’s easier to think about the timing when you’re not trying to map the time zone difference in your head. You just write it out, and it’s in a grid, so it makes a natural kind of sense.
An innovation for me this year is using Mercury Weather to keep an eye on the forecast as we get closer to our travel date, as well as while we’re traveling. Mercury isn’t my preferred domestic weather app, which is CARROT Weather, but it has such a compelling trip forecast feature that I can’t imagine traveling without it now. I first used it this summer on a trip that spanned London, several Greek isles, Athens, and Vienna. This trip I used it to see Paris, Strasbourg, and Paris again (return flight) all plotted out.
I keep Mercury’s 8-day forecast widget in the Smart Stack on my iOS Home Screen, along with Carrot. In the week leading up to travel, I’m conscious of every fluctuation and trend in the forecast while I’m just generally using my iPhone. During travel it keeps tabs on the next destination, as well as showing the forecast at home. Carrot just shows the weather for where you are right now, which is still useful, but it doesn’t know that I will be in Strasbourg two days later, or at home four days after that. Weather conditions could be similar, or quite different, and it’s good to keep that in my peripheral vision. Otherwise I’d have to open the app and query each of those things if I wanted to know.
Technology on flights is weird. All the planes we were on for our trip offered some fairly expensive Wi-Fi, but the Wi-Fi actually worked for streaming media over the Atlantic Ocean, so how can you really complain about that?
The reason I needed to stream was because I perpetually forget that in the age of streaming my iPhone really doesn’t have media downloaded on it like it used to when I’d plug an iPhone into a Mac and sync with iTunes. It would be nice if Apple could proactively download things, especially when it pulls down data about flight tickets from my email. Amazon does something with proactively downloading things on Fire tablets, for example, but it’s to promote Amazon’s interests more than it is to help you.
The Apple Watch could also use some understanding of trips too. It gets locked into the time zone at the start of the flight, but at a certain point you want to know the time at your destination. You can change your watch face to show a World Clock Complication for a location, but that’s all manual. United Airlines recently added a Live Activity to show flight progress, but not all airlines (Air Canada) have Live Activities, and the Watch doesn’t show Live Activities as Complications.
One thing I make use of on a flight is Do Not Disturb. There are various automated notifications that can still pop up, with or without Wi-Fi which can wake you up if you happen to finally fall asleep.
I really wish there was a Travel Focus Mode — as you may have guessed from some of my comments above — where it could better contextualize information I need or want when it knows I’m traveling great distances. I couldn’t even figure out how to make my own Focus Mode to do something like that. I can make a focus mode that I invoke to mute notifications, and such, but it’s manual, and doesn’t do anything with crossing time zones. It’s not about an app state, or geolocation trigger — I’m in an airplane, or about to land, etc.
I remember the very first time we landed in Paris, with my iPhone 3G, and Jason’s iPhone 3GS, and we desperately avoided using any cellular connectivity. We’d take screenshots of the Maps app (back when it was Google Maps) while we were on hotel Wi-Fi and then try to interpret those screenshots for directions when we were walking around Paris.
SIM card swaps always seemed daunting, but fortunately my cell provider added a fixed day rate for international roaming, and I’d much rather use that. Same phone number, same everything, but I just budget it in to my travel expenses. You do you though.
Speaking of directions: Apple Maps gives pretty good directions in Paris these days. Not great directions, but decent enough. The killer feature is Apple Watch integration for directions, especially now that iOS 17 added the little mini-map view on the wrist to help orient yourself at crowded or confusing intersections. I traveled with iOS 16 in London this summer and I kept winding up on the wrong side of the road because the old way really didn’t provide enough context.
Google doesn’t have anything to compete with that, but Google makes up for it with richer location data than Apple. I kept bouncing back and forth between Google Maps for planning where to go, and Apple Maps for directing me to that location.
Domestically, Apple Maps location data is typically acceptable enough that I don’t worry about opening Google Maps, but internationally, at least where I’ve been in Europe, Apple Maps has a fairly shallow data set. Last November, I filed a whole slew of Maps Feedbacks about location information in Paris being in Japanese, and locations having multiple listings with some being in Japanese and others in English. It was like there was some database merging error. This November, fortunately, the only destination I came across in Japanese was a Japanese tea shop from Kyoto with a location in Paris. Way better.
Apple doesn’t have as many restaurant reviews or photos as Google. Google also has its estimations of how busy a place is right now, and the daily trends for how busy it can be. Again, nothing like that exists for Apple.
Both could use some improvement with the Paris Metro though. There are multiple entrances and exits spread over a large area of the surface, and on several occasions both apps would say to go to a Metro entrance that was farther away, across several lanes of traffic, rather than the one literally next to us.
Walking directions for both apps would frequently be in agreement with one another, but occasionally Apple Maps would want us to cross back and forth across some diagonal roads rather than stick to a side for a more logical interval.
Directions for both apps are inadequate inside of train stations and airports. This is a domestic and international issue. Apple brags about the data they have for airports, frequently throwing up a notification that it has a detailed map of the airport you’re in, like LAX, but not CDG. However, Apple has no 3D understanding of the airport, or any kind of route guidance. Once you zoom in to an airport, or train station, Google Maps will show a little level selector in the interface to let you see the floor plans for slices of a terminal, but they also don’t provide walking directions inside of the airport structure.
Directions inside of an airport can be pretty important when an airport is unfamiliar (or terminals recently changed) and it can be helpful to know where an airline lounge is relative to your departure gate. Where is border control situated?
Train station information in Paris and Strasbourg was also inadequate, with real-life businesses that weren’t marked in either Apple Maps or Google Maps, but Google had the upper hand on Apple when it came to showing the train schedule while you were looking at the train station.
Apple has improved the Translate app for iOS, but it’s not as good as Google Translate, in my opinion. It also has a weirdly opinionated design that doesn’t click with me. Niléane at MacStories wrote up her assessment of it a while ago.
I’ll keep trying with Apple’s Translate app, but fortunately Google Translate’s not going anywhere.
They both work for rehearsing how I should order a croissant while I’m walking to the store in the morning.
What would a vacation be without photos? We take them for our own reference, to fortify our memories, or to share with others not on the trip. Also, sometimes we have to economize how much gear we’re dragging around.
The Peak Design 3L sling is just the right size to hold my Sony a6400, the Sony 18-135mm F3.5-5.6 OSS, and the Sigma 18-50mm F2.8. Zooms are easier when you don’t have time to fumble with changing lenses, especially fast zooms like the Sigma. Of course, there’s the Lightning to SD adapter that I’ve talked about before.
Sometimes that’s even too much, or doesn’t quite fit the bill, and the iPhone 13 Pro enters the fray.
I like Ben McCarthy’s app, Obscura. They put a lot of effort into it, and it shows. I use it as my “I’m trying to take a nice photo” app, and I leave it in RAW mode, whereas the Camera app is never used in RAW mode. Separating them like that helps me mentally keep tabs on what I’m doing. It’s most helpful in a situation where I don’t have my a6400 on me, and I don’t want the Camera app’s intense noise reduction to smudge everything to hell.
Spectre is an old app, but it still works. I still have the same complaint I’ve always had, which is that I want to be able to pick a point for it to 2D-stabilize to, because I just cannot hold the damn thing still enough, and I’ll be damned if I’m setting up a tripod. It does help at sunset, if you’re trying to get info in your shadows, and the Camera app just absolutely does not want to cooperate.
Lightroom’s mobile app got a significant refresh in October that really cleaned up the interface. The focus effect stuff is garbage, and I have absolutely no idea why anyone would use it, but Lightroom remains the best way to process and handle RAW files on an iPhone. In some cases, it’s a better way to handle iPhone photos too.
I copy over my RAWs from my a6400 to an album, and pick through them for the ones I like, and then export them to the Camera Roll. It all works just fine, even on international roaming when the hotel Wi-Fi craps out.
A lackluster feature of the fall revision is HDR support. It has it, but it can’t render it out to HEIC, just AVIF, and JPEG XL. They all have quirks so use HDR editing with caution if you’re used to a more predictable workflow.
I’m grateful I have the opportunity to travel, even though it can be stressful, but the technology I take with me is definitely improving over time to make travel easier. It’s also possible to see a future where my iPhone and Watch might understand context, some kind of virtual assistant, if you will. It’s not about explaining every location I’m presently in, as if I’m not going to move at all, ever again. Figure out where I’m going, and when I’m going home.
If there’s one thing about Apple Music I’ll never understand it’s Apple Music Replay. It is once again time for the annual user-experience tire-fire. In the Apple Music app, you scroll all the way down to the bottom of the Listen Now screen, or you find it in your top row somewhere. I guess Apple might still be sending push notifications about it, but I turned that trash off a while ago.
The tile implores you to “Replay and share your year in music.” In tiny text, seemingly indicating shame or remorse, it says “Go to site”.
You see, the Apple Music app - and iTunes before it - are largely glorified markup viewers, but for whatever reason, the Music app still can’t display Replay itself. Instead, the user is shunted off to the web version of the Music app.
Not a big deal, right? Except you have to log in with your Apple ID in your web browser to see the web version of the app you were just in so you can look at text and images with CSS animation. Does the Music app not pass Acid3? For all the crap Apple, and its fans, level at Electron based apps we’re left with this native app’s sweet solution.
What are we doing here, again, where I have to sign in to a second location to participate in what is essentially web marketing?
It is also incredibly fragile web marketing, because if you click, or tap, on that little hamburger menu in the Replay page it will show you “Select a year to see your listening[sic]”. The “‘22” doesn’t work. The text “This is your Replay.” strobes on an endless loop and nothing happens because no one at Apple thought that the page from last November should still work.
Once the user is logged in, they can play a highlight reel based on the Snapchat and Instagram Stories vertical “video” format — which looks great on my MacBook, BTW. The thing is just a carousel of animated stuff.
It seems like something one could upload to Stories, or another vertical video product, to share, but there are no sharing controls in a desktop web browser at all, so you have to go to the site in Safari on iOS to get a share icon to save a static PNG to send somewhere else. There’s something poetic about failing to do something social well and using the format pronounced “ping”.
It’s like when your parents would buy you those electronic toys that weren’t real video game consoles or computers — Tiger Electronics shit — and you were just supposed to pretend it was a GameGear.
You can’t even share this with the people in your Apple Music social graph. I guess all anyone really needs from their friends is the dreaded Friends Mix, eh, Apple? I mean why would the people I’m following, and who follow me, on Apple Music want to know anything about my Apple Music if I chose to share it? Why not just keep shoving random shit I’ve listened to at them?
Back to the content of this not-social social media reel, which is also underwhelming. This is like a form letter from an actuary. The number of minutes of music I listened to? Wow, insightful. The genre I played the most being “Pop”? No way! I bet no one else has that!
Also whoever was setting this year’s mail merge left the rank number on the “Top” slides. The slides only have one thing on them, and already say “Top” so the enormous “1” in the lower right hand corner adds nothing.
This whole thing feels like someone was very excited to animate things, move album artwork around, and transform data, but no one really gave much thought to what this whole thing is supposed to mean to someone. How it makes someone feel.
There is actual information once you get out of the Highlights Reel and scroll down. Unfortunately, it’s in sideways carousels.
That’s how we all like to read lists, right?
On the Mac, you can tap the “>” and wait for an animation to slide the four items across the screen, then tap “>” again for the next four items. Hopefully you don’t want to go back in the list, because then you need to move over to the other side of the carousel and tap “<”. Don’t worry though, you can scroll the list sideways without having to tap, but the page doesn’t load more than four-to-six pieces of album art or auto-playing artist portraits, so you need to stop scrolling and wait for the list to finish loading.
On iOS, it’s a slide carousel, with five items listed vertically and you swipe horizontally. You have the share icon, which you do not get on the Mac, but it only saves a PNG of the five currently displayed items, and not the whole list of 15.
These are the same accursed design metaphors as the Listen Now page in the Apple Music app, here in the web browser, where a list could be anything, even something readable, but it’s not.
I should mention that even though this is all designed to look like Apple Music’s Listen Now interface, you can’t listen now to anything that’s on this page. This is not interactive. (Rubs temples.)
Spotify Wrapped absolutely trounces Apple Music every year when it’s released. In a few days social media will be flooded with happy Spotify customers sharing their Spotify Wrapped with everyone else. It’s a great advertisement for Spotify, and the attention is well-deserved, because Spotify puts far more effort into crafting Wrapped than Apple puts into Replay.
Who knows what new ways Spotify will come up with to absolutely embarrass Apple Music subscribers this year? Even if it was just what they did two, or three years ago, it would be more than Apple’s done this year, or is likely to do in the future.