Unauthoritative Pronouncements


Logitech’s Mouse Software Now Includes ChatGPT Support, Adds Janky ‘ai_overlay_tmp’ Directory to Users’ Home Folders ►

Stephen Hackett noted that Logitech has added some extra bullshit to their bullshit Logi Options+ app in a post you absolutely should read.

I cannot tell you how little I want THE SOFTWARE FOR MY MOUSE to include features tied to ChatGPT … let alone a mouse with a built-in button to start a prompt.

You need this garbage app to take full advantage of your Logitech hardware. It’s a shame because the MX Master 3S is an excellent mouse. My favorite mouse ever.

What makes it my favorite mouse ever is the thumb button and its accompanying gestures. Click that paddle-ish button once and you get Mission Control. Hold it down and swipe left, and it swipes to the Space “to the left”; the same goes for the right.

screenshot of the Logi Options+ app showing the gesture controls.

Most people don’t think they use Spaces, but every full-screen app is a Space. It’s true. I deal with a lot of full screen apps. My employer used remote desktop software, and the best way to use that is full screen.

It’s pretty indispensable to be able to hold and drag to pop from app to desktop to app.

I might have gone for a couple months without even noticing the AI cruft, because I don’t launch Logi Options+, but I would eventually have noticed the folder, or the running process, like Stephen did, and wasted my time trying to figure it out.

In theory it’s not hurting anything because it’s not doing anything, but in principle my computer is not at Logitech’s disposal. Much like my recent complaints about YouTube, or all the other companies, my devices are mine. It stinks more of Adobe than YouTube though. They’re not angling to sell ads, they’re trying to appear trendy and relevant.

I tried uninstalling Logi Options+, and then installing SteerMouse, like Stephen did, but SteerMouse doesn’t have the gesture support I am accustomed to. I heard some people use its “Chords” to switch Spaces, but I didn’t want to relearn how I used the mouse.

I tried to use Karabiner Elements next. Someone with a more sophisticated background in computer programming might be able to figure that out, but I couldn’t seem to do anything more than what SteerMouse did: set the thumb button to trigger one thing, with no gestures. If anyone happens to figure out how to reproduce the gestures in Karabiner, get in touch.
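For the curious, this is roughly as far as I got. Below is a sketch, not a recommendation: a little Python script that writes out an importable Karabiner-Elements rule mapping a spare mouse button to Mission Control. Which button number the MX Master’s thumb paddle actually reports is an assumption on my part, and there’s still nothing resembling the hold-and-swipe gestures.

```python
# Sketch only: writes an importable Karabiner-Elements rule that maps a
# mouse button to Mission Control. "button4" is an assumption -- the MX
# Master's thumb paddle may report a different button number (or nothing
# usable at all) on your setup.
import json
from pathlib import Path

rule_set = {
    "title": "MX Master thumb button (sketch)",
    "rules": [
        {
            "description": "Thumb button opens Mission Control",
            "manipulators": [
                {
                    "type": "basic",
                    "from": {"pointing_button": "button4"},
                    # Launch Mission Control by its bundle id; no gestures here.
                    "to": [{"shell_command": "open -b com.apple.exposelauncher"}],
                }
            ],
        }
    ],
}

dest = Path.home() / ".config/karabiner/assets/complex_modifications/mx_thumb.json"
dest.parent.mkdir(parents=True, exist_ok=True)
dest.write_text(json.dumps(rule_set, indent=2))
print(f"Wrote {dest}; enable it under Complex Modifications in Karabiner-Elements.")
```

That gets you one click doing one thing, which is exactly where SteerMouse left me.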

Fortunately, Stephen updated his post with some reader feedback that it’s possible to edit a JSON file in Logi Options+ so that it won’t run the extra process, or create the tmp directory.

I reinstalled Logi Options+, set up my mouse again (I have always refused to create a Logi account to sync settings; like they won’t abuse that), edited the relevant value in the JSON file from true to false, and turned off automatic updates.
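If you want to script the same edit, here’s a minimal sketch of the idea. The settings path and key name below are placeholders I made up for illustration, so check Stephen’s post for the actual file and flag before touching anything.

```python
# Sketch only: flip a boolean in Logi Options+'s settings JSON from true
# to false. The path and key below are placeholders, not verified values.
import json
from pathlib import Path

settings_path = Path.home() / "Library/Application Support/LogiOptionsPlus/settings.json"  # assumption
AI_FLAG = "aiPromptBuilderEnabled"  # hypothetical key name

data = json.loads(settings_path.read_text())
if data.get(AI_FLAG) is True:
    data[AI_FLAG] = False  # the "true to false" edit
    settings_path.write_text(json.dumps(data, indent=2))
    print("Flag disabled; restart Logi Options+ for it to take effect.")
else:
    print("Flag not found, or already false; nothing changed.")
```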

I had also considered downloading the offline version of the Logi Options+ app that Stephen linked to, but at least the way things are now, my mods are easy to undo if I do need to update the app.

Like I said in my previous posts about why this crap happens, the people at Logitech talked themselves into believing this was actually a good thing they were doing. Why wouldn’t people want this (poorly implemented) additional feature?

To anyone suggesting that I throw the best mouse I’ve ever used in the trash over this: think again. To people who think I should retrain myself to use a $129 Magic Trackpad set up to the left of my keyboard to switch Spaces, I ask, “In this economy?”

This is a case where I want my consumer electronics to be an appliance, not a platform. I’ve gotten it back in line, and that’s that.

Sometimes a mouse is just a mouse.

2024-04-25 11:00:00

Category: text


Does Everything Need to Be an Ad? ►

Increasingly, every pixel in front of our eyes is fought over by a pool of large technology companies that are trying to squeeze fractions of cents out of ads and promotions.

There’s a lack of care and thoughtfulness about all of these moves. Instead, there’s just an assumption that as long as they can pry someone’s eyes open, “Clockwork Orange”-style, then they’ve helped activate those reluctant viewers with brands.

Read the rest on Six Colors.

2024-04-19 15:00:00

Category: text


Apple TV 4K 18 Months Later: I’m FED UP with TVs! ►

I came across this video from Kyle Erickson on YouTube. The video follows the general YouTube tech video template, and Kyle has some Canadian flair, so you’ll see him use apps for services like Crave.

The premise of his video is that he’s fed up with the embedded software in his TVs, specifically Roku and Tizen (Samsung) — though he does briefly mention Fire TV. His solution was to add an Apple TV to those HDMI inputs in his home. You can’t argue with that logic in this day and age.

However, he skips over the TV app in a way that serves the premise of the video, but not potential Apple TV buyers.

He uses the home screen (as I do, and I recommend for everyone) but he doesn’t point out in the video that he’s changed the default behavior of his remote to use the home screen over the TV app.

It would be better, maybe, to detail the steps to go through to get Watch Now to appear on the home screen, for example. When he bemoans the apps in the interfaces of the embedded software systems, he doesn’t mention that the TV app is a billboard for Apple TV+ first and foremost.

He also highlights that Apple supports older hardware with new versions of the OS, and that a new version of the OS will be coming at this year’s WWDC. That needs a qualifier: Apple might finally pull the trigger on making the TV app the new home screen, and with the TV app’s heavy emphasis on Apple TV+, we might all be in for a rude awakening.

Having said that, I completely agree with Kyle that people should buy an Apple TV these days if they can afford it. Apple TVs are better in a lot of ways thanks mostly to competitors worsening their products, not because of improvements Apple is making. Apple should be doing a better job here, even when, as Kyle notes, the competition is so awful.

2024-04-16 13:00:00

Category: text


YouTube’s Screen Stealer

Something's distinctly different about this "aerial" still frame.

Yesterday, I had the YouTube app open on my Apple TV in my office. I went to do something else, and when I looked back it wasn’t the Apple TV aerial screensaver, but a YouTube app “screen saver” with a slideshow of heavily compressed still images.

The Apple TV in my living room had an older version of the YouTube app (presumably from April 2nd, if the dates in the version names are to be believed). That version didn’t try to override my screensaver like the one in my office.

It seems that if the YouTube app is open, it will start a screensaver slideshow of generic still images taken from videos when there isn’t a video that I’ve paused.

Look at the compression shred this still.

If there is a paused video it will be a slideshow of the YouTube thumbnail art endlessly zooming in, fading to black, and starting over.

Some of the worst sins of mankind exist in YouTube thumbnails, and they’re not designed to be screensavers.

You know what is designed to be a screensaver? The Apple TV’s aerial screensaver. Far and away the most lauded feature of the Apple TV platform. Beloved by all (except people that get creeped out by jellyfish) and yet replaced by either chunky-compressed stills from drone footage, or looping thumbnails.

To top it all off, it has static white text (famously the best thing to use in a screensaver) for video details, a static YouTube logo, and a graphic for the directional pad indicating that pressing the top will start playing the video being used in the screensaver. If you have an older Apple TV in your house (my office Apple TV is the 4th gen one) with the awful touchpad, you can even trigger it when you try to pick up the remote from the furniture.

UPDATE: Thomas did some further testing of his own, and apparently the fake screensaver will show media controls on devices you have connected to your TV (like an iPhone used as a remote). According to him, if you let the fake screensaver keep running it will eventually revert to the Apple TV screensaver, and then the Apple TV will sleep.

YouTube’s not the first company to “innovate” in the screensaver space, and it’s not exclusive to the Apple TV. A few years ago my boyfriend had a set-top box that would initially start a slideshow of ugly nature photos, and then after a while they started dropping ads into the slideshow.

I fully expect YouTube’s aim here is to capitalize on all this “free” real estate and start sliding in ads, promoting specific videos from partners, or showcasing movies available to rent or buy. I know that’s cynical, but so is YouTube as a business.

Setting that aside, I pay for YouTube Premium because the ads are so awful I can’t stand them, but because there’s some pretense of this being a screensaver I still get these slideshows that are the future home of ads for other people.

On April 4th, Janko Roettgers published a piece about a recent Roku patent filing about injecting ads over the source input feed when it’s paused. The patent also says the display device will discern content and context to place a relevant ad over the paused video stream.

As Janko points out, Roku already monetizes screensavers on its platform. Feel free to peruse Roku’s site where they brag about chunks of the screen they’re willing to sell.

I don’t think advertising, in the abstract, is evil. I do, however, think it’s insidious to inject advertising into every pore —especially when those pores don’t belong to you, like when you’re the YouTube app, not even a platform.

Screen Saved

If you want to bypass the YouTube app change, I heard from Rob Bhalla on Mastodon that if you change the screensaver to start at 2 minutes, instead of the Apple TV’s default 5 minutes, then the Apple TV’s screensaver kicks in before the YouTube one and you never see theirs. Because YouTube doesn’t have any screensaver controls, and has no idea what your actual screensaver settings on the Apple TV are, they hard-coded a start time just prior to the default 5 minutes.

That’s how you know it’s really there to help improve user experience, and not just a craven money grab by absolute hacks.

Sure, they might change this to start even earlier. In which case, we’re at the mercy of Apple to protect customer experience. I wouldn’t hold my breath on that one. App Store rejections are for indie devs, and people trying to skirt giving Apple money, not YouTube overriding the screensaver.

2024-04-12 02:30:00

Category: text


From LA to Tokyo

For Jason’s birthday we traveled to Japan for 10 days. It was my first time anywhere in Asia, and it had been a long time since Jason was last in Japan, so we had some experiences that were continuations of what I wrote up last Fall from our trips to Europe, and also things that were quite different.

Planning Ahead

Google Sheets

Just like before, Jason dumped all the trains, flights, hotels, and dinner reservations into Google Sheets. I still have a hard time quickly accessing that data when we’re “in the field” so I’ve made some adjustments.

TableCheck and Omakase

In America, almost everything is OpenTable, Resy, or Tock for reservations. In Japan, the dominant players seem to be TableCheck and Omakase. The really fancy places tended to use Omakase for reservations. Jason handled booking all the reservations because he was in charge of planning out the schedule for what he wanted to do. For weeks he’d message me, or mention in passing, his frustration at trying to score these reservations. According to his experience (and briefly verified when I went to try to book a few things as a test) the sites are not great. Neither worked for me in desktop Safari on my Mac: TableCheck only rendered a blank page, and Omakase showed all of the menu dropdowns that are supposed to be hidden by JavaScript, but none of the dropdowns worked. They all worked fine in mobile Safari, and in desktop Chrome. I’m not using an ad blocker or anything.

However, just loading doesn’t mean the site works. For one restaurant using Omakase, the one we went to for Jason’s birthday dinner, dates available for an “instant reservation” show in pink, but that’s not actually true; you’ll get “Please choose a different date,” so you just need to keep clicking until you find one of the supposedly available dates that actually works. Then you need to select your party size, but SURPRISE, you can get a “Request for this number are not allowed” error because they only have seating for one, not two.

It’s all this weird game of whack-a-mole until you get a working reservation. It is absolutely not how reservation systems in the US work. It would be better if you could handle these reservations through another intermediary, like Apple Maps or Google Maps, which both have reservation capabilities.

Oh, but Apple does show you a reservation button in Apple Maps for some of these restaurants, like this Omakase one. However, it’s OpenTable, not Omakase. The OpenTable sheet errors, saying there’s no table found for [date here] and “Try another date or party size.” Even if I select exactly the same date and party size that works on the Omakase website, the OpenTable sheet always shows the same error. It’s the same error for any day I tried. It would seem that OpenTable doesn’t actually have the ability to book anything, but instead of not listing the restaurant, they’ve created some sort of psyop against would-be diners.

Google Maps doesn’t pretend that you can reserve a table for that restaurant in the app. It has “Find a table” lower down in the location listing. That offers up opentable.jp (all in Japanese, and just as non-functional as the embedded OpenTable sheet in Apple Maps), omakase.com (exactly the same function as the mobile site), and tablelog.com (all in Japanese, and says to call the restaurant to reserve). So there’s a needle in the haystack here, and that’s the Omakase site with its whack-a-mole reservation system, but it’s silly. For a company that gets all its data from scraping the internet, it should be able to see what the restaurant web site actually uses for reservations.

In another example, a less fussy restaurant (Bills Ginza) has a “Reserve” button in Apple Maps that doesn’t work. It does nothing if you press it. In Google Maps, the reserve button (up top, instead of just the “Find a table” button lower down like in the previous example, which seems to indicate there’s an in-app interface) brings up a sheet that plausibly shows party sizes and times, in partnership with Ikyu. The restaurant’s website wants you to use TableCheck, which neither Apple Maps nor Google Maps directs you to.

Variations on this go for every restaurant Jason booked on our trip to Japan. It is entirely possible to book reservations, because Jason managed to do that with grit and perseverance, but there’s room for improvement here from everyone.

Drafts

Jason was interested in watching a lot of YouTube videos about people currently visiting Japan, since he hadn’t been in many years. We’d get a sense of what was popular, and also the vibes. While we were watching the videos over the last few months, I entered names of coffee shops, camera stores, or sites to visit into Drafts, tagged with “Tokyo” or “Kyoto,” so that I could add to the running document and reorder things without cluttering the more rigid schedule. You hear that, Greg? I actually used a feature and didn’t just dump text in!

These were all things that would be nice to do, but not requirements. In fact, we weren’t able to go to most of them, but at least I have them saved if there’s a future trip to Japan.

Google Maps Lists

Before, and during, our trip I took all the locations we would be traveling to from the Google Sheets document, and my Drafts lists of possible places to visit, and I created Google Maps lists for each city. It’s very easy. You search for the place, hit the bookmark icon to save it, and pick the list (or create it). You can enter an emoji for each list so that when you’re looking at Google Maps you’ll see that little emoji dotted everywhere. Also you can browse the list, or just start typing the name, and the saved item will come up first.

This is not a revolutionary new technology, but it was something I deployed to great effect on this trip. Jason didn’t hate it!

I’m an Apple Maps apologist in our household. When Jason drives, he uses Google Maps with CarPlay, and when I drive I use Apple Maps. My default is to use Apple Maps. However, when traveling, that’s not the case.

There is a similar “guides” feature in Apple Maps, but it’s not as good. Google Maps lists have a notes field, which is helpful for remembering why the place is saved. Apple Maps guides do not. Lists put that emoji on the map, while guides use the default business-type icon (the purple hotel icon, for instance) and place a tiny little white circle badge with a star on it over the upper right corner of that icon. It’s visually cluttered, and makes a map that is not glanceable from a high level.

When searching for a business, like your hotel which is part of a very large hotel chain, Google will show the one saved in your list as the first search result when you start typing. Apple Maps will show you the search results in the same order you’d see them otherwise, but it will write “in your guide” under the hotel that could be further down the list. Thanks?

Most importantly, guides can only be shared one way, like a published document, from me to Jason. Lists can be shared and jointly edited, so Jason is able to add what he wants to the list without me needing to act as a go-between. If he came across something he wanted to stop at, he could just add it to the list and there it was for the two of us.

No, Really, Apple Maps Is Bad For Planning

Apple Maps is also bad if you move the map to an area and want to search within that area. It’ll snap back to where you are and search that area first. Don’t you want to find coffee shops near you? No! I moved the whole view over there! Search there!

Then you have to move the view back to where you wanted to look, and hopefully it will trigger the “search this area” button to appear. Sometimes it doesn’t! This is not an international problem, it’s a global problem, but it’s especially frustrating when planning ahead of the area you’re currently occupying.

Also, if you’re planning weeks before your trip, you’ll be surprised to find an error message when you connect your iPhone to your car that “Directions are not available between these locations.” Not to contradict Charli XCX, but there’s no fast lane from LA to Tokyo, so why would this ever be something I would want? It corrupts both my planning experience, and my current experience, because it’s goofy as hell. I don’t want to keep using the app to plan if it’s going to do that.

I’m Not Kidding About Apple Maps

After we got back, I updated to the latest version of iOS, and I was greeted with this error message:

Uh…

What?

No, seriously, what?

The whole point of having offline maps is so that I am not at the mercy of a network connection. That’s its raison d’être! It’s downloaded.

If I had upgraded from 14.4.0 to 14.4.1 while I was traveling I would need to catch this error with enough time to re-download my offline maps, especially the offline maps for the city I was in.

Google Maps doesn’t do this! Why? Because it’s bonkers as fuck, that’s why! Why am I even getting an error message about this? Why aren’t people just fixing this?

Mercury Weather

This is still working exactly the same as my other trips. Before the trip I could keep an eye on what the weather would be like for what clothes I needed to pack, and during the trip I could see if there were any drastic changes coming up as we moved through the country. Which days it would rain kept shifting during the trip.

I’d still like something more granular for days with multiple cities, and I would love a system-level trip mode that could understand I’m not a persistent, stationary object when it comes to upcoming calendar events and weather forecasts.

Up In the Air

For this trip I planned a little better and downloaded some things ahead of time. I had assigned myself some homework —watching Lost in Translation— so I had that downloaded to the TV app. I had downloaded some music too, unlike last time.

Flighty

This was a new addition to my life after I wrote up my trips to Europe, but this wasn’t the first trip I used it on. United’s Live Activities offer a very similar experience to Flighty’s, but not all air carriers have one, and Flighty’s flight timing info is more extensive if you’re curious about how often a flight is on time, or delayed. Both of them nonsensically show seconds in their countdowns, which not only shifts all the text based on the character width of each number changing every second, but is also absolutely useless information.

Watch

It still bums me out that the Watch just thinks I’m in my home time zone through the whole trip until that moment we can activate cell service and then it snaps across the globe.

The Face ID mask+watch combo doesn’t work when your iPhone and Watch are in Sleep focus mode. Which is still annoying. You can drop your mask to get the iPhone to unlock, or you can exit Sleep mode and get pinged by notifications when you do fall asleep. I understand there are security concerns about someone using Face ID while I am sleeping, but I could be asleep when it wasn’t in focus mode too, obviously. The mode doesn’t dictate if I’m conscious.

All Roads Lead to Roam

Roaming is still the way to go for me. Japan has more ubiquitous free public Wi-Fi than I’ve seen anywhere else I’ve traveled, and SIMs are supposedly easy, but I’m there for 10 days. I just want everything to be with my phone number, and for things to work as expected when I need them. Sometimes iCloud Private Relay gets into a fight with Wi-Fi. It’s worth it to just make it easy with a fixed, daily rate.

Apple Maps and Google Maps

Like I said above in the section on lists vs. guides, Google Maps is more glanceable if you pre-populate a map with your unique emoji. The Apple business icons are generally too small, and low contrast for my liking. For some reason the ones that always visually stick out in dark mode are the bright pink icons for salons. Salon glanceability really has a time and a place, and it’s generally not on a ten day vacation.

Reviews

Apple Maps is not very good for English-speaking tourists in Japan. Apple Maps’ Japanese data comes from partnerships with local Japanese companies. That’s great for locals, but that means things like restaurant reviews are in Japanese. Again, this is helpful if you speak Japanese, and very relevant to the residents of Japan, but far less accessible to me, an English-speaking traveler.

You can tap on the review source, which will open the partner company’s web site and show the reviews. Then you can tap the “ᴀA” in the upper right corner and pick “Translate Page” to get a translated version of the web page. Apple doesn’t offer to translate anything within Maps. The text in the Maps app is not selectable, so you can’t use the context menu to translate. You can’t tell it to always translate non-English web pages.

The Apple Maps info for locations in Japan. The review source web page view in Maps. The translated version of the same page.

You need to do all those steps each time you want to consult reviews.

If you’re trying to compare several restaurants around you, quickly, you’re better off not using Apple Maps.

This is different from Apple Maps in Europe, where the reviews shown to travelers are all from TripAdvisor. I hate TripAdvisor, and don’t find its data to be reliable because it’s something that can be easily gamed to provide mediocre places with excessively high scores. It is, however, 100% more readable.

The reviews that Google shows you in Google Maps are all from Google Maps user submissions, not partner sites, and it knows I’m not Japanese, so it shows me reviews from other travelers, much like TripAdvisor reviews, but for some reason the ratings seem to align more closely with my perceptions than the inflated TripAdvisor scores of Europe. (Google Maps reviews are also better in Europe than TripAdvisor.)

Transit

It goes without saying that Japan has a very developed, and robust transit system with multiple rail lines and rail companies. Bustling, massive stations connect these lines. Apple Maps and Google Maps still fall flat on their faces when it comes to mixing walking turn-by-turn directions with transit directions.

The apps will provide the written direction “Walk to Shibuya station”. That’s it. Simply walk to Shibuya station. Here’s a dotted line if you want to look at it. It says “2” for the entrance. Good luck inside! It’s not a sprawling world unto itself!

Both apps really, truly need to provide the same level of care for transit directions that they provide for walking directions.

We had some confusion on one trip in Kyoto because we couldn’t find a line we were supposed to take, but it was a JR line, not a city line, and that wasn’t marked in the apps. We had to look at a laminated piece of paper taped to a column to get the clue. That meant exiting ticket gates, and entering other ticket gates inside the same station because they were different lines. Neither Maps app helped.

Google and Apple both provide diagrams for the train cars, and highlight specific cars. Google says “Boarding position for fastest transfer” or “fastest exit”. Apple just says “Board” without qualifiers. If you’re someone who’s not used to rail, you see that and wonder for a second: what happens if you get on the wrong car?

I don’t like the ambiguity of why you’re doing a step. Explain it. Does the train car decouple? Is the train too long for the destination platform? Or is it just an ideal I can fail to meet like so many others?

I do appreciate that both apps have accurate calculations for fares. Japan uses IC cards, pre-paid transit cards where money can only go in. Apple Maps has an edge on Google Maps because it has access to your IC card balance in your Apple Wallet and will warn you if your trip will exceed your balance. Google doesn’t have that integration on iOS, naturally.

Crowds

The crowds in some of these places in Japan are no joke. Google Maps has had the ability to show a little bar graph for every location for how busy a place is throughout the day, in addition to how busy it currently is. It’s had this feature since 2016.

Not much in Japan is open before 10 or 11 AM, but any culturally important site, like a temple or a shrine will be swamped by tour groups starting around 9 AM. It’s not always the case, and things fluctuate based on rain.

Google also does that for train and subway rides, where it will inform you the train will be “Crowded” so you can mentally prepare yourself to be up close and personal with strangers before you get to the train.

Apple Maps has never offered any guidance for how busy a location is. The elastic shield of security and privacy offers Apple no cover here. Apple brags about how it uses anonymized data for real time car traffic. They can anonymize data for busy businesses, and packed-solid subways. No excuses.

Uber

We’re a divided household. I say Lyft, he says Uber, let’s call the whole thing off.

In Japan, they don’t have Lyft, but they do have Uber. It’s really a taxi-hailing app. This worked very well for us. We never really had problems hailing a cab anywhere in Tokyo, but we would have a kind of back and forth about our requested destination that left us unsure if we correctly relayed the info. Uber alleviated that because the destination was provided to the driver. Also, we’d know, roughly, the fare range for our taxi trip.

There are other Japanese taxi apps, but we weren’t moving to Japan so we stuck with the one that already had all the account info, Uber.

Payment was a non-issue. Sure, it was nice that the Uber app took care of the financial transaction so you didn’t have that awkward pause at the end of every cab ride, but it wasn’t like America, or Europe, where you fretted about the cab driver telling you, at the end of the trip, that the credit card reader you’d been looking at for the whole ride “wasn’t working.”

When we hailed a cab the old-fashioned way, they all took credit cards. Even the old men driving very old Toyota Crowns with lace seat covers had a credit card reader. They’d take more than credit cards, but that varied by cab. There seems to be a profusion of payment apps in Asia, and you’d see a cluster of them on a sticker on the door of every cab. Some indicated that they took IC cards, but we stuck to credit cards so we wouldn’t blow through our Suica card balances and need to reload.

The real downsides to Uber were that you had to meet the cab at a specified pickup point. It couldn’t be an arbitrary pickup. This meant that sometimes we’d have to walk up, or down, the block, or cross the street, and the pickup location had very little to do with the direction the driver was traveling, as no drivers had agreed to the trip yet, so we’d have a 50/50 shot that we were on the wrong side of the street for the pickup.

Estimated pickup times were also… ambitious. Pickups were generally 10-25 minutes and the app would quote you 4-8 minutes. There were also some journeys the cab drivers just wouldn’t want to do, like a pickup from a particularly congested part of town. When we left Kiyomizu-dera, the narrow roads were so congested with pedestrians that no one accepted Jason’s ride request. We hiked back up to the taxi line and hailed a taxi there.

We did end up using Ubers and Taxis more often in Kyoto than in Tokyo (or our brief, one evening stint in Osaka). Kyoto’s public transit is definitely not as robust.

I wouldn’t say that I would expect to do a trip to Japan exclusively using Uber, but don’t be intimidated or fearful of using it, or of having it as a backup.

Apple Wallet and Apple Pay

Speaking of IC cards, Apple Wallet lets you hit a “+” button in the Wallet and pick a transit card. It doesn’t explain anything about the cards all being functionally identical, so you need to do that research yourself, but once you pick a card it’s good to go and you can top it off with Apple Pay transactions. The money on the card can be used for transit, or anywhere you see the IC logo, Suica, Pasmo, etc.

It’s handy because it uses the express mode, where you don’t need to unlock or authorize the transaction (you can change the mode if you want to). This makes it very easy to swipe when entering and exiting the ticket gates. It also shows that you’ve started a journey, or completed one, along with your new balance. Because you’re billed based on your entrance and exit points, it’s important to know before your trip how much it will cost. It’s not a flat fare. I managed to get it down to ¥63 (42¢), so I consider that a minor victory.

Also, just like other transit cards it can only exist on either your iPhone (where you probably set it up) or your Apple Watch, not both. It can be transferred between the two.

I’ve never understood this limitation with the transit cards. We do way more complicated things with credit and debit cards, but a prepaid transit card is somehow harder to use and needs to be gingerly passed back and forth between devices.

Tokyo Disney Resort App

I’m not a Disney blogger, but we did go to DisneySea for one day with some of Jason’s friends. They had small children, so we mostly did rides for small children. Also, the app was just generally broken the entire day. As bad as the Disneyland (California) and Disney World apps are, the Tokyo Disney Resort app is worse. I don’t measure a lot of things in life by their iOS App Store rating, but 2.8 stars seems like there’s room for improvement (especially when the five-star ratings seem weirdly astroturf-y!)

When you open the app, there’s an animated intro with clouds that reveal the two parks, Disneyland and DisneySea. The animation wasn’t animating, so I hit the “skip” button, thinking I was just skipping the unnecessary animation. However, the animation is part of a data-loading process, so you immediately get an error saying that because you skipped the intro you need to hit a button to manually refresh the data.

Exasperated sigh.

Surely, if you know that I have skipped the intro, and are providing me with this error message, then you could just load the data instead of telling me I need to do it. What’s this app for if it doesn’t load data?

We tried to book a lunch, but the reservation system wasn’t working. We went and waited in line for the restaurant and then the reservation finally went through so we were able to leave the line and come back later. Presumably the 486DX that handles the reservation system was overloaded by the totally predictable and known quantity of daily park guests.

I also tried to scan my ticket into the app, but it crashed the app. Seriously. Then when I tried to scan the ticket again it told me that the ticket had already been scanned. However, attempting to do anything like order snacks and drinks with the online ordering failed because it said I needed to be in the park. I could not be more in the park.

The only thing I could reliably use the app for was to see ride wait times. Which is not as helpful as you might think without the ability to book rides with Priority Access or Premier Access because it never thought I was in the park.

I don’t know if the boondoggle of an app is the fault of The Oriental Land Company (which is listed as the owner on the iOS App Store, and owns the parks themselves), or The Walt Disney Company, which seemingly licenses at least the app assets to them, but they should both be motivated to at least bring it up to par with the other bad Disney apps.

It’s a fun park though, with an incredible attention to detail, in all aspects that don’t involve the app you need to use.

Lost in Translation

No, not the movie. If only Bob Harris had access to Google and Apple’s Translation features he’d probably have been in a better mood.

Google continues to be my preferred translator, especially when using the Google app and the camera to do live translation. It’s just kind of clunkier to use Apple’s Translate app? I can’t explain it. I’m sure someone that’s an expert on user interaction models could articulate it better than I can but everything seems to be more taps than it should be, and also it seems marginally slower at the actual translation process.

Some of that seems to be where the buttons are located in the interface, but also some of it is Apple’s lack of auto-detect for translation so you always need to pick a language and a direction to translate. It can be good to have those explicit overrides, especially when languages have similar words using the same alphabet, but Kanji, Katakana, and Hiragana are very obviously Japanese.

Also, each tab in the Translate app’s interface needs you to enter the to and from languages. It’s not something you set once in the app. That means Translation, Camera, and Conversation can all have different languages selected, making it take more time if you’re switching translation contexts, but not languages.

Because why wouldn’t I want to translate text from English to Spanish, from Japanese to English, and from French to English, all depending on the mode of the app? WHO WOULD NOT WANT THAT, I ASK?

The camera translation in Apple’s app is slower than the camera translation in Google’s —at least on the Japanese product labels I was translating. You’d want to rotate the curved label to get the text as flat to the lens as possible for the best translation.

I wouldn’t say that Google trounces Apple in the quality of the translation, but Apple does tend to be a little more awkward and less helpful. Take this bag of Kit Kats, for example.

Apple on the left, and Google on the right. Which would you pick?

Apple translated the Kit Kat logo, which is just Roman characters in an ellipse, to “Kitkao” which is … unhelpful. Apple failed to translate the vertically oriented characters and instead just transcribed them.

Google translated it as “increase in revenue”. Based on context, it seems there’s an extra Kit Kat in the bag? Apple also translated the description as “The sweetness of adults” and Google translated it as “Adult sweetness”. Both sound bizarre, but I know from Rie McClenny that Google’s is the literal translation, and it means the flavor is for an adult palate.

This is just one example; you can do your own research here and figure out how you feel about it, but I don’t see any reason to use the Apple app over the Google one in Japan, just as I don’t see any reason to use the Apple app over Google in France. It’s mostly an exercise in seeing what minimum viable translation you can conceivably get away with when you use Apple’s Translate app, and that’s never what I’m aiming for in any context.

Travel Photos

I knew I would need to limit what I was taking, again. That meant my Peak Design 3L sling, that fits my Sony a6400 camera with a lens attached, and one more lens. I put another lens in my carry-on.

Sigma 18-50mm at 37mm.

When I was in Tokyo, I almost exclusively used my 18-50mm Sigma F2.8, mostly with a circular polarizer. My 18-135mm Sony F3.5-5.6 was my secondary lens, but it stayed in the bag. It’s not as good as the Sigma at its widest, and it is a slightly larger, heavier lens than the Sigma.

Monkey over Kyoto. Sony 18-135mm at 76mm.
Sony 18-135mm

In Kyoto I switched to the 18-135mm, with the Rokinon 12mm F2 as my backup. I knew that I would need the extra reach from the 135mm, and I would also find myself in landscape and architecture settings that would benefit from the wider 12mm lens. It’s a completely manual lens, which is why I didn’t want to use it in Tokyo.

Sakura. Rokinon 12mm.
Who says you shouldn't use wide angle lenses for portraits? Rokinon 12mm.
Hokan-ji. Yasaka Pagoda. Kyoto. Rokinon 12mm.
Hokan-ji. Yasaka Pagoda. Kyoto. Sony 18-135mm at 45mm.

Every night I would take the SD card and use the Lightning to SD adapter. The photos were imported into Lightroom mobile. I’d edit and use the flag (quick swipe up or down) to decide if I liked something enough to export it, or if it was something I’d never want to see again (like a shot that missed focus). I’d filter by flag, then select the photos -> Save to my camera roll as JPEG.

The Lightroom HDR editing workflow is, in my opinion as a hobbyist photographer, shit. It can’t export to HEIC/HEIF, DNG, or my original Sony ARW files, so you can’t round-trip. The results don’t always look right, and you have no real sense of how they’ll look on an SDR screen. There is an SDR preview, but it makes everything look like trash, and the editing controls for augmenting the SDR preview never align with what the image would look like if I had just been working in SDR the entire time.

I know that it can make things look less impressive because highlights, and sunlight, won’t pop, but I’d rather have a more predictable output. If you have an Instagram gallery that mixes HDR iPhone shots and non-iPhone shots and you want to convert the iPhone ones, just slide the brightness slider in Instagram’s editor one percent. Instagram hasn’t updated their editing tools in years so changing anything will convert the photo to SDR.

The iOS Photos app does a better job with HDR, because it handles HDR images from the iPhone’s cameras. It’s just a significantly worse photo editor, and organizer, than Lightroom.

There was dust on my lens in a few shots I needed to touch up, and Lightroom is the best place to do it. Retouch, an app I’ve used for iOS for years, has a resolution limit so I only ever use it for iPhone shots, and it bakes in changes when you use it. Pixelmator can touch up photos too, but it also bakes in changes. If I decide to revisit something about the photo I can do it as much as I want to in an app like Lightroom. Photos for iOS has no retouching tool, even though one exists on the Mac (it sucks ass, but it’s there).

I didn’t end up using any new apps on this trip. Things were going by fast so I either got the shot with my Sony a6400, or my iPhone’s Camera app.

Back Again

I’m still grateful I can travel, and that Jason drags me unwillingly to places that I end up enjoying. I can only imagine my trip would have been a little more difficult without an IC card in my Apple Wallet, or Google Maps to help me organize navigating the immense metropolis of Tokyo.

I absolutely want to go back to see, and experience more. If the apps all stagnated and stayed exactly the same, I’d be perfectly comfortable (except for the Tokyo Disney Resort app!) Ideally, things will keep improving, and maybe someday we’ll have AirPods that translate like Star Trek’s Universal Translator, and augmented reality glasses that do text translation when we’re looking at something. I would generally settle for things being 10% better next time.

2024-04-08 08:15:00

Category: text


Designing a Richer YouTube Experience For Your TVs ►

This blog post from YouTube’s Official Blog —authored by Joe Hines (interaction designer for YouTube on TV) and Aishwarya Agarwal (Product Manager for YouTube on TV)— is really something. I am not directing my ire toward Joe or Aishwarya, but to the awful business culture that fosters this kind of hot garbage.

More than ever before, viewers are turning to the largest screen in their homes – their TVs – to watch their favorite YouTube content from vlogs, to video games, to sports highlights and more.

And while watching television has historically been considered a passive experience, one where you can sit back and enjoy your favorite programs, we’re building one that is uniquely YouTube that gives viewers the opportunity to engage with the content they’re watching, even on the big screen. As watchtime on TVs has grown to more than 1 billion hours per day, we’re faced with a fun challenge: How can we bring familiar YouTube features and interactivity to the living room while ensuring that the video remains at the center of the experience?

Translation: They noticed people are watching more videos on their TV, and they want those people to multitask —not as a second screen experience that might not be tethered to YouTube, but directly to YouTube itself, on the TV.

When they refer to interaction it’s either about direct or indirect clicks that turn into dolla dolla bills y’all. This is not “interaction” like a game interface, or some kind of 90s multimedia CD-ROM.

Interaction includes the indirect monetization, which is growing platform engagement by highlighting comments. A person watching a YouTube video on their TV is blissfully unaware that comments even exist.

Highlighting comments is a way to steer the person posting the video into moderating and cultivating conversation, which drives up views. Then there’s the more transparent pursuit of money in the form of increased shopping links.

Lean in, lean back

While the living room has traditionally been a place for “lean back” experiences, we’ve learned through our user research that when a viewer is excited about the content, they like to multitask: they flow between leaning back to watch, and leaning in to enhance their experience. As a result, viewers want a richer, distraction-free TV experience that they feel in control of. With this in mind, our team sought to find a way to add greater engagement to the living room, while still striking the right balance between interactivity and immersion.

Translation: We had a mandate to increase engagement that could be monetized, so we have decided that we reached a compromise we can live with, because we have to.

To gather insights, we tapped into user feedback provided by participants who could interact with these three different approaches to watch different types of content directly on a TV with a remote.

What we learned from our users was:

  • The new design works for features that require equal or more attention than the video itself (e.g. comments, description, live chat) but obscuring the video would be detrimental to the viewing experience.
  • We need to continue to prioritize simplicity over the introduction of additional lightweight controls.
  • A one size fits all solution may not be the best approach, as features such as live chat and video description benefit from different levels of immersion.

This research allowed us to gauge the usability of each prototype and better understand if this overall new design aligned with our goal to enable a more interactive experience on TV.

I like how they stress that the most important feedback is feedback that aligns with the goal. I would be interested to hear how much feedback did not align with the goal.

Other companies have tried these kinds of approaches, just cramming stuff around a video and making it smaller, since WebTV, all on the theory that the video can be encrusted with extra value.

Take the old Twitter Apple TV app launched in 2016, for example. That shrank the video down so a scrolling feed of related tweets could be shoved in there, like YouTube shoving their toxic comments into their TV app.

The real success of doing anything involving social engagement, and the coveted online shopping, comes from a second-screen experience, not the actual TV itself, because interacting with those kinds of experiences with a TV remote sucks ass, and when you use those interfaces you not only make your viewing experience worse, you degrade the experience of everyone else watching the TV.

Not to mention things like personal expression. One of the desired outcomes for YouTube’s system is to use it for sports, but why would chatting using the single account that’s logged in to YouTube, and your TV’s remote, be more desirable than discussing that live event on something like a real-time microblogging platform from a phone? This was part of Twitter’s problem with their TV app (along with other problems).

So a year after Twitter unveiled their tvOS app, they pivoted to permitting a second-screen experience. It wasn’t enough to make any of this desirable, but let’s keep thinking about how the premier place to trash-talk live TV failed to capitalize on integrating a live TV app into their live TV trash-talking app.

YouTube already has a second screen experience where you can link your phone to your TV and control what’s playing. You can read the cesspool comments, visit links in a description, and do your low-quality shopping without having to use a TV remote, or altering anything about your playback.

Except they have a bug that ignores the do-not-autoplay setting in both the mobile app and the TV app and autoplays at the conclusion of a video if you started it from your phone. Maybe they should work on that.

In the new TV app interface —they don’t talk about this in their blog post, but if you look at the video examples you’ll see— you can’t add a comment or reply. There’s a button to “Open the YouTube mobile app to add comments and replies”. This is identical to the current functionality; it just elevates that button to sit side-by-side with the playing video.

You can “access” the text description, which you can already do in the YouTube app on your TV, but you can’t do anything with the description. In their example, the description is some filler text, but almost everyone knows that any value in a description comes from links. Either time code links, or links to a little thing called the World Wide Web. However, the existing TV app has no way to render those links as anything useful to someone watching this on their TV, and no mention is made of any improvements, so I assume it’s like the rest where I’ll be able to read the text URLs to myself but now side-by-side with the video.

So where’s that leave our killer interactive shopping? Well, according to Chris Welch at The Verge:

You’ll see a “products in this video” section appear whenever creators include what’s being featured in their content. But YouTube hasn’t quite reached the stage of letting you complete an entire transaction from your TV; instead, the app will display a QR code that you can scan to finish buying an item on your phone. Not exactly seamless.

Well that’s certainly not a passive viewing experience.

Perhaps they will eventually add a button to open the shopping link from the YouTube mobile app, like they’re doing for comments. However, at that point, why are we bothering to put any of this cruft on the TV screen at all instead of just having one big button that opens the mobile app?

My guess is because it’s entirely optional whether or not you pick up the phone to do that, and maybe that’s what YouTube views as the real problem: the lack of any consumer desire to watch a rolling feed of spam and commerce like it’s the new HSN or QVC. Just watch their demos in their blog post.

The whole thing repulses me.

There are commercial realities to the “creator economy” where sponsorships, ads, merch, affiliate links, etc. are all very important to funding the production of the videos people are idly enjoying, and we should acknowledge that. However, video that’s on equal footing with the commerce and the “engagement” isn’t much of a video; the whole thing becomes undifferentiated noise.

2024-03-14 17:00:00

Category: text


Fixing Apple News

A little while ago I wrote about why Apple News, and Apple News+ suck. I’m very confident that I’m not in the minority with my opinion that it sucks. I would like to detail some things that I think would help to make the News app and News+ service more appealing.

Personalized Top Stories

Abandon the misguided concept that the Apple News team can present an immutable layout of general interest stories with a centrist, non-partisan viewpoint. If a news outlet has risen to the level of ire where it’s been blocked then omit it. Don’t have that gray box, with this condescending text:

Blocking isn’t supported in Top Stories and other groups curated by the Apple News editors. [Publication] is blocked in the rest of your feed.

That dialog shouldn’t exist in the interface. The curation of the editors is not sacrosanct over my value as a reader. There’s no editorial board that can be written to, or people working at Apple News that do any writing. This is simply a highfalutin aggregator!

The editors can weight stories that are of interest to surface in that region, but those weights should be overridden by any blocking, or content filtration, that a user wants to employ. Surface something else to fill the area, if that’s the main concern. It’s not like Fox News articles offer structural support or tie the whole interface together. This space is an inbox, not a publication in and of itself.

Also allow the user to weight what’s important to them. The editors select for breaking national, and world news, with one little round rect for local news at the bottom of Top Stories. What lands in that spot is very rarely, if ever, the top story in local news.

L.A. County's new ballot processing center is in an old Fry's Electronics
This is surely, without a doubt in my mind, not the most important local news story in all of Los Angeles.

I can tap “More local news” and I’m taken to a Los Angeles “topic” which aggregates any publication that has something to do with Los Angeles: the LA Times, LAist, Hollywood Reporter, Variety, NBC4, ABC7, The Infatuation, Eater Los Angeles, etc. However, those publications are not always writing about Los Angeles, so you get state, national, world, general entertainment news, and celebrity gossip. It doesn’t present anything as coherent as the LA Times front page, or the LA Times app.

It does have its own menu for sections, like a newspaper, but it’s still in the purview of the Apple News editors, so I can’t suppress the content-mill firehose of The Infatuation. It gets the same blocked-channel treatment as Top Stories, so I get clumps of gray boxes. Again, “Suggest Less” doesn’t do anything.

The editors do a good job of mixing in sources that are not exclusively for Apple News+ subscribers in the Top Stories section, but that also means it might be from a lower-quality source, or one that I could access more easily from the web, or social media.

People might not remember this feature, but if you have a subscription to a publication outside of the Apple News app, like my subscription to the LA Times, you can authenticate that subscription and see all of that publication’s stories in the app. It doesn’t remove the “Apple News+” banner from the story headers, though, so there’s nothing to distinguish them from the stories I can’t read without Apple News+. Because I can read the LA Times, I would like to weight it higher than other lower-quality, free publications.

Filters

Instead of voting on individual stories with thumbs that don’t seem to mean a goddamn thing, or blocking an entire news outlet, what if we could filter by words or phrases? You know, like in ye olden days? I can filter email; surely I can filter news, which we’ve already established is an inbox.
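Just to show how low-tech an ask this is, here’s a sketch of the kind of filter I mean. None of this reflects any real Apple News API; the article shape and field names are made up for illustration.

```python
# Illustration only: a plain keyword/phrase filter over a feed of articles.
# Apple News exposes no API like this; the Article shape is made up.
from dataclasses import dataclass

@dataclass
class Article:
    headline: str
    body: str
    outlet: str

BLOCKED_PHRASES = ["best deals", "you won't believe"]  # user-defined words/phrases
BLOCKED_OUTLETS = {"Some Blocked Outlet"}              # blocking that should actually stick

def keep(article: Article) -> bool:
    """Drop an article if its outlet is blocked or any blocked phrase appears."""
    if article.outlet in BLOCKED_OUTLETS:
        return False
    text = f"{article.headline} {article.body}".lower()
    return not any(phrase in text for phrase in BLOCKED_PHRASES)

def filtered_top_stories(articles: list[Article]) -> list[Article]:
    # Whatever the editors weighted, the reader's filters win; surface
    # something else to fill the space instead of a gray box.
    return [a for a in articles if keep(a)]
```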

Apple could even jazz it up for 2024 with some ✨machine learning✨ to understand when we don’t want to hear about specific people who always seem to worm their way into the news. Not everyone wants to block specific people, and it hardly seems like it would topple any particular personal brand, but it could pacify some cantankerous people (like myself).

Throw some ML filters at stories that amount to little more than a collection of Amazon affiliate links. Let people who want to see deals-deals-deals see them, and maybe corral them into a specific section. Let the rest of us be blissfully unaware.

That also goes for filtering news stories that do little more than gnaw on a fragment of an interview, or bulk up one quote into some big reaction.

Surely a network can be trained to identify what percentage of a story is out of context crap from the original source, because I can do it. It usually involves skipping anything from ScreenRant or Inverse! If anything, use those stories to weight the importance of the original news item, and then shove these bottom feeders under it, where those publications can look for their crumbs of attention.

None of this seems like impossible Jetsons technology. Artifact was doing stuff in this space with rewriting headlines. Jay Peters, writing last June for The Verge:

Clickbait headlines aren’t just annoying for Artifact users, though: they can also mess things up for Artifact’s recommendation systems. Sometimes, clickbait headlines can tell the systems that “you’re interested in things you may not actually be interested in because you’ve clicked to find out some key bit of information that was left out of the title,” Systrom says.

Systrom showed me a demo of the AI-driven rewriting process. When a user marks something as a “clickbait title” (you can find the option by long-pressing the article in your feed), they’ll see a little loading animation show up where the offending headline used to be, and then the new headline will appear. Next to the headline, there’s a little star indicating that it’s not the original title. Artifact isn’t able to rewrite headlines in articles themselves; it can only rewrite them from the feed.

Artifact failed because they couldn’t grow, and didn’t have a clear way to make money. Apple doesn’t have the same financial concerns Artifact had, obviously, but Apple would be concerned with profitability. Which is why the personalized filters could be for News+ subscribers. Some degree of server side quality filtration would be beneficial to even non-subscribers (as it would be very difficult to pitch someone to spend more if the base experience was as poor as it is right now).

Newsletters

Apple will send out email newsletter digests curated by the same people who curate your Top Stories. I’m sure someone turned that feature on on purpose, and didn’t turn it off.

People love newsletters! Apparently! They can catch up on stuff anywhere they can read their email, and they can organize it with the same tools they’re used to for their day to day life.

There are also people who have begrudgingly picked up an assortment of newsletters over the last few years of the newsletter expansion and who don’t relish reading their newsletters interspersed with their other email. I’m in this latter group.

I already use iCloud Hide My Email addresses to subscribe to newsletters for some layer of privacy; how nice would it be to have an iCloud email address that ingested newsletters into Apple News? Not to be interspersed with Stuff Magazine, and all that garbage, but in a newsletter section.

What if we did the exact same thing for RSS feeds?

We all have sites we value reading more than some of the detritus floating in Apple News. If Apple wants us to spend more time in the app, put the stuff we value reading in the app. We’re not always on a quest to browse for just whatever happens to be there. We all already have our own sources and habits.

Create a home for those. In Safari’s Reader View, add a button to bring something into News, either by finding the feed tag and importing the whole feed, or by pulling in just that one-off item, instead of leaving news items stranded in the browser. That publication might even already have that article in Apple News, and then News could reduce the friction of getting to it.

Instead of someone feeling trapped under 100 pounds of sweltering JavaScript and boiled alive in autoplaying video ads, they could see a better reading experience, and those publications could see conversions to Apple News, and they could figure out if they wanted to be in Apple News+.

It’s very difficult to get people to want to open Apple News, which means it’s more difficult to convert them to News+ subscribers inside there. The publications in News+ aren’t especially interested in promoting News+ because they benefit more from your direct interaction on their site, or from you subscribing to their newsletter. They want to convert you into a subscriber for themselves. One of the ways that publications have had success with this is with gift links, where a story can be shared with a filthy non-subscriber.

Apple News+ should have gift links. Instead of wincing when someone sends me a News link, I could be interested because it’s a gift link to read a story I otherwise would not. I could then experience the new and improved News app (seriously, Apple can’t skip the rest of what I’m saying and just slap gift links on the existing app, that won’t do anything).

It’s not rocket science. No one wants to share links to Apple News because the app sucks and it hijacks links and traffic. What if the app was good and the link made you happy and grateful? What a comeback.

Better Layout

Flat out, we need to get rid of the round rectangles that are too small to show an entire headline. That can’t be a thing. I don’t care if you need to do a brick layout, or shrink the image that accompanies most stories, but I’m not tapping on some mystery-meat headline that’s cut off with an ellipsis. We certainly don’t need towering images that take up the entire screen at the expense of clarity. If Apple wants to evoke print publications with those “spreads” in here, they should also evoke print publications in having the entire headline.

The News app, and News+, operate from the assumption that you’re just going to start at Top Stories and scroll down forever. When you drill into an article or publication, it’s not like macOS column view, or iPod navigation, where there’s some logical breadcrumb trail, because you might have picked a “section” from a menu.

The sections that are crammed vertically on top of each other are not ordered by you, or set based on your preferences. The “For You” section is stocked with stuff that’s never been for me. A parade of suggested topics and sections shuffles up and down and can only be dismissed outright.

Do I want to follow “Technology”? Uh… I mean… I guess that seems like something I would like to follow, but it seems very broad, and a lot of it seems poorly written? I can’t increase that section’s size, or decrease it, but I can leave the infinitely chopped up vertical to drill down into the topic, and be presented with the “For You” for each topic.

A screenshot of a bunch of crappy stories nominally about technology.
Sigh.

Which in “Technology” is still overly broad. Assuming I liked any of what I’m seeing here, I can’t rearrange the “Today” view to include more of it, and I can’t navigate to this topic on the “Today” view because it has no sections menu, like the publications and topics like “Local news” do. I can’t just see an outline of the whole shebang.

If you want to jump directly to anything that may or may not be on this page you have to go to the “Following” tab in the interface which shows you Special Coverage, Favorites, Local News, Channels & Topics, Suggested by Siri, and a grab bag of everything else that isn’t Sports.

What if I could collapse and expand sections? Maybe a mini-map of all the sections? Reorder and scale them like widgets on the home screen? You can reorder the publications under the Channels & Topics section but… it doesn’t do anything other than move them in this list, not your “Today” view.

Imagine if the order of the page changed based on time of day and I could choose to see important breaking news in the morning, topic-based writing during the day, and more thoughtful, ruminative pieces in the evening.

What if I could change the button for “Sports” and “Audio” to literally any other topic? Ivory, Tapbots’ Mastodon client, has an innovative feature to long-press on one of those bottom tab buttons, and swap it to something else so you can see mentions, faves, bookmarks, etc.

Eddy Cue could have like twice as many sports crammed into that “Sports” tab at the bottom as long as I could pick something else, like take me directly to the LA Times, or that baffling “Technology” topic.

Pushing users entirely to an uncontrollable, vertical scroll in “Today”, or making them dig in “Following”, isn’t providing those readers with an experience that’s an improvement over other news sources. Despite the worst ad-tech out there, it’s often more likely I’m going to run into something I want to read on a site junked up with ad-tech than in the difficult-to-peruse, difficult-to-personalize abyss of News. It certainly doesn’t sell people on News+, despite the high percentage of the interface dedicated to teasing News+ articles.

Simply Fix Journalism

I know that I’m pitching changes with that “just do this” attitude that people love to be on the receiving end of, but someone’s gotta change something in that app. It doesn’t have to be any of the stuff I’ve said, but these areas I’m hitting on need some kind of work. There are financial ramifications to any kind of change, and the folks in charge of Apple News and Apple News+ might value getting new publications onboard more than improving the app. However, improving the app improves all the good publications, and shows people that they’re worth paying for.

Instead of chasing old ambitions around editors and institutions, I would like to see an approach that takes the reader into account.

2024-02-26 14:15:00

Category: text


Joshing Ya

Joshua trees at dusk.

One feature of the Vision Pro that reviewers and buyers alike seem to really appreciate is the ability to dial in an environment. Casey Liss joked on Mastodon:

How long until a YouTuber goes to Haleakala, Joshua Tree, Yosemite, and Mount Hood, and finds the exact points the Vision Pro environments were captured?

Bonus points for the moon, obviously.

I’m sure someone is going to do it (definitely not the moon part), but the environments aren’t purely photographic things. That’s not a judgement, just that even if you got as close as you could to what was depicted you’d still never get it exact, so don’t sweat it.

It does make me think about how much I love the places on the environments list that I have been to. When I did my demo I didn’t really feel like I was in those environments. That’s only because I’m viewing that environment in the context of my own experience, and faulty memories. I’ve been to Haleakalā, Joshua Tree, and Yosemite, but I didn’t feel as direct a connection to those places as I would have liked. The exception being Joshua Tree, which was the closest to matching my feelings about the place even if I didn’t feel like I ever stood in that exact spot.

It’s like looking at a good publicity photo of places you’ve vacationed as opposed to a personal photo you took, which has attached meaning but isn’t as cleanly executed or specific.

Maybe we should all go to Joshua Tree? I’ve been a lot, but you’re probably some East Coast bum, like Casey, and haven’t really thought about going. Let me give you some travel tips.

Getting to Joshua Tree

The best months to go to Joshua Tree are between October and May. The summers are simply too hot. It can get very, very cold, especially at night, in the winter, but it’s much more comfortable. It can, occasionally, snow, but you really have to race to catch a glimpse because it won’t stick.

Snow! December 2008

Joshua Tree National Park, and the adjacent Yucca Valley, sit above Palm Springs and the rest of the Coachella Valley. You can fly in to Palm Springs and rent a car for the short drive up to Joshua Tree, or you can travel from Ontario (California, not Canada) International Airport, or one of the further Los Angeles area airports, and sit in a snarl of traffic for hours.

Even if you’re local enough that you’ll want to drive to Joshua Tree, you’re better off not going there and back in the same day. Definitely plan to spend the night in Yucca Valley. There’s no great, big lodge inside the National Park like some of the others have, but you can camp, if you’re the weird type of person who wants to camp in a desert and use chemical toilets.

A little wooden building with a sign that says Pioneertown Motel.
Pioneertown Motel check-in. January 2023.

Pioneertown Motel is a great place to crash for those funky desert vibes, and it even has Joshua trees on the property. It’s great for stargazing at night because it’s beyond the light pollution (and pollution-pollution) of the LA Metro area.

Night time. A Joshua tree is in the center of frame, with a low motel dotted with orange light sitting in the distance. The sky is dotted with stars and a faint orange glow on the horizon.
Pioneertown Motel at night. January 2022.

Pioneertown itself is a collection of old west facades from when the area was used to make Westerns. They have stores selling various tchotchkes, but it’s not a grandiose theme park.

For food you have The Red Dog Saloon, and Pappy and Harriet’s. Both are in Pioneertown itself, and easy walks. Taking the road back to 29 Palms Highway lands you at Frontier Cafe which is a good (but potentially busy!) spot to get coffee, breakfast, and sandwiches to take with you to the park, or just stroll around the shops at the highway intersection.

The Park

Arrive at Joshua Tree National Park early and use your National Parks pass to enter. There are three entrances, and only the most popular West Entrance on Quail Springs Road has an entry booth. They usually have one lane set aside for passholders, but when the non-passholder lane backs up, it backs up down the road that leads to the park and you’re just stuck.

The other two park entrances don’t have booths monitoring entry because they’re far less popular, but you must display your pass or pay for entry. Sometimes we’ll start at Cottonwood Springs Road off the 10 and wind through to Quail Springs Road, or vice versa. Usually we’ll just go in and out of the Quail Springs Road entrance since it’s the closest to where we’d be staying, and other services. Those other entrances do have visitor centers.

Cellphone service in the park is almost non-existent except for the area by the West Entrance, and the Quail Springs picnic area, so get your offline maps downloaded.

Hikes

My boyfriend’s favorite (and punishing) hike is Ryan Mountain Trail. You are completely exposed to the elements so remember your sunscreen, and windbreakers in the winter. The view from the top is fantastic, but it’ll really wipe you out to get up there. Unless you’re an avid hiker you’ll probably be done for the day after that. This is also a popular trail so you’ll be stopping to let people pass, or listening to assholes lugging bluetooth speakers on the trail.

A view down into the Park from a rocky, desert peak, looking at the scabrous valley floor below. The sun beats down on it all.
The peak of Ryan Mountain looking back towards the west. You can't even see the Joshua trees from this height, they're just dots. January 2023.
A view down into the Park from a rocky, desert peak, looking at the scabrous valley floor below. The sun beats down on it all. A Joshua tree is in the foreground to the left.
Panorama from most of the way to the peak of Ryan Mountain. November 2020.

My favorite stuff is all the stuff that’s not really a hike. Which is a cop out, I know, but it’s nice to just experience the place without hitting cardio goals. That includes the Cap Rock Nature Trail where you meander around a winding trail that has little placards explaining what various plants are, and there are huge boulders up around you.

The Cap Rock Nature Trail.

The Baker Dam Nature Trail is similar, if a little larger, and includes an old, not-that-impressive dam.

Minerva Hoyt Trail (but mostly just its parking lot) is probably the largest expanse of Joshua trees you’re going to see. You’re just surrounded by the towering things.

Joshua trees at sunset with large boulders behind.
Minerva Hoyt parking lot. January 2020.

Keys View is far out of the way, and really only offers a vantage point looking down at the Coachella Valley and San Jacinto Mountains. That’s not a bad thing, it’s just a long drive and then you’ll get back in the car and drive back the way you came, so you might not be satisfied if you have limited time.

Another quiet spot for a snack, or just a rest, is where Live Oak Road comes off of Park Boulevard (Quail Springs Road turns into Park Boulevard inside the park). There’s nothing impressive here but it’s quiet because most people don’t have a reason to stop here. You might also see some ground squirrels.

My favorite spot in the whole park is the Cholla Cactus Garden. It’s closer to the Cottonwood Springs entrance than the West Entrance so it’s quite a drive, but worth it, especially in the late afternoon when the sun starts to get low, or just dips behind the mountains. This spot in the park is covered with cholla cactus (and bees, a lot of bees), which gives you a very different vibe from the towering Joshua trees. I’d love it if Apple captured the Cholla Cactus Garden as an environment for the Vision Pro, but only if you could walk around its little path.

Cholla cactus fill the lower half of frame and spread toward the horizon.
Cholla Cactus Garden. November 2020.

Maybe let us change viewpoints along the path? Like Myst, or the real estate web site version of Myst: Matterport. Anyway, walking through something more human-scale is part of the experience.

Hike. Die. Repeat.

You can spend a few days in the Joshua Tree area going in and out of the park and vibing to the funky desert. Have a spiritual awakening, or whatever, I don’t care. When you finish you can pop down to Palm Springs for tiki cocktails and a swim before you head home.

I’m not sure it will increase the connection you have when you look at the Joshua Tree environment in visionOS, but it’ll certainly give you a more three dimensional view of the place.

2024-02-22 10:15:00

Category: text


My Apple Vision Pro Demo Experience

When I was running errands I stopped into an Apple Store and did the Vision Pro demo. I doubt my impressions of a product released weeks ago, and thoroughly reviewed by experienced professionals, are of any interest but I provide them free of charge for those that have nothing better to read.

As per usual, the flow in the store is really unintuitive. I arrived with my QR code and the greeter sent me to Angela. Angela scanned the QR code and then told me to wait by the MacBook Pros along the wall. That immediate wave of anxiety about being put in an invisible DMV line overcame me, but my wait was really brief. Eric was there to do the demo with me, but Angela was around supporting Eric. Seemed like he was being trained on the process.

Eric handed me an iPhone to do a face scan, and for some reason it didn’t want to read my face when I turned my head to the left unless I moved the iPhone. I’m not sure but I think the backlit store display behind my head might have caused a problem, because every other direction worked both times I did the scan. It did not instill confidence.

Before my appointment I had selected that I use eyeglasses in the app, and so I presented them to be scanned. My prescription is not especially strong, and I can see fine without my glasses but experience eyestrain over time, so I might as well make sure things are as sharp as sharp can be for the demo. The machine that scanned my eyes produced a QR code. Angela was standing to the side during this, and mentioned that it was so Apple didn’t save or know my prescription. Which I guess is comforting, but seems kind of surface-level privacy if I buy prescription inserts? Honestly, I’m just in awe of the QR codes to get more QR codes to get more QR codes.

Then back to the stools where we sat. Eric made idle chit chat about what made me want to do the demo, then the device was brought out on a tray with the inserts. The insert pairing didn’t happen until after calibration, which I thought was weird, but whatever.

Part of the reason I didn’t want to do this was because the thought of a shared face-product creeped me out, but the device appeared to be clean, and not like one of those abused demo products tethered to a display at a Best Buy where the only concern is theft. I’ll update this if I get pink eye.

Eric told me, and then demonstrated, how to pick up the device with my thumb under the nose bridge, and then four fingers on the top and then you flip it. Which… this is just too precious a gesture for a device with this heft, but when I went to pick it up a finger brushed the light seal too hard and it popped off so I guess those magnets really are as weak as everyone says. We can’t have clips? Tabs? A little space-age velcro? It has to be weaker than those flat, business-card fridge magnets?

I’m not convinced that the face scan picked the right light seal for me (it felt like all the weight was on my brow, and the nose and cheeks kept having light poke through if I raised or furrowed my eyebrow). I tried moving it up and down on my face and tightening the head strap knob. It was never comfortable, but also never felt rock solid on my face. When it was higher on my face things seemed to vibrate because of how insecure the lower portion was; if it was too low on my face it was impossible to see clearly.

The inserts needed to be paired, and there was the calibration stuff, and that was all fine. I had expected the passthrough to be dim, compared to the brightness of the inside of an Apple Store, and it sure was. I didn’t see any video artifacts, but it felt like I had sunglasses on that were tamping down the luminance.

What I thought I was prepared for, and I wasn’t, was the intense tunnel effect. In the video reviews I watched, attempts were made to create an approximation of the binocular-like effect of having one’s field of view constrained, but in my opinion the simulated views have been too generous. What really added to it was the chromatic aberration and how steeply the sharpness fell off from the center. How much of that falloff was from the foveated rendering, and how much was softness from my own vision, or these particular Zeiss inserts, I can’t say. It just wasn’t sharp outside of this cone in the center of my vision.

This was most notable when we got to the part of the demo where Eric had me open Safari and he relayed the scripted response about how sharp and readable the text was, and how it could be used for work … and I did not share these feelings. The text I was looking directly at was clear enough, but three lines up, or down, was fuzzy, and likewise side-to-side. I never felt like everything was blurry, but it definitely made my view feel smaller than if I was looking at an equally large real-world billboard of text.

The chromatic aberration was very distracting when watching media, but I can’t say how much of that is my sensitivity to such visual phenomena, or if maybe I would get used to any of it.

The Spatial Videos also bothered me, but I know they’ve had rave reviews from almost everyone. I see the shimmer and sizzle in stuff that doesn’t appear to be matching, and it’s not “ghostly” like I’ve seen some describe. That video where the kids are crouched in the grass and we see the underside of the kid’s sneakers had a glow on the sole of the shoes and the grass. According to Eric that was shot with a Vision Pro, which means matching cameras, so I have no good explanation for the artifacts I saw, just that my eye was drawn there. Without applying a rigorous round of tests, I don’t know if the artifacts I felt like I was seeing were in the recorded content, or maybe something else. My first step would be to compare left and right eye views, but alternating blinks weren’t enough in the demo.

Similarly the video shot on the iPhone 15 had a lot of the expected shimmer in bits and pieces throughout. I know that this is unobjectionable to every other human that’s looked through these lenses, so I’m just cursed to notice it.

Mount Hood was impressive, but that ever-present green dot for the Control Center really ruins the mood. Also unlike the static panoramas, everything that’s moving has an air of artificiality to it where it seems a little too perfect. Better in appearance than a video game, but the kind of things that underlie a video game environment with looping stuff. I wish when I moved my head around that there was something closer to me that would parallax so it felt less like a nodal camera move. No matter how much I moved my head it felt like everything was on a dome and all I was doing was rotating the center of the dome, never moving from the infinite point of the center.

The Super Mario Bros. movie clip in stereo wasn’t a good showcase of stereo or of the cinema viewing experience. It felt like a very large TV 5 feet away from me. I didn’t feel like I was in a theater, and nothing in the clip wowed me. I’m not sure why this particular movie was selected for the demos, as it doesn’t really shine.

My demo did not include the dinosaur. I don’t know why, and didn’t want to derail the scripted experience by asking. Maybe the dinosaur was down for maintenance?

The sizzle reel of immersive content was as impressive as everyone has said —in particular the sharks. I wonder how realistic it is to invest in shooting material like this just for these headsets, but that’s not my problem, and hopefully not a deterrent to Apple recording more of this.

Eric wrapped the demo and started to tell me how to take it off, but I asked if it would be OK to see some of the other environments beforehand. He graciously let me (no one was waiting for a demo Wednesday morning and we’re well beyond the crush of launch day) so I pulled up Haleakalā, Joshua Tree, and Yosemite.

Haleakalā is visually very impressive, but interestingly didn’t make me feel like I felt when I was at Haleakalā many years ago. I had a similar experience with the Yosemite one, but that might have been more seasonal. There was snow in parts of Yosemite on my trip to it, but it wasn’t snow-covered like this. I just didn’t associate it at all with my memory of the place.

Joshua Tree was the most evocative of the real place, and I’ve been there many times, but I couldn’t really get my bearings. It felt like somewhere in the park, for sure, but one of the things I always think about is the road cutting through the park that’s always very visible (Joshua trees do not have lush, view-blocking canopies). It rang the most true, so if I were to pick one to use a minimalist markdown editor in, it would be Joshua Tree.

I took the Vision Pro off like how I had been instructed to put it on, and Eric asked if I was considering buying one or if I was just looking. He didn’t do it in a car salesman kind of way, but the way in which he needed to know if he had to have someone fetch one from the back. I declined, and he offered me the QR code that had the configuration from my demo in case I ever wanted to come back and do another demo, or buy a Vision Pro later. I accepted this final exchange in the QR code ballet and thanked him once more before I headed out.

It’s a very interesting product, and I’m glad that I experienced it first-hand instead of just speculating endlessly about why it wasn’t for me. I don’t envision myself ever purchasing this iteration of the product, but I don’t think anyone’s a fool for buying one if they have the disposable income. Perhaps if Apple adds just one more step in the Apple Store process that uses a QR code I’ll reconsider.

2024-02-21 14:15:00

Category: text


No Simple Answers In Stereo

There was some continued back and forth on Mastodon about stereo conversions. Mac Stories contributor Jonathan Reed asked a couple questions:

What’s your view on the best converted movies? Do you think they hold up just as well vs (non-bad) native 3D movies?

I am not picking on Jonathan, but crediting him with a question that seems very reasonable. It seems logical to ask for an example of what’s working, but that’s much more difficult to do than it sounds. It’s kind of like proving a negative (even though this is a positive?)

If there is a best conversion, you’re unlikely to be aware of it at all, because the audience usually only remembers technical errors, or discomfort. There’s nothing outwardly impressive about a good conversion, or good native stereo, and anything that was held up as a good conversion would be picked apart with intense scrutiny to prove that it’s not actually good.

Add on top of that the point Todd Vaziri and I were trying to make in that thread, and in our feedback to the Accidental Tech Podcast, that there are a variety of methods employed in various shots in various movies. It is not as homogeneous as it appears to be in untrustworthy marketing, or that silly “real vs. fake” site.

There’s no binary bit on the movie that flips if one shot in a native 3D movie is a post conversion shot, and no set percentage of shots that need to be rendered fully in 3D for an animated feature, or a blockbuster with 2,000+ effects heavy shots, to count as the real thing.

I know that is deeply unsatisfying as an answer, and the follow up question would be for movies that don’t work well. For professional reasons I wouldn’t ever spell that out.

Ultimately, I know that just saying that it’s nuanced and complicated is not very helpful or informative to people that want to understand stereo. For those that want a quick answer on whether a movie is worth watching in 3D on their Apple Vision Pro, there’s nothing so simple as a list.

The best I can do is talk through common problems in stereo. To do that we need to talk about some terms. To do that I’m going to need to bore the fuck out of you.

Native Stereo Photography

This is shot with two cameras. Usually this involves a cumbersome rig where the cameras have to be exactly aligned, have matching apertures, matching focal distances, and are slightly physically offset. To get them close enough together they’re often arranged with one camera pointing straight up, and it gets its light from a beam splitter: a semi-transparent plane of glass that lets light pass through to the main camera, but also reflects it down to the vertical camera. This is an enormous pain in the ass, and it’s very easy to have something be just a little off in a way that won’t be clear until later.

When the stereographer and director finish shooting, they can adjust the convergence by horizontally transforming the photography, which pushes and pulls things in and out of the screen depending on where the left and right eyes converge. However, they can’t adjust the interaxial without throwing away one of the eyes and doing it over with conversion. That means they might be more conservative in all of their choices to reduce the chance that there’s an error.
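If it helps to picture what that horizontal transform is doing, here’s a minimal sketch, assuming the two eyes are already rectified and loaded as numpy arrays. The function name and the black padding are mine, not any studio tool’s, and which sign reads as nearer or farther depends on your disparity convention.

```python
import numpy as np

def adjust_convergence(eye: np.ndarray, shift_px: int) -> np.ndarray:
    """Horizontally translate one eye's image to change where the views converge.

    Sliding one eye relative to the other pushes the whole scene toward or
    away from the screen plane; it cannot change the interaxial, which is
    baked into the photography. Vacated columns are left black here; in
    practice they'd be cropped or filled.
    """
    out = np.zeros_like(eye)
    if shift_px > 0:
        out[:, shift_px:] = eye[:, :-shift_px]
    elif shift_px < 0:
        out[:, :shift_px] = eye[:, -shift_px:]
    else:
        out = eye.copy()
    return out
```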

Things to look for are misalignments: the left and right eyes having an angular difference between them, or skewing slightly. Your eyes are looking for horizontal disparity, so vertical shifts mess it up a little. This is abundant in iPhone 15 Pro Spatial Videos because of Apple’s attempts to compensate for the mismatched lenses.

Another big thing is color shifts from the beam splitter. Sometimes that could manifest as a constant shift, or it could be transient if the camera rig is moving and the light catches differently. It’s possible to color correct the views to get a closer match but uncorrected differences might appear to shimmer when your brain processes the slightly different hues and values.

Specular reflections. Think of bright pings of light on glass or chrome, often from a distant, but bright light source. One eye might get the ping of light and the other eye doesn’t. A mismatch like that can appear to glow, or shimmer, and could be uncomfortable to look at. To correct for this in native stereo the ping might be artificially copied and offset to the other eye, or the value of the ping might be knocked down so it doesn’t draw attention.

If you have a visual effects shot where native stereo plate photography is combined with rendered assets you might see issues that you wouldn’t get if it was a post conversion. Like a bluescreen or greenscreen shot where the work done to extract the photographic element from the screen color doesn’t match exactly between the two eyes. A common issue is flyaway hairs, those thin wisps of hair that are always difficult, which could be in one eye but not the other, or trimmed in an odd way.

Flyaway hairs in non-VFX native stereo shots should always look pretty good, but depending on how deep the background is behind them you might be surprised to notice them more than you would in a 2D movie.

This doubling of work - and the need for it to match - is also what makes something like wire removal paint much more difficult. It’s easy to make each single view of paint be internally consistent and work, but then to make sure those paint adjustments match between both photographic plates is a pain that you don’t have to deal with in conversion.

It used to be very difficult to get 3D matchmove solves that were rock solid for both eyes. Meaning something could appear to float away where native stereo photography and CG met. Very rare, but maybe if you’re watching something old things might seem to drift or breathe.

Another thing 2D VFX artists take for granted is being able to use masks/mattes/rotosplines - and only having to do it once without thinking about where the matte sits in depth. The matte could be used to grade the background, or it could be to help extract a person. Those rotosplines need to be done for two eyes, and they need to match the plate photography, and their companion spline, including motionblur. A soft mask extending back along an angled surface will need to have depth that matches that angle in the other eye. So you end up doing the post conversion kinds of steps on the mattes applied to your native stereo left and right images to make them match and sit in depth, but are constrained to the native stereo plates as well.

Native Stereo Renders

Native stereo renders in animated movies, or for shots in a VFX heavy movie that don’t have photography, have their own pros and cons. Even those “all CG” shots are not always fully rendered in stereo for left and right eyes. The flat version of the movie will be done while the stereo version of the movie lags behind a little bit. That means that rendering the offset eye might reveal issues where an old version of a shader was used, or an asset changed since the original shot completed. It can be much more of a puzzle.

People also can do anything they want to with their cameras because they are no longer constrained by physics. That means you either get mind-bending stuff, like stuff sticking out of the screen that would really be a considerable distance away from the audience, requiring enormous interaxial camera offsets, or sometimes they’ll just make it really flat, even though they have the ability to do whatever they want.

You also still have some of the same issues presented by bright specular pings being in one eye, but not the other, but also that they might sizzle because bright, distant light sources need more raytracing samples (tiny thing far away gets more missed rays than hits).

You might be like, so what? Just turn up the samples, right? That’s easier said than done in some cases, especially if the 2D version of the movie is done already, or the rendering engine just can’t resolve some very bright, distant point of light with enough samples that won’t take 3 months to render. The sample noise will sizzle differently between the two eyes and appear to glow. There are ways to cut out pixels from the other eye, or median filter it, or what have you, but if it’s uncorrected you’ll see sizzling pixels.

Native stereo renders do have one fun trick and that’s the depth map (Z) channel that is normally used for depth of field focus effects. It is an image where every pixel corresponds to how far away something is from camera. It can be used to create an exact offset based on the stereo camera pair. This makes it kind of like post conversion where fake depth is used to offset 2D data from one camera view to the other. That means you can offset things like rotosplines, or other 2D elements, to match the depth of your 3D exactly. I do mean exactly, since it will be at exactly the depth from the depth channel. Effectively like using a projector from the location of your left eye camera, and then viewing it from the location of your right eye camera.
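As a rough sketch of how a depth channel can drive that offset, assuming a simplified pinhole-style disparity model (offset scales with interaxial, falls off with distance, and is zero at the convergence distance) and a single-channel element like a matte. The names, the units, and the crude nearest-pixel warp are all illustrative; a real pipeline does this inside the renderer or compositing package.

```python
import numpy as np

def disparity_from_depth(z: np.ndarray, interaxial: float, focal: float,
                         convergence_dist: float) -> np.ndarray:
    """Per-pixel horizontal offset (in pixels) derived from a depth (Z) channel.

    Points at the convergence distance get zero offset; nearer points shift
    one way, farther points the other. Simplified model, assumes z > 0.
    """
    return interaxial * focal * (1.0 / z - 1.0 / convergence_dist)

def offset_element(element: np.ndarray, z: np.ndarray, interaxial: float,
                   focal: float, convergence_dist: float) -> np.ndarray:
    """Forward-warp a 2D element (e.g. a roto matte made for one eye) into
    the other eye using that eye's depth channel.

    Nearest-pixel warp for clarity; any revealed holes are left black.
    """
    h, w = z.shape
    disp = np.round(disparity_from_depth(z, interaxial, focal,
                                         convergence_dist)).astype(int)
    ys, xs = np.mgrid[0:h, 0:w]
    new_x = np.clip(xs + disp, 0, w - 1)
    out = np.zeros_like(element)
    out[ys, new_x] = element[ys, xs]
    return out
```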

This also means that parts of the render from the left or the right eye can be offset by the depth data to patch or supplement renders from the other eye. Think of it like sneaking in a little conversion. This can save render time, and help with various problems matching the eyes.

It can be as specific as using a render for parts of a character (eyes, fur, screen-right edges), or parts of lighting components (just the specular, just the reflection, refraction, or just the diffuse).

To a purist, it might sound like anathema to mix and match, because a purist would assume that the highest quality is from matching renders. Really, most people would fail a Pepsi challenge on fully rendered shots vs. hybridized shots. The philosophical concerns don’t matter as much as the final set of images being coherent.

For this reason it is absolute bunk to call all animated movies “real” 3D, or to be able to claim from your seat in an audience what’s rendered from scratch and what’s not.

Post Conversion

Conversions are popular because they require less time on set, use more flexible camera setups, and cause fewer problems for the crew that’s mainly concerned with the 2D version of the movie. That also means they can adjust the depth of everything ad infinitum. That can mean a more creative, and thoughtful use of stereo because they can evaluate the results and change it in a way they can’t do easily on set, where they are more likely to be conservative, or stuck with what they shot.

Conversions are also associated with people looking to make a quick buck on ticket sales, and reducing labor costs on the conversion to get as much profit as possible.

That means it’s likely you’ll see the places where conversions fall apart because of time and budget constraints, which was very common in earlier post conversions when studio execs felt like they needed to rush. You might recall movies where only part of the film was in stereo, and they wanted you to take on or off your glasses in the theater.

There was also the quality issue from the assumption that people were going to watch these in theaters where they couldn’t hit pause. The home video part never panned out - but maybe it will with products like the Apple Vision Pro.

Major issues stem from the approach a conversion house takes when presented with 2D footage. Most places will create a 3D space in the computer and camera in order to “accurately” produce the offset eye. I’ve heard of some places where people just cut out and move stuff around to wherever it feels right for them, or use image based algorithms to create a fake depth map to drive the stereo offset, but the map might have holes and errors where the algorithm guessed wrong. I’ve never worked at a place that did these things so I don’t have insight to share about their thought process, so let’s move on to placed-in-3D stuff.

To do that, matchmove needs to be done where the camera is solved for, and elements in the shot get rough geometry. Since this often needs to happen for the 2D VFX portion of a movie, this is considered some synergistic cost savings. The plate can be projected on that rough geometry in a 3D software package, and then an offset camera can be dialed for the interaxial and convergence values that feel right for the shot, and in the context of the sequence. That projection on to the hard geo is really just to dial things, the geo won’t be used raw, it’ll be cut by rotos, blurred in the depth channel, etc. to make something that’s softer than the hard facets of rough geo.

The photography does get rotoed; however, only one eye needs to be rotoed, not a complicated matching pair. The photography also needs some degree of paint work to be done to it to clean up the area that was occluded by the foreground. This can be as simple as painting out a sliver, or halo, around where a character occludes the background, or it can be a more extensive affair.

That means the same paint needs to be used in both eyes to account for any minor variance between the paint and the original plate. While the audience could never tell that the background was painted (really, I absolutely promise you can’t tell because shit is painted all the time in regular-old-vanilla shots and people truly don’t have an inkling) the audience can tell if there’s only paint in one eye’s view for the same reason as it would be a problem to have mismatched paint in native stereo.

Paint removal includes things like flyaway hairs which will be painted out and rotoed, or luminance keyed, to bring them back. They will match exactly between the two eyes, unlike mismatched keys of native stereo, but they will need to be placed in depth.

If you want to tell anything about the quality of a conversion, look for those flyaway hairs. They should be there, and they should also be at a sensible depth relative to the rest of the hair, not way behind, or in front of, the actor.

The actor should have internal depth, which is usually derived from the rough matchmove geometry. They should have a nose past their eyes, and their ears and neck should be back. They should never feel like a cardboard cut-out unless they are far away from the camera, like background actors, or a really wide shot of them in an environment.

Speaking of environments, the two biggest problems there are highly reflective and refractive surfaces. If there’s a shop window, with reflections, and the name of the shop painted on the glass, the reflections should not be at the depth we see through the window or they will look like they’re at the depth of the walls and surfaces inside the shop. They need to be at the depth of whatever they’re reflecting. That means the reflections must be painted out, along with the lettering on the glass. The lettering needs to then be placed over the shop interior at the depth of where the glass plane is on the facade, and then the reflections need to be added (reflections are additive, but that is a rant for another day). Then that reconstructed window is used for both the left and right eye. No, you won’t be able to tell that work was done because you, in the audience, don’t have the 2D version of the movie to look at and compare it to, and the work will all be so internally consistent that it wouldn’t register for you to check without the knowledge that this kind of work needed to happen. In the abstract this knowledge might cause philosophical conflict — unclean! Impure! But I assure you the director isn’t anywhere near as precious about this as you might think.

If a conversion house omits that level of work, and just lets the shop window be flat to the depth of the building facade, or lets everything in the window go deeper, including the reflections and lettering, then it’s going to look wrong to any casual viewer.

This can also be applied to things like shiny cars, or reflective bodies of water.

As for refraction, that will be most obvious with things like thick, curved glass, and glassware filled with liquids. Bottles, wine glasses, thick reading glasses, etc. The edges of the glass, where the index of refraction creates that defined shape that’s almost solid, should be at the depth of the glass in 3D space. The interior core of the glass, where you see through bent light of the objects behind it should be closer to the depth of the object behind it (accounting for any magnification). Then there should be an artful blend from that edge depth to that core depth in whatever fake depth channel is being used. Anything like reflections should be painted out and added on top; like the shop window.
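A tiny sketch of that blend, assuming you already have a matte isolating the defined rim of the glass, plus depth values for the rim and for whatever is seen through it; the names are made up, and the Gaussian softening is just a stand-in for however a conversion house actually dials that “artful blend” by hand.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def glass_depth(edge_depth: np.ndarray, core_depth: np.ndarray,
                edge_matte: np.ndarray, softness: float = 4.0) -> np.ndarray:
    """Build a fake depth channel for a refractive object.

    The rim of the glass stays at the glass's own depth (edge_depth) while
    the see-through interior drifts toward the depth of what's behind it
    (core_depth); softening the rim matte gives the blend between the two.
    """
    m = np.clip(gaussian_filter(edge_matte, sigma=softness), 0.0, 1.0)
    return m * edge_depth + (1.0 - m) * core_depth
```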

What you do not want is for the glass object to feel like everything inside of it is at the depth of the glass surface. It will look painted on, not like you’re seeing through the glass.

This also goes for lens flares, which are reflections and refractions from light hitting the lens element at certain angles and then the filmback. The lens flare needs to be painted out and reconstructed exactly, then the source of the flare needs to be offset to match the location of the light source, and then the little bits of the lens flare need to be offset based on where the light source moved to in relation to the center of frame, which would be the center of the lens. Oftentimes a lens flare plugin in a compositing package will be used to help replicate the original flare, or at least used as a guide for placement.

This leaves other camera-based effects like grain, and heavy vignetting. The entire plate needs to be degrained as step 0 in this process, and then regrained, taking into account any extra reconstruction work, and also offsetting the grain timing for the offset eye. You should never have a stereo offset in your grain (meaning the same pattern reproduced and moved in X) because that puts the grain in depth. If you leave the original grain on your left and right eyes, and do your offsets, then your grain will be painted on to the depth of all the surfaces you reconstructed. That’s extremely bad, and extremely obvious when it happens.

Grain should be offset in time (effectively a randomized noise seed) so there is never a matching pattern your brain will try to place in depth. The result is a fuzz that exists around screen depth. Your eye doesn’t identify it as really having any depth unless the grain is heavy and can almost take on the quality of an atmosphere, which flattens things, in which case the decision may be made to reduce grain for both left and right eyes.
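Something like this toy example gets at the difference, with numpy noise standing in for a proper grain tool; the only point is that each eye draws from its own noise stream instead of sharing one pattern shifted in X.

```python
import numpy as np

def regrain_stereo(left: np.ndarray, right: np.ndarray,
                   amount: float = 0.03, seed: int = 7):
    """Regrain a degrained stereo pair with decorrelated grain per eye.

    Independent seeds mean there's no matching pattern for the viewer's
    brain to fuse and place in depth, so the grain reads as neutral fuzz
    near screen depth instead of a texture painted onto converted surfaces.
    """
    rng_left = np.random.default_rng(seed)
    rng_right = np.random.default_rng(seed + 1)  # separate stream per eye
    return (left + rng_left.normal(0.0, amount, left.shape),
            right + rng_right.normal(0.0, amount, right.shape))
```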

Usually people can get away without treating vignetting, unless it’s heavy —the real artsy stuff. Then the conversion house needs to remove the original vignetting and add it at screen depth (no offset) with everything else in stereo being placed behind it. You don’t want something popping out through vignetting —that doesn’t make any sense.

The really good news is that because conversion uses the same greenscreen and bluescreen extraction for both eyes, the edges don’t get screwy, and any combination with CG can be made to match exactly, because everything can be placed together in the same shared space. When done well it helps the director shoot how they’re comfortable shooting, and get the results they want for both 2D and 3D.

Hybrid

Really it doesn’t do any good to make any sweeping statements about quality based on method, and especially after taking into account that films will blend various parts in ways that are often invisible to you.

Ideological purity really doesn’t exist in either the realm of home video or stereo video, so try not to get too wound up about it. Always try to watch the best version of something that you can, and that suits your current situation, but don’t get yourself upset about something in the abstract.

If you really want to understand the quality of the 3D work inspect those common problem spots I mentioned. Pause your movie and open and close your left and right eyes. Look at the refraction, the reflections, the flyaway hairs.

Separately, judge whether it was worth seeing it in 3D at all. Did that add to your experience for this particular film? Was anything about it essential, or memorable? People talk about the 3D of the Avatar movies because James Cameron made it a part of the experience, not just because the native stereo checkbox was ticked.

No one is under any obligation to like 3D movies whatsoever, but it’s important that we don’t justify or define that dislike based on a simple binary that isn’t true.

2024-02-19 17:50:00

Category: text