Unauthoritative Pronouncements


I Hope There’s More to Snapshot Than This

9to5Mac noticed that Apple had an Apple Snapshots page up. There’s no announcement, or statement about what it’s supposed to be, or who it’s for, so it’s unclear if there’s supposed to be more to this:

https://snapshot.apple.com/

It’s puzzling why it would appear for public consumption in the state that it’s in. It’s already being indexed by search engines, so if they didn’t mean to release it, they might want to change their robots.txt file.
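If the release was accidental, keeping crawlers away is a one-line fix. A robots.txt at the site root like this (a generic example, not anything Apple has published) tells all compliant crawlers to stay out:

```
User-agent: *
Disallow: /
```

Search engines that have already indexed the pages would also need those results removed, but this would at least stop the bleeding.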

Snap Judgement

First of all, it’s a grid of pre-populated celebrities that slowly translates right to left. You can’t scroll it faster, filter it, or reshuffle it. More importantly, there’s no search feature at all. So I hope you like Sabrina Carpenter and Austin Butler, because they’re the most prominently displayed celebs. Maybe Apple will reshuffle these every Tuesday, like they reshuffle the TV app Home tab.

The snapshots themselves are very abbreviated artist bios that provide virtually no information about the celebrities as people. You’re better off using literally any other method to look up information on these people. Apple doesn’t even link out to Wikipedia or IMDb. Why would they? You wanted to know where Austin Butler was born, and he was born in “US”. Asked and answered!

All the media that’s collected on the snapshot pages is Apple-centric. It’s a funnel to the TV app, Apple Music app, and the Podcasts app. That funnel has very limited utility to people who are not Apple executives.

For instance: Austin Butler has a movie coming soon called Eddington. A title card is displayed for it, along with “Coming 17 July”. Tapping it takes you to the TV app where you can watch the trailer. That’s it. There’s no way to go from this to advance ticket sales, or the movie’s web site, or any social media about the movie. Just that one trailer. That’s all Apple offers.

If you wait more than 30 seconds to get to Cate Blanchett, you can see that her “Newest Release” is promoted — Black Bag. For some reason it took three taps, but I eventually got the Black Bag page to load in the TV app to buy or rent it from Apple.

The Movies & TV shows are not sorted chronologically, and there’s no way to filter or force them to be, either here in the web app or in the TV app.

The same goes for Podcasts, which is even worse because it also doesn’t include podcasts that I know the celebrity was on. Cate Blanchett, for example, did a whole slew of podcasts to promote Borderlands and Black Bag, but you wouldn’t know it from what’s listed on her snapshot. Searching the Podcasts app will turn up some of those more recent podcasts, but it doesn’t do it chronologically either. I can’t tell if it’s weighting some of the results by the popularity of the podcast, or perhaps by the number of downloads of an episode.

Are they doing some kind of processing to determine whether or not a celebrity has appeared in a podcast, or merely someone is discussing the celebrity? Tagging? It would be too clumsy to just match against the text of a celebrity’s name.

In any case, Las Culturistas had Cate Blanchett on for a March 19, 2025 episode titled “Huge Fornicators” and it’s not there, but episodes of other podcasts from 2020, 2023, and two from 2022 are. Tapping “More on Apple Podcasts” takes you to a confusing page topped by shows that have had her on, and then a bunch of those stale episodes in the episodes section below.

Curiously, a podcast that you will see come up a lot in the Podcasts section of Snapshot is WTF with Marc Maron, but Apple has no Snapshot of Marc Maron. Is that just because there are so few snapshot pages in total, or because he’s an interviewer of celebrities, but is not worthy of being a celebrity?

Where’s News+ in all of this? I don’t like News+, but I’m pretty sure some people have written about these celebrities at some point. That has a direct connection to an Apple Service, but it’s absent. Wouldn’t it be a good way to surface content locked inside of the magazines?

Beyond Apple, there’s no way to incorporate useful content that isn’t aligned with Apple’s commercial interests. Many of these celebrities have documentaries, or movies, on Netflix. They don’t get mentioned because they’re not aligned with the TV app. Then the question becomes: is the value of Snapshots to users complete information, or is the value of Snapshots merely for the benefit of Apple?

Snap to It

This all reeks of the half-baked delusions of marketing execs, and the ensuing web app demo that was whipped up to please them. I don’t fault the developers who did the work as much as I fault the vision for the product.

The vision I would like to see, and hopefully what this evolves into, is a more fully featured component that can be used inside of apps. Like if I’m listening to a podcast in the Podcasts app that has Cate Blanchett on it, maybe I can tap through on Cate Blanchett’s name to see a snapshot. If she’s promoting a new movie in theaters, I can watch the trailer or tap through to purchase advance tickets from a ticketing app (like a partnership with Fandango, or better yet, apps can register that they sell tickets and all of them can be displayed).

That would also be helpful for music artists who maybe want you to buy tickets to their performances. Maybe the music artists have web sites that have additional information and merchandise.

We should also be able to see if the featured celebrity is in other current podcasts as part of a press tour, meaning that it shows me chronologically what she’s in, not just old episodes where she’s discussing a different movie.

However, having said all of that, it’s still never going to fill the niche of celebrities and artists communicating to their fans that social networks fill. Why would anyone look at a celebrity snapshot when they can follow the celebrity —or general pop culture accounts— on social media?

Did I mention that the URLs for this are incredibly unfriendly for celebrities to use elsewhere? Every person is a number at the end of a URL. It’s not their name, or a username, or anything human. Taylor Swift is person/6667119979, which she is totally going to plug the next time she makes a public appearance.

One Ping Only

Apple half-assed their social network, Ping, which was supposed to be a way to keep tabs on celebrities. Then they tried again with music artist updates in Apple Music —that also died a quick death.

Apple just never wants to do the work to make the platform for common folk. It seems that the thinking is that if they have Lady Gaga posting, then people will visit whatever liminal space Apple creates to host that. They also don’t really have a good way to demarcate who is and is not a celebrity. A celebrity, seemingly, is a person involved with a commercial endeavor on a platform where Apple financially benefits.

There are a lot of celebrities on YouTube, TikTok, and in other fandom groups, with small but intense followings. Apple has no financial stake in these things. Hot Ones, the show with hot questions and even hotter wings, is going to do more for celebrities than Snapshot ever will. Same goes for Chicken Shop Date, etc.

Apple should clearly delineate how all of this works: how people can register and control the online image represented and hosted on Apple’s servers. What possible value could it have to known celebrities or relative unknowns?

Barring major revisions to functionality, and a huge expansion in who qualifies as worthy of Snapshot treatment, I don’t see how this will ever be anything more than another dead-end demo. I’ve probably written more about it in this blog post than will ever be written about it in the lifetime of Snapshots before it scrolls offscreen into the sunset.

2025-04-30 15:30:00

Category: text


Roku’s winning strategy is ads. What’s Apple’s? ►

Last week, Roku held a press event in New York where they unveiled their latest streaming devices, wireless cameras, and minor adjustments to their existing, content-driven interface. If you were hoping for a dramatic update to Roku OS, Lucas Manfredi has the disappointing details over at The Wrap:

The platform introduced a “Coming Soon to Theaters” row and personalized sports highlights. It also launched short-form content rows in the All Things Food and All Things Home destinations for users to easily find smaller curated clips, from recipe tutorials to home organization hacks. It also unveiled badges to help users differentiate between free, paid, new and award-winning content.

If you have used Roku devices or TVs recently, these announcements seem disproportionate to the scale of the event, where Masaharu Morimoto served sushi and puppies were available for adoption.

Continue reading on Six Colors ►

2025-04-28 16:37:00

Category: text


Turbulent Global Economy Could Drive Up Prices for Netflix and Rivals ►

Scharon Harding’s post for Ars Technica starts with the UK, but it gets around the globe.

For the US, the recommendation garnering the most attention is one calling for a 5 percent levy on UK subscriber revenue from streaming video on demand services, such as Netflix. That’s because if streaming services face higher taxes in the UK, costs could be passed onto consumers, resulting in more streaming price hikes. The CMS committee wants money from the levy to support HETV production in the UK and wrote in its report:

The industry should establish this fund on a voluntary basis; however, if it does not do so within 12 months, or if there is not full compliance, the Government should introduce a statutory levy.

Calls for a streaming tax in the UK come after 2024’s 25 percent decrease in spending for UK-produced high-end TV productions and 27 percent decline in productions overall, per the report. Companies like the BBC have said that they lack funds to keep making premium dramas.

This is all very ironic if you have been following the generous tax rebates that the UK provided to lure global production to the UK for pre-production, filming, post-production, etc. They are very generous subsidies that have all kinds of little rules and loopholes that get updated over time.

Instead of lowering those hefty rebates for international productions in the UK, they want to tax subscriptions of international streaming services.

In a statement, the CMS committee called for streamers, “such as Netflix, Amazon, Apple TV+, and Disney+, which benefit from the creativity of British producers [Joe: and our enormous tax rebates], to put their money where their mouth is by committing to pay 5 percent of their UK subscriber revenue into a cultural fund to help finance drama with a specific interest to British audiences.[Joe: Unlike the content that they are paying to subscribe to which is definitely of no interest whatsoever to British audiences.]” The committee’s report argues that public service broadcasters and independent movie producers are “at risk,” due to how the industry currently works [Joe: We’re all trying to find the guy who did this!]. More investment into such programming would also benefit streaming companies by providing “a healthier supply of [public service broadcaster]-made shows that they can license for their platforms [Joe: Is there a term when something goes beyond double-dipping?],” the report says.

As Scharon notes, the same applies to Canada, which has eye-watering tax rebates in Montreal and Vancouver, but at the federal level, Canada wants to slap a 5% levy on streaming services. It’s just wild.

I know it’s rich for me, a guy in a country that turned tariffs on and off like a child playing with a light switch, to comment on the affairs of other countries, but I do feel at least a little affected by it.

It’s been one year since I was laid off from my VFX studio job after being furloughed for six months due to the dual labor strikes in the US, and the threat of a possible third. What production has resumed has mostly resumed abroad. Sound stages in LA are only at 63% capacity. Post-production is even easier to do outside of the US, with my former employer exclusively hiring in other countries for positions like the one I did, except I did it in LA. This is a little less like trying to bring back coal mining jobs. I’m talking about jobs that were there 18-24 months ago.

It’s darkly funny (to me, anyway) to distort a market through tax rebates on foreign productions, then swoop in and demand money from subscriptions to foster locally-owned production. If a government wants to fund the production of local culture then they should funnel their money there to begin with by appropriately taxing foreign productions instead of trying to capture the revenue, after the fact, of the foreign system they’re supporting.

2025-04-11 17:10:00

Category: text


Adobe Wants AI to Help You Use Photoshop

Adobe has a new blog post up outlining their vision for “agentic AI” in Adobe’s products.

At Adobe, our approach to agentic AI is clear, and it mirrors our approach to generative AI: The best use of AI is to give people more control and free them to spend more time on the work they love – whether that’s creativity, analysis or collaboration.

We’ve always believed that the single most powerful creative force in the world is the human imagination. AI agents are not creative, but they can empower people – enabling individuals to unlock insights and create content that they wouldn’t otherwise be able to and enabling creative professionals to scale and amplify their impact more than ever. For people at all levels, agentic AI’s potential makes starting from templates feel stale and old-fashioned. For professionals, it offers a pathway to growing their careers by freeing up time to do more of the things only they can do.

From Abby Ferguson’s post on DPReview covering this:

Last week, Adobe announced that a handful of AI-based features would be moving out of Premiere Pro beta. Now, the company is teasing even more AI tools for Premiere Pro and Photoshop ahead of Adobe Max London on April 24. In a blog post, the company provides a basic overview of what’s coming, promising even faster edits and helpful tools for learning.

We certainly see “agentic” used a lot these days, but most of the time it’s the retail-fantasy scenario that an LLM agent will buy or book things on your behalf. Capitalism, bebe.

This is more like GitHub Copilot in VSCode, where there is a back and forth, with a result that the user still has control over if they choose to exercise it. The work is in layers, with edits applied in a non-destructive fashion in many cases.

Back to Abby Ferguson:

Adobe says this isn’t exclusively about speeding up the editing process. Instead, it also envisions the creative agent as a way to learn Photoshop. Given how complex and overwhelming the software can be for new users, such a resource could be helpful. Plus, Adobe says it could also handle repetitive tasks like preparing files for export.

One of the major problems I have with generative AI for images and video is that the output is basically clip art or stock footage. It’s smearing together associated patterns it was trained on and delivering a final result. The only way to continue to edit or refine the result is through text commands, which can make sweeping changes to things you did not want to change, and give you no easy way to control them yourself.

Solutions that integrate with an image, or video, editing workflow allow for the level of control a person might need for their job. In many cases, where some doofus on LinkedIn is asking AI to make them look like an action figure package, or for Ghibli art of their dog, they don’t care about control at all, but that’s because it’s not a job. They don’t answer to a client that wants something nudged, not replaced.

There’s no easy way to directly link to the videos from Adobe’s blog post previewing these things, but the video you want to watch is the second one under the Photoshop subhead.

There are, of course, all of the other issues with generative AI, but this type of work from Adobe is far more interesting than making the smearing machine smear better, or adding new “styles” to the smearing machine so we can all have exactly the same “art”.

It’s important to remember that if everyone can “make” the same stuff, and they all have the same level of non-control over it, then there’s nothing that really distinguishes that stuff. Here, there’s the potential to learn, and to decide on the changes being made, branching off using your own brain and your own skills.

2025-04-11 13:15:00

Category: text


My Rube Goldberg Blog

My blog is made by a rube, if you will. This site is a static blog generator that’s basically a single 300-ish line Python script from 2014. I was “inspired” to make it when static site generators were all the rage and I figured it would help me with my Python skills, as Python is a language often used in VFX software. I don’t recommend people use any of my code, or write their own site like this. There are better ways to spend your time.

I’ve periodically written about the changes to my site over time, like when I made the jump from Python 2 to 3, and transitioned servers back in October of 2023. We’re overdue for an update, especially as I have been tinkering quite a bit, and it happens to coincide nicely with ATP’s recent members-only episode on their own sites. I might as well start from the top.

Caddy

Caddy serves the files. I had originally used Twisted on my old server, because it’s in Python, but I could never figure out how to get the certificates working in Twisted. Caddy just works. I don’t even have to know Go. I can’t recommend it highly enough if you’re looking for a simple solution to just serve files.
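For a sense of how little there is to it, a static-file Caddyfile can be as small as this (the domain and path here are placeholders, not my actual config; Caddy provisions the HTTPS certificates automatically):

```
example.com {
    root * /var/www/site
    file_server
}
```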

Yakbarber3

The Python script that makes this whole mess is yakbarber.py, and when I converted it to Python 3, I named it yakbarber3.py so I could run the two versions side by side to make sure the output was the same (it was close enough).

It takes Markdown files from /content, converts them to HTML, and uses mustache templates to wrap the HTML with the necessary headers and such. It also generates the RSS/Atom feed. The results went into /output. That’s more or less how it worked for over 10 years, and is basically how it continues to work. You can see the commits for everything as I worked through various parts of it over the past month.

The site settings live in another file and were imported with the imp module. Yes, that module is deprecated in Python 3, and it would generate a warning every time it ran, even though it would run successfully. I asked Gemini or Copilot (I can’t remember which), and it suggested switching it out for importlib.util.module_from_spec(). It’s fine.

That’s another thing that’s changed in 11 years. I used to have to search StackOverflow for a problem that matched the problem that I had, or was close enough, and then try to kludge together something that fit my needs. I never figured out how to properly replace imp but I’m sure someone wrote about it somewhere. Now I can query LLMs that have paid StackOverflow and they synthesize something that more closely matches my situation.

They’ve been total horseshit at making things from scratch, and I don’t just blindly accept their output, but they do help me when I get stuck on something I don’t know. More importantly, I can look at their solution and then figure out if there’s a better way to do it. I’m not “vibe coding,” but it is more of a back and forth.

My feelings on LLMs are nuanced, especially where creativity is concerned. Being able to get me through a roadblock is not the same thing as “make my whole web site” which I think is a valueless proposition. Indeed, I did ask it to write a static blog generator using the same specs as my blog and it wasn’t what I wanted. I don’t see the appeal in just letting LLMs dictate the whole thing.

Watchdog

Case in point: I wanted to add support for yakbarber3.py to monitor for changes to the /content directory, or the mustache templates, and then regenerate the site automatically without me having to run the script. The solutions were to either use inotify outside of my script to run it when files were changed, or to use watchdog (a python module which would be inside the script itself) to monitor for changes. I decided on watchdog because I want to continue to refine what files are read and written to disk in the future, and this seemed like the best way to achieve that.

I used Copilot in VSCode — everyone’s raving about the thing so why not try it? It did a decent job constructing what I asked for. It would monitor those directories for changes, and only for the specified file types.

class ChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.startswith(contentDir) and (event.src_path.endswith('.md') or event.src_path.endswith('.markdown')):
            print(f"Detected change in {event.src_path}. Re-running main()...")
            main()
        elif event.src_path.startswith(templateDir):
            print(f"Detected change in {event.src_path}. Re-running main()...")
            main()

    def on_created(self, event):
        if event.src_path.startswith(contentDir) and (event.src_path.endswith('.md') or event.src_path.endswith('.markdown')):
            print(f"Detected new file {event.src_path}. Re-running main()...")
            main()
        elif event.src_path.startswith(templateDir) and (event.src_path.endswith('.html') or event.src_path.endswith('.xml')):
            print(f"Detected new file {event.src_path}. Re-running main()...")
            main()

I needed to refine the sleep interval between times it started, and I also wanted to make sure that a bunch of successive changes that happened quickly wouldn’t cause the main() loop to be triggered. Copilot suggested a debounce timer.

    def schedule_main(self):
        global debounce_timer
        if debounce_timer is not None:
            debounce_timer.cancel()
        debounce_timer = threading.Timer(3.0, self.run_main)  # 3-second debounce window
        debounce_timer.start()

    def run_main(self):
        global is_running
        if not is_running:
            is_running = True
            try:
                main()
            finally:
                is_running = False

Like I could study reference materials about how to implement this, but again, I wouldn’t really know where to start or what I was trying to do without a lot of other programming experience that was not going to be relevant to the rest of my life. This tool doesn’t make me a professional programmer, it just helped me with a hobby project, and I know just enough about it to understand and edit it.

Oops, All Loops

One big problem that I had not anticipated was that the script was originally written to modify variables as they fell down through the functions. That meant that when watchdog would start the loop over, several variables wouldn’t be reset and would behave in an unexpected manner. Copilot was no use for debugging this. It suggested I put print statements everywhere, but the data was entire blog posts, so that’s too much printing to be useful.

However, I correctly diagnosed that the problem was my poorly written code from my original script. I separated out the parts that were writing over data, and I also reduced the number of places that data was being sorted. That was tricky, like untangling cables from that box of cables that you 100% keep meaning to individually bundle.

I couldn’t have vibe-coded my way through this, but it was useful to be able to highlight a specific line and ask Copilot what the change I made would do, just to double check my work.

Async

One of the recommendations Copilot provided —you can just ask for its “opinion” on things— was that I should asynchronously render the blog post pages (not the index or the RSS feed). This was my first time using Python’s async library so I got to see how my serial scripting turned into async scripting.

It’s mostly the same, but I get to use fun words like with lock and asyncio.gather. Instead of def I get to use async def. This was not as intimidating as I thought it was going to be, and I knew exactly where it would be applied.
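As a rough sketch of what that change looks like (render_post here stands in for my actual markdown-to-HTML rendering, which is more involved):

```python
import asyncio

async def render_post(post):
    # Stand-in for the real markdown-to-HTML rendering step.
    return f"<article>{post}</article>"

async def render_all(posts):
    # Render every post page concurrently; asyncio.gather collects
    # the results in the same order as the input.
    return await asyncio.gather(*(render_post(p) for p in posts))

pages = asyncio.run(render_all(["one", "two"]))
```

The index and feed still render serially after the post pages are done, since they depend on all of the posts.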

Changes I Rejected

There were a lot of changes that Gemini and Copilot suggested that I just didn’t feel like implementing. Gemini would just rewrite swaths of it to where the code was unrecognizable. Copilot had a few occasions where it barfed on a change (it started to reprint the imports all over again in the middle of the file).

Both of them really wanted me to move from mustache templating to jinja2. They don’t really “want” it, exactly. There’s a higher probability that someone recommended jinja2 templating, which means that I get to have that suggested. I saw plenty of people say this online in 2014 when I picked mustache over jinja2. Mustache templating suits my needs and is very easy.

Rejecting the change is simple, but it’s interesting that dogma about template preferences could steer someone who didn’t know what they wanted to jinja2. No shade no lemonade on jinja2, but it can be overkill if your templating needs are relatively simple.

Sync_to_serve

Previously, I used a bash script named site_build.sh to run yakbarber3.py and then copy all the files from output and the template resources to the directory that actually serves the files. I don’t write the files my site generates to the directory that serves them because I want to reduce the chances that something will break and then my site won’t serve any files. Also, I didn’t have the MIME type and redirect set up correctly for joe-steel.com/feed, so the old way I did it was to make a copy of the feed.xml file that just didn’t have a file extension. Very hacky.

This superstition about keeping site generation and site serving separate meant that I wanted to replace the bash script with something that, like the always-running, always-looping yakbarber3.py, monitors for changed files to copy over. Since I was already using watchdog to do that in the site generator script, it seemed natural for me to continue with that. Thus, I put together sync_to_serve.py.

import os
import shutil
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
import fnmatch

SYNC_DIRECTORIES = {
    "output/path": "server/path",
    # Add more source-destination pairs as needed
}

FILE_PATTERNS = ["*.html", "*.css", "*.xml"]

class FileSyncHandler(FileSystemEventHandler):
    def __init__(self, source_directory, destination):
        super().__init__()
        self.source_directory = source_directory
        self.destination = destination

    def on_modified(self, event):
        if not event.is_directory and self.matches(event.src_path):
            self.sync_file(event.src_path)

    def on_created(self, event):
        if not event.is_directory and self.matches(event.src_path):
            self.sync_file(event.src_path)

    def matches(self, src_path):
        filename = os.path.basename(src_path)
        for pattern in FILE_PATTERNS:
            if fnmatch.fnmatch(filename, pattern):
                return True
        return False

    def sync_file(self, src_path):
        try:
            relative_path = os.path.relpath(src_path, self.source_directory)
            dest_path = os.path.join(self.destination, relative_path)
            os.makedirs(os.path.dirname(dest_path), exist_ok=True)
            shutil.copy2(src_path, dest_path)
            print(f"Synced: {src_path} -> {dest_path}")

            # Special case for feed.xml
            if os.path.basename(src_path) == "feed.xml":
                feed_dest_path = os.path.join(self.destination, os.path.dirname(relative_path), "feed")
                shutil.copy2(src_path, feed_dest_path)
                print(f"Synced: {src_path} -> {feed_dest_path}")

        except Exception as e:
            print(f"Error syncing {src_path}: {e}")

if __name__ == "__main__":
    observers = []
    for source_dir, destination_dir in SYNC_DIRECTORIES.items():
        event_handler = FileSyncHandler(source_dir, destination_dir)
        observer = Observer()
        observer.schedule(event_handler, source_dir, recursive=True)
        observer.start()
        observers.append(observer)

    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        for observer in observers:
            observer.stop()
        for observer in observers:
            observer.join()

This is significantly longer than the bash script, of course, but pretty straightforward in that it does a one-way copy, like the older script did. It just does it continuously.

CSS and JavaScript

The other thing that I’ve changed on the site is the CSS and some JavaScript. I hate JS. It’s this weird thing that looks sort of like Python, but behaves in different ways. I had a very specific thing that I wanted to do, which was format the date and time displayed on each post to be localized, and also pleasant to look at. Gemini instantly kicked out the code I needed and I added it to my template.

 function localizeDates() {
  const dateElements = document.querySelectorAll('.date date');

  dateElements.forEach(dateElement => {
    const datetimeString = dateElement.getAttribute('datetime');
    if (datetimeString) {
      const date = new Date(datetimeString);
      if (!isNaN(date)) {
        const localizedString = date.toLocaleString(undefined, {
          year: 'numeric',
          month: 'long',
          day: 'numeric',
          hour: 'numeric',
          minute: 'numeric',
        });
        dateElement.textContent = localizedString;
      } else {
        console.error("Invalid date format:", datetimeString);
      }
    }
  });
}

// Call the function when the DOM is loaded
document.addEventListener('DOMContentLoaded', localizeDates);

That only took me a decade to get around to. It’s much better now.

Likewise, the CSS problems on my site were always around. I had separate font sizes for mobile and desktop. Sometimes people would reach out to tell me that my site was rendering incorrectly, for example when they used StopTheMadness in Safari. The developer, Jeff Johnson, helped me figure out what the problem was. I had never set:

<meta name="viewport" content="initial-scale=1.0">

So in a regular mobile Safari browser it had some weird scale, which I had worked around and now needed to undo. It felt like a huge relief to clean that up.

Post Formatting Automations

Now that I had a site that could monitor for changes and regenerate itself without me having to sign in through SSH, I needed to make posting easier. One of the irritations with posting is that I have YAML front matter in my blog posts that needs to have certain information present.

Title: My Clever Title Here
Date: YYYY-MM-DD 24:00:00
Author: joe-steel
Category: text

There are optional YAML entries for Link which formats the post as a link-blog post with a permalink, and makes the header point to the original source. The other YAML is Image which sets an opengraph image to use for the post if I want one.

The date and time are the annoying part, as the rest of it is just copy and paste. I have repeatedly made silly mistakes when I format the date and time, just due to human error.

I don’t want the blog generation to write the date and time for me because I don’t want it to accidentally overwrite things in the files —right now it’s read-only when it works with the markdown.
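That read-only parsing amounts to something like this sketch (parse_front_matter is illustrative, not the actual yakbarber code, which handles more cases):

```python
def parse_front_matter(text):
    # Split simple "Key: value" header lines from the markdown body
    # at the first blank line. Nothing is ever written back to disk.
    meta = {}
    lines = text.splitlines()
    body_start = len(lines)
    for i, line in enumerate(lines):
        if not line.strip():
            body_start = i + 1
            break
        key, sep, value = line.partition(': ')
        if sep:
            meta[key] = value
    return meta, '\n'.join(lines[body_start:])
```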

I do want this to be multi-platform so I can write posts when I’m not with my computer. That means I need iOS support. That means Shortcuts. Sigh.

That was what that whole post complaining about time zone conversion was about. I finally conquered it, and it could produce the formatted date and time on command, but it was a pain. Also, I couldn’t figure out how to easily do it for a file that I had already written. It was easy to write out this stub, but I didn’t want to do that when I started writing, or the date and time would be off by the time I had finished.

Drafts

That’s when I turned to Drafts. Greg Pierce does an amazing job with Drafts, but I use approximately 0.1% of its full potential. I had never successfully automated anything in it before.

Fortunately, Drafts can run JavaScript, and while I hate JavaScript, I know JavaScript can very easily handle the date logic. To protect all of the little babies, Apple doesn’t let us run JavaScript in Shortcuts unless it’s JS on an active page in Safari.

I also know that I don’t even have to write the JS. Just explain the logic to Gemini, or whatever. I also know JS can split the lines to turn the top line of the draft text into the title, with everything else below as the body. The script also handles the extra YAML, and can be run again on a file that already has a header.

let options = { timeZone: 'America/Los_Angeles', year: 'numeric', month: '2-digit', day: '2-digit', hour: '2-digit', minute: '2-digit', second: '2-digit', hour12: false };
let now = new Date();

// Round to the nearest 5-minute interval
let minutes = now.getMinutes();
let remainder = minutes % 5;

if (remainder < 3) {
  now.setMinutes(minutes - remainder);
} else {
  now.setMinutes(minutes + (5 - remainder));
}

now.setSeconds(0, 0); // Reset seconds and milliseconds

let date = now.toLocaleString('sv-SE', options).replace(/:[0-9]{2}$/, ':00');

let title = draft.title;
let body = draft.content;
let lines = body.split('\n');

let yamlTitle = null;
let yamlDate = null;
let yamlAuthor = null;
let yamlCategory = null;
let yamlLink = null;
let yamlImage = null;
let nonYamlLines = [];

// Check for existing YAML-like lines
for (let i = 0; i < lines.length; i++) {
  let line = lines[i].trim();
  if (line.startsWith('Title: ')) {
    yamlTitle = line;
  } else if (line.startsWith('Date: ')) {
    yamlDate = `Date: ${date}`; // Update Date
  } else if (line.startsWith('Author: ')) {
    yamlAuthor = line;
  } else if (line.startsWith('Category: ')) {
    yamlCategory = line;
  } else if (line.startsWith('Link: ')) {
    yamlLink = line;
  } else if (line.startsWith('Image: ')) {
    yamlImage = line;
  } else {
    nonYamlLines.push(lines[i]); // Keep all other lines, including empty ones
  }
}

// Remove leading empty lines from nonYamlLines
while (nonYamlLines.length > 0 && nonYamlLines[0].trim() === '') {
    nonYamlLines.shift();
}

// Construct the new YAML section
let newYaml = [];
if (yamlTitle) {
  newYaml.push(yamlTitle);
} else {
  newYaml.push(`Title: ${title}`);
}
if (yamlDate) {
  newYaml.push(yamlDate);
} else {
  newYaml.push(`Date: ${date}`);
}
if (yamlAuthor) {
  newYaml.push(yamlAuthor);
} else {
  newYaml.push(`Author: joe-steel`);
}
if (yamlCategory) {
  newYaml.push(yamlCategory);
} else {
  newYaml.push(`Category: text`);
}
if(yamlLink){
    newYaml.push(yamlLink);
}
if(yamlImage){
    newYaml.push(yamlImage);
}

// Combine YAML and non-YAML lines
let newBody = newYaml.join('\n') + '\n\n' + nonYamlLines.join('\n');

draft.content = newBody;
draft.update();

This took significantly less time than trying to get Shortcuts to do this. It does mean that I have to use Drafts if I want access to this, but if I’m on the go, I’d likely be writing a short post, which could be in Drafts. Then I save that to my /content folder in Dropbox, yakbarber3.py sees the file, and regenerates the site, which sync_to_serve.py copies to the directory that serves the static files. It’s a simple 5-ish step process with most of it automated.
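The watch-and-regenerate loop can be sketched roughly like this. This is just my guess at the shape, not the actual yakbarber3.py; the folder path and the rebuild callback are placeholders:

```python
import time
from pathlib import Path

def snapshot(folder):
    """Map every markdown file in the folder to its last-modified time."""
    return {p: p.stat().st_mtime for p in Path(folder).glob("*.md")}

def watch(folder, rebuild, interval=30):
    """Poll the Dropbox-synced content folder; rebuild when anything changes."""
    seen = snapshot(folder)
    while True:
        time.sleep(interval)
        current = snapshot(folder)
        if current != seen:
            seen = current
            rebuild()  # regenerate the site, then sync it to the serving directory
```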

The moment where all this worked was when I wrote my link-blog post yesterday. I was a few thousand feet in the air and it published. I even had a mistake, where there was an errant new-line that pushed the Link: YAML down a line and it processed it as a regular post. I fixed the file, and it just regenerated as the expected link-blog style post it was supposed to be.

Future Plans

In the future, I want to handle draft posts with status in the YAML for draft or published. Then it can generate the page so I can check formatting without it adding to the RSS or index.
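That status check could be as small as this sketch (the Status key and the function name are hypothetical, not something the site has today):

```python
def publishable(front_matter):
    """Drafts still render for preview, but stay out of the RSS feed and index."""
    return front_matter.get("Status", "published").strip().lower() != "draft"
```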

I also want to have YAML defaults for certain tags, like author, and category. It is extremely unlikely that anyone else will be writing on my blog, or I’ll use a category other than text. This YAML is a legacy of my original export and import process where I used Pelican to get my very old Tumblr posts. It’s superfluous.

I also want to handle scheduled posts with the date YAML. Where it will hold the site generation step until the time stamp is reached. Sometimes I finish writing something at night but I know no one’s going to read it, and it’ll get buried under new posts in the morning. For that to work, I really can’t have human error in the stamps.
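The hold-until-timestamp check itself is simple enough to sketch. This assumes the Date stamp parses the same way as elsewhere; I’m using the standard library’s zoneinfo so the sketch is self-contained, though the site itself uses pytz:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

LA = ZoneInfo("America/Los_Angeles")

def is_live(stamp):
    """True once the post's Date front matter has passed, Los Angeles time."""
    post_time = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").replace(tzinfo=LA)
    return post_time <= datetime.now(LA)
```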

Mastodon integration would also be nice where the blog has its own account. I might use Robb Knight’s echofeed. I couldn’t even explain why I’m not using it already.

Lastly, I need to come up with a better system to handle images. Whenever there are images on this blog I have manually uploaded them to directories on my server and then copied the paths back to put into the blog posts. It’s a very annoying step. I don’t want to just throw a bunch of images into /images because that will just be a mountain of clutter. I can always make more YAML —more YAML, more problems. I could have named subdirectories in /images that use a relative path so I could see the images in draft previews, and count on them to be copied correctly. That creates two copies of all my images though.

The important thing is that I can keep screwing around with this. It is very satisfying when I get some part of this working the way I want it to, and that makes all the frustrations worthwhile.

Again, I don’t recommend anyone do any of this, or use anything that’s here. Use Ghost, or even another off-the-shelf platform that has support so you don’t have things standing in the way of you writing your blog. Think of all the blog posts I could have written if I wasn’t trying to make sure my mustache was asynchronous?

2025-04-02 14:55:00

Category: text


My Unsuccessful Journey Into Netflix’s Ad Tier ►

Jason Snell wrote about how he rejoined Netflix on the ad-supported tier, and it was a poor experience for him. Not only because he’s not used to seeing TV shows with ads these days, but because Netflix shows aren’t always made with ads in mind.

While the ads played on, I began creating a thought experiment: There’s a $10 difference between the ad and ad-free plans. If Mr. Netflix (he wears a top hat) came to my house and said, “Jason, I’ve got a great deal for you. I’m going to pay you $120 a year, and all you have to do is watch ads while you watch Netflix,” what would I do? When I started thinking about it, I thought it might be an interesting intellectual question. What would I accept in exchange for having Mean Mr. Netflix beam ads into every show I watch?

It’s worth thinking about every other streamer too. I know Jason’s not singling out Netflix as if it’s the only one doing it, but it certainly charges the highest premium to escape its ads.

I absolutely have to pay for YouTube Premium because the quality of the ads is so poor, not because the cuts don’t fit with the drama. Other people are used to the ads in YouTube because that’s what that experience has “always been” for them.

Also consider the gentle buzzing of incessant ads slotted into old reruns made for ad-supported broadcast and cable TV on FAST and AVOD services. It can have a very different feel because of what your personal expectations are, and your level of engagement with the programming. You’re folding laundry, so who cares how many times that Skyrizi jingle runs on the Gunsmoke channel? It’s very different from a show like Adolescence.

We know Mean Mr. Apple (he has a mostly unbuttoned shirt) might offer an ad-supported tier too, and their programming hasn’t been made with that in mind. The Apple brand isn’t really about advertising other products, companies, and services. I remain curious what that experience will be like one day when a tense moment in Silo transitions to an ad for a local accident attorney or pharmaceuticals.

2025-04-01 16:15:00

Category: text


Apple Shortcuts and Time Zones

Here’s another “Joe complains about Shortcuts” post for you. Last time I complained about not being able to reverse a list. This time I’m complaining about how Shortcuts handles time zones. I know time zones are the bane of every programmer’s existence, but I’m not going to give the Shortcuts team a break because they have all the time zone data, they just implemented it in the worst way they could think of.

First, let’s talk briefly about the part that’s not Shortcuts so you can understand where it will slot in.

I automated part of the Python script that generates my blog to do so without me having to log in over SSH and run the script myself. More on that in a future post. This was so I could write posts on-the-go. Here I am, posting al fresco, from my iPhone. The thing is, I wanted it to generate the date and time that I include in the YAML front matter of my blog.

Here’s an example of the YAML, a yample, if you will:

Title: [Title Here]
Date: 2025-03-30 14:55:00
Author: joe-steel
Category: text

It’s not a wild date and time format. It’s ISO 8601, and the system expects it localized to my home time zone, which is Los Angeles. That can be either PDT or PST, and it’s handled flawlessly by Python’s pytz library without any fiddly intervention from me. I want to be able to write the time out myself by hand if I so choose, and have the time and date be recognizable for me.
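For contrast, this is roughly all the Python side takes (a sketch, not the site’s actual code; the site uses pytz, but the standard library’s zoneinfo behaves the same for this, and both fold PST/PDT in automatically):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Current local time in Los Angeles, DST handled for you
stamp = datetime.now(ZoneInfo("America/Los_Angeles")).strftime("%Y-%m-%d %H:%M:%S")
```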

Shortcuts can generate the current date and time from the Date action. It can even be formatted to ISO 8601 with the Format Date action’s built-in “ISO8601” setting. Except there’s one teeny tiny issue: there is no way to get the current time in Los Angeles.

The Convert Time Zone action can only convert between cities on a predefined list of capitals and other important cities. It doesn’t understand named time zones, like PST or PDT. So if you thought you were going to be clever about this by using the named time zone from Format Date’s “Z” output, you’re going to be disappointed. It’ll say EST or EDT, and there’s nothing you can do with that information at all in Convert Time Zone.

Who cares about named time zones, amirite? Picking cities is way better. Except Convert Time Zone can’t understand or interpret just any city either. For example: Orlando is not on the list of preprogrammed cities and can’t be used, but Miami, New York, Washington D.C., etc. can be. You can’t feed it Orlando as text input or anything. It will fail.

This is extremely unhelpful.

To get around this, I fed Date into Format Date, set to output just the GMT offset of the current date-time with “Z”, then used regex (shudders) to pull out only the hour. That gets multiplied by -1 and fed to Adjust Date, which shifts the original Date object to remove the offset. I now have a date-time object with no offset.

I knew “Los Angeles” was in my previous list of cities, so I could now use a Convert Time Zone action to convert to Los Angeles, but I couldn’t pick “London” - it has its own time shenanigans, and there’s no Greenwich or GMT. I asked Gemini which city has no offset, and it’s Accra, Ghana.

Save that little nugget for your next Trivial Pursuit game. Now that it was converted by math to GMT, I could use the “user friendly” Convert Time Zone to convert from Accra to Los Angeles and then format the output any way I liked.

More like Apple Longcuts, amirite?

When I posted about all this on Mastodon, Elliot Shank pointed out that there was no need to use Accra because GMT was an ancient relic and I should be using UTC. There is, in the list of cities, UTC. Sure, when you search GMT it should probably return UTC, but that’s probably incredibly difficult to implement in a search function.

However, all that I got from that was that I could change Accra to UTC. I still needed to do all the rest of this bologna to get the current time in Los Angeles. So… my complaint stands.

Stephen Robles had another suggestion: use a website’s API. But it’s just another approach that uses as many round-rect blocks of user-friendly time wasting as mine.

Am I asking for too much? Am I reaching for the stars here? Shortcuts is our only first-party automation platform on iOS, and the only cross-platform one that works on all of Apple’s platforms, and it’s still lacking in the basics. You can’t find anything you’re looking for, and because of its nature you can’t easily find help.

Shortcuts has the thin veneer of user-friendliness applied to it, but under that it’s not just unfriendly: many things simply aren’t possible, and there’s no way of knowing which ones. You just have to build a thing and grope around for anything that seems remotely applicable. I can’t believe the future of automation is tied to App Intents, which are tied to Shortcuts and Siri. How will any of that help when Shortcuts can’t get the basics right?

2025-03-31 19:04:00

Category: text


Wish List: Siri, Spotlight, and a Unified Search Experience ►

I know, linking to myself is very Obama-awards-Obama meme. This post is relevant if you’ve been keeping tabs on what I’ve been writing about on my blog recently. Those posts here helped me see a pattern. I’m not sure if anyone else sees a pattern, or if I’ve just made a mess with red string and a pile of mashed potatoes that looks like Devil’s Tower, but I like to think it makes sense.

So here’s a thought for those who might suddenly find themselves in charge of Siri: Search is a foundational element of smart assistants, and the current state of Apple’s search technologies leaves much to be desired.

While all today’s web search engines are placing sparkly and unreliable AI-synthesized answers above everything else, they still generally deliver solid search results underneath. Refining Siri without bolstering the foundation is a recipe for disaster.

Everyone’s making the magic box on top of their results, and Apple’s trying to only make the magic box, sans search results. Let’s get some ye olde heuristics in here first.

Matt Birchler has a tangentially related blog post about LLMs that he put up yesterday while this was being edited. I’ll link to it here for this passage:

This is one example, but I’ve also seen people try to use ChatGPT as a calculator or Claude to give them the weather at their current location, and I sigh because these aren’t the right tools for the jobs. That said, the Google search box has been super powerful at training people to expect similar search fields to behave the same. It’s not exactly better when these LLM search boxes often look like Google and have “ask anything” as the placeholder text.

The simple fact of the matter is that not everything is best solved by an LLM, but LLM chatbots give the impression that they are good for everything. A further problem is that LLMs have a hard time saying they don’t know something, so they’ll always give you an answer whether it’s right or not. This is why I find Google to be in such a spectacular position to have the best of all worlds. It knows where you are (if you let it) and can tell you the weather right now, it scrapes the web constantly so it can give you news that literally broke a few minutes ago, it can show a calculator if you ask it to do some math, and yes, it can detect if an LLM would be best suited to respond to your query and use the LLM for that specific case. That LLM is also backed by the most powerful search engine ever and can parse real time data just like ChatGPT or Perplexity, but with even better search results.

Substitute LLM for Siri and you have something akin to what I’m saying, so I don’t feel like a total crank.

I’m proud of my post as I think it makes the case well (no string and mashed potatoes). The advantage of writing for Six Colors is that Jason Snell, as Editor Supreme, can tell me what’s not working, especially when it’s the whole thing.

I totally rewrote my piece based on his feedback on an earlier draft that was too scattered and didn’t make the point I was going for. He did a nip and tuck on that draft too. I kept getting bogged down with this example and that example, and it just didn’t need it. If you think I don’t have enough examples then, oh boy do I have a folder full of screenshots for you. I’m always glad to go through this process. Hopefully it’ll make me a better writer someday, and not just a guy who rants on the internet.

2025-03-21 11:20:00

Category: text


Who’s the Laggard? Comparing TV Streamer Boxes ►

A while ago Jason Snell said that he would do a streamer box/dongle shoot-out on an episode of Upgrade, and today’s the day he hit publish. It’s well worth your time to read it, particularly if you aren’t as familiar with “the state of the art” in the streaming landscape. The last time I did a “device shoot-out” was in 2016, and none of the platforms are the same. I certainly stopped recommending Fire TVs, and only recommend Apple TVs.

My usage of the Fire TV completely fell by the wayside as they overhauled the interface to include more and more advertising. Prime Video also has ads. Everything. Has. Ads.

That Jason Snell was able to get an ad for a local mattress retailer is a pretty clear indicator that Amazon’s insatiable appetite for advertising has only increased. I even thought about reconnecting my Fire TV 4K stick to see how bad it’s gotten, but as Jason pointed out, no one’s paying me to use one.

I’ve never had personal experience with Google’s platform, and that is perhaps the most interesting section in Snell’s overview —from the perspective of someone who mostly uses Apple products. It’s not that I’m considering picking one up; it’s just interesting.

I would caution Apple fans from skimming this and coming away with only the comforting affirmation that the Apple TV is the frontrunner in TV streaming boxes/dongles. Snell’s clearly demonstrated that the Apple TV isn’t a laggard, but he also outlined where the competition has a leg-up on Apple. Mark Gurman’s original racing analogy doesn’t work when you’re talking about devices with many features that each can excel or fall behind.

Jason’s conclusions about organizing apps and media in the same place, and about the need for a comprehensive live-guide are hardly shocking to me since that’s basically the drum I’ve been banging on for years. Maybe someone at Apple will be receptive to Jason’s comparisons?

While it’s easy for Apple fans (who are predisposed to not like, or expect ads) to point to the ads in Apple’s competition as a sign of poor quality in and of themselves, it’s worth remembering that people have different thresholds for frustration and trade-offs they will tolerate. Like I outlined in my piece for Six Colors about FAST, some people accept ads in an array of forms if their TV viewing experience is easy or inexpensive.

On a person-by-person basis there are certainly lines for what’s too much advertising, but no agreed upon quantitative or qualitative metric. Each person knows the “too much” line when they see it, and the companies all see numbers go up, or down, against other dollar signs, and engagement patterns before they decide to pull back, or push forward.

You know what they say: One man’s trash is another Mancini’s Sleep World.

2025-03-20 18:00:00

Category: text


Exclusive: Apple Unveils Shamrock M4 MacBook Air

Product photography of the M4 MacBook Air with the lid slightly open. It's all very subtly greenish silver on a greenish white background.

It certainly took me by surprise when Apple contacted me, of all people, to be the sole news outlet to run the story about the fifth MacBook Air color being added to the recently updated line-up. Surely they were contacting me out of deep respect, and not as a prank, right? I also didn’t know why this color was released so soon after the line was refreshed.

When I hopped on the WebEx call with Apple (my first, and probably only, ever time using WebEx so I savored it) I asked directly why there was a gap in product releases. I’m unclear on what part of their response I’m supposed to say since I just scribbled “Is this on background???” in the margin and never asked. It would seem that Apple wanted to release this color in celebration of St. Patrick’s Day, or possibly as a condition of their settlement with the European Commission over using Ireland to dodge taxes. Or maybe it was just a coincidence? As an aside, I told them that I was one quarter Irish on my dad’s side of the family, but they didn’t have much of a reaction.

There was a mild smile, and tiny shake of the head side-to-side when I asked if this was just because people thought all the silvers are too similar. They didn’t say anything though. It was just very tense.

When I looked at the product photos Apple provided me I had to ask if they hue shifted the background and screen to be green in Photoshop or if it really was green. They assured me that there’s no mistaking this “gorgeous new shamrock”. That part they definitely told me to write down.

The lid of the M4 MacBook Air that is very slightly green
Maybe you have to see it in person?

They were also excited to tell me that there is also a color-matched charging cable for the new hint o’ mint. All of which is available today through the Apple Store to customers.

Product photography of the charging cable that is subtly green.

I wondered if buyers who recently purchased a sky blue MacBook Air would feel left out, and need to trade in their product soon, but that’s when Apple surprised me with their latest green initiative. Apple said that while a midnight M4 MacBook Air buyer would have to use the traditional product exchange process, any customer who bought an M4 MacBook Air will be able to instantly exchange it for shamrock. They won’t have to go to the Apple Store, ship products, or deal with wasteful packaging.

This product swap is entirely carbon neutral and done with energy offsets only. It seemed too good to be true, but any customer that purchased their M4 MacBook Air in sky blue, starlight, or silver will receive a new receipt by email that shows that they bought a shamrock MacBook Air, and also a new green desktop wallpaper. I don’t effectively know how that changes the color, but they said it definitely does, and then said “carbon neutral” again.

I look forward to seeing the machine in person, and especially in the vicinity of other colors, so that the bold new hue will hopefully pop —or seem like it’s more than a fourth silver.

2025-03-17 07:00:00

Category: text