Bluffing Your Way to Success

The release of the Sora slop-feed monstrosity has really done some shock and awe on the general public and on the press. People are wowed by the fidelity and the stability of the product, especially when compared to what previous iterations of Sora, and models like it, have been able to generate. This fidelity has also led more people to ask questions than they were asking back when the output was mushy, sliding, morphing junk.

It’s still a stock footage generator, but it’s trained on a significantly larger corpus of video. Video that OpenAI does not actually own any rights to. This increase in volume and diversity means there are fewer situations where the model creates something with an obvious malfunction inside those 10-second windows.

There is no logical reasoning or thinking component. No creativity, experience, or crew that went to go shoot something special just for you. This is reconstituted from bits and pieces. The seams may be unrecognizable to most viewers, but the content of the video is always a reworking of something else based on probability.

Nowhere is that better illustrated than in the disastrous deal between Runway and Legendary. They’re limited by the model and by their inputs, thus producing nothing but problems.

OpenAI is pulling this off with a three-pronged strategy:

  1. Tell the rights-holders that stealing everything is inevitable, so it’s a good thing OpenAI did it first, because they have tools that let you politely ask to have rights and likenesses excluded from the output, though not excluded from the training.
  2. Generate demand with the public through apps like Sora, where people can make brand-safe videos of themselves with corporate characters. You can see it in the feature’s name, cameo (which I am sure Cameo loves). No marketing team could ever do that without building and troubleshooting their own model.
  3. Invite rights-holders to see how they could use the tool to quickly produce “content” or marketing materials without having to pay for employees. Just pay OpenAI, because the value sits in the model that has stolen those rights.

It’s brilliant in a super-villain kind of way. Like Mr. Burns blotting out the sun.

Sam Altman, on his blog:

First, we will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls.

We are hearing from a lot of rightsholders who are very excited for this new kind of “interactive fan fiction” and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all). We assume different people will try very different approaches and will figure out what works for them. But we want to apply the same standard towards everyone, and let rightsholders decide how to proceed (our aim of course is to make it so compelling that many people want to). There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration.

Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences. We are going to try sharing some of this revenue with rightsholders who want their characters generated by users. The exact model will take some trial and error to figure out, but we plan to start very soon. Our hope is that the new kind of engagement is even more valuable than the revenue share, but of course we we [sic] want both to be valuable.

Hayden Field, writing for The Verge, about a Q&A event with Sam Altman:

He positioned the launch’s speed bumps as learning opportunities. “Not for much longer will we have the only good video model out there, and there’s going to be a ton of videos with none of our safeguards, and that’s fine, that’s the way the world works,” Altman said, adding, “We can use this window to get society to really understand, ‘Hey, the playing field changed, we can generate almost indistinguishable video in some cases now, and you’ve got to be ready for that.’“

Altman said he feels that people don’t pay attention to OpenAI’s technology when people at the company talk about it, only when they release it. “We’ve got to have … this sort of technological and societal co-evolution,” Altman said. “I believe that works, and I actually don’t know anything else that works. There are clearly going to be challenges for society contending with this quality, and what will get much better, with the video generation. But the only way that we know of to help mitigate it is to get the world to experience it and figure out how that’s going to go.”

Inevitability is a terrible justification for anything. It’s a fantastic way to drive a wedge between different stakeholders, though! “Oh well, if it’s going to happen no matter what, then I have to be on top…”

We took all of your control away from you, and we will let you have some of it back, if you agree not to fight us.

Even if rights-holders were to capitulate completely and sell out their intellectual property in exchange for access to that same intellectual property with fewer employees, there’s still the question of what value the general public assigns to this slop and to the people who use it.

In my previous posts on this subject I’ve pointed out that the general public doesn’t especially care for anything that seems artificial. That’s very true of movies, where even blockbuster film franchises will talk about trying to get something in-camera, or building all the sets and costumes, even if it’s not entirely true, because it’s just better marketing.

You’re standing next to a person in a Pikachu costume, you pose for a photo, and you have that photo of you with “Pikachu.” But now you can have a video of you with a cartoon-perfect Pikachu that you don’t even need to record. Instead of a memory of something janky that can only ever be so real, you have a high-fidelity non-memory of an unreality.

I assume Altman is banking on just overwhelming the public consciousness through brute force with these slop videos to the point where the public’s sense of what’s human-made is so unreliable that there can’t be any kind of pushback.

2025-10-08 16:10:00

Category: text