Unauthoritative Pronouncements

Subscribe About

4.0666K Retina iMac

The 4K iMac came out. When this resolution was originally mentioned in the rumors, I was a little confused. 4096×2304 is not what all the TV people call 4K, or even what the iPhone 6s and 6s Plus shoot. That 4K is the UHD resolution of 3840×2160. Apple doesn’t ship any display, on any device, that is 3840×2160.

We all watch scaled video, all the time, and that’s no sin. But it would be nice if Apple would ship something that was 1:1. If you watch “4K” video on your “4K” iMac, every pixel is being resampled to scale up by 1.0666.
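That 1.0666 figure is just the ratio between the two widths (and, identically, the two heights). A quick sketch of the arithmetic, in Python for illustration:

```python
# DCI-flavored 4K (the iMac panel) vs. UHD "4K" (TV and iPhone video).
imac_4k = (4096, 2304)
uhd = (3840, 2160)

# Scale factor needed to stretch UHD video across the full panel.
scale_w = imac_4k[0] / uhd[0]   # 4096 / 3840
scale_h = imac_4k[1] / uhd[1]   # 2304 / 2160

# Both come out to 16/15 = 1.0666..., so the aspect ratio survives,
# but every single pixel gets resampled.
print(scale_w, scale_h)  # 1.0666666666666667 1.0666666666666667
```

Both dimensions scale by exactly 16/15, which is why the upscale is uniform rather than distorting, and also why there is no clean integer mapping of source pixels to panel pixels.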

I am excited about the updates to the range of colors that can be displayed. From Jason Snell’s review for Macworld:

Apple says that the display in this 4K iMac, as well as the revision to the 5K iMac that was announced the same day, offers an expanded color space. Thanks to new red-green phosphor LEDs, the displays can display a wider range of red and green light than before, allowing them to display 25 percent more colors.

In a demo at Apple, I was able to detect subtle differences. The new displays can offer more color detail and more vibrancy than the display on the older 5K iMac models. I’m a little red-green color blind, and even I could detect the differences. If you work in graphics or video, you’ll probably be happy to have access to a display that’s capable of displaying 99 percent of the P3 color space.

P3 is DCI-P3, which… Well, go read this, and then this.

2015-10-13 09:30:00

Category: text

I Once Was in Maps, but Now I'm Found

I was very excited about Apple Maps when the rumors surrounding its launch circulated in 2012. The Google contract for map data was going to expire but the company that deeply respects its customers would have something amazing. I cautiously updated my iPad (3) to iOS 6 to test it out, and didn’t update my iPhone 4 to iOS 6 until the following January, when the Google Maps app was available. There’s no denying how bad it was, because Tim Cook even wrote a statement apologizing for it.

I keep testing out Apple Maps every few months to see what’s happening with it – usually when major updates are announced, or when some tech-pod-blog-ocaster mentions a positive experience. I would like to be pleasantly surprised, and then remove Google Maps. They are a company that specializes in personalized ad tracking, and here I am, shoving my location in their face.

Instead, here we are, three years later. Still working around Siri, and working around apps that integrate Apple Maps (like Yelp), and copying and pasting addresses into Google Maps.

Works for Me

When I have discussed my issues with Apple Maps in the past I’ve received feedback that was not entirely helpful. Informing me about how well Apple Maps works in places I am not has done very little to improve how Apple Maps works here.

Telling me that I can report issues from inside the Maps app is something I already know, and have talked about in the past. It is safe to assume I have reported the complaints detailed below. Expressing my disappointment is not mutually exclusive to reporting issues. They’ve spent three years working on Maps, and made a big song and dance number about it at WWDC in June. I am not being overly critical of some just-launched beta from a startup.

However, I must apologize for ignoring the earnest advice to move somewhere else. I’m sure if I moved somewhere my job wasn’t, then the quality of a maps and directions app would be unimpeachable.

Location, Location, Location?

There have been several improvements to location data both in the map view itself, and from the various iOS features which hook into apps. Certainly this is much better than when the service launched with multiple, conflicting addresses for locations, and conflicting Yelp reviews. Even last year, when I bought my iPhone 6 with iOS 8, the location data was still pretty wonky. One particular test I conducted was asking for directions to “City Hall”.

The Maps app now correctly lists the city hall for the city I’m in right now (Los Angeles) as the first suggestion when I type the query, and follows it up with the other halls that are closer to me, but not the city hall of the city I’m in. For some inexplicable reason, “City Hall” in Philadelphia, Pennsylvania is still listed as a suggestion. At least it is no longer the primary result.

However, if I hit “search” without selecting one of the suggestions, I receive “City Hall, London”. Not to spoil any surprises, but if you click the little car icon for driving directions the app will regretfully inform you that it could not determine a route.

Confusingly, when I ask Siri for directions to city hall, I am presented with a list of locations that’s completely different from those suggestions the Maps app generates, and one of them is a gospel hall.

I don’t understand why these results are all different. Even if it’s struggling with the generic term, it should have the same struggle everywhere.

What about “Downtown LA”?

No, Apple Maps. I can’t even.

On a more positive note: Someone at Apple accepted one of the many reports I’ve been sending about my apartment address. I check every six to eight months, so I’m not exactly sure when they fixed my address. I no longer need to get route guidance to my neighbors’ buildings, and I can use Siri commands like, “Give me directions home”. This is a very exciting development for me.

The other location improvements have less significance to me, but I do still miss Google’s Street View. If you tap on an address that has no Yelp data you get a spartan, white page with a slowly rotating satellite view of the street, which is useless.

Categories for nearby locations are very nice, and I haven’t come across anything miscategorized, but I have a nit to pick: it’s all radial to your location, and you can’t build a route. This is tricky, I know, but gas stations along my commute are more helpful than a radial cluster of pins from wherever I am along that commute. Waze has these sorts of features, but I had poor performance from their app and removed it in frustration. It would be nice if Siri could leverage the categories, and the route I’m on, as if it was some kind of virtual assistant.

LA Transit

(Update: I took Apple’s announcement at WWDC that iOS 9 would have transit support in LA to mean it would be included when iOS 9 launched. According to a page buried on Apple’s site, they have not rolled out Los Angeles support yet.)

Transit directions are of academic interest to me, but I have no real ability to test them other than trying a few random queries. While Apple has made arrangements for public transit data in LA, it seems to be very incomplete. No bus routes were available when I conducted some searches on the Westside. I also conducted a search from Culver City to the Los Angeles Convention Center, which should have shown me the Expo Line light rail route, but instead I got the same error as the buses. “Transit directions are not available between these locations. View routing apps.” I can select the installed Google Maps app and I’m presented with the Expo Line. It doesn’t even flinch. Maybe there are some routes that function in LA, since Apple listed LA as a city they would have this data for, but I can’t think of any.

Perceived Slowness

Since the Apple Maps app was introduced, the slow, whimsical animations have bugged the crap out of me. You don’t always see them, sometimes it just snaps directly to the relevant view. Other times it involves a lot of slow pans and zooms. I don’t need to see all of North America while the app slowly zooms in to where I am. I am pretty aware of where North America is in relation to me.

3D also seems to add some processing weight to the situation, which I don’t really need. Building height is one of those metrics they got from their aerial mapping that’s neat in demos, but doesn’t serve any purpose in the real world.

The flyover stuff… I mean, I guess it’s cool? But I don’t use it. It’s always in Apple’s marketing, but why? It doesn’t help you do anything. And it still has weird spots that … I just don’t know why this is a marketable feature.


It’s important to keep your eyes on the road. Glancing at certain elements of your console for vital info is a necessity. Using thin weights for the display of information in a navigation app is just dumb. At a glance, you can see the number of miles to your next turn, or decimal value thereof, and an icon representing the kind of turn you will need to execute. White bars float over streets, but you can’t read them, and the street you’re turning on to is so tiny and waif-like that it might as well not be there. A thicker weight is used for the time, but again, a small size makes it hard to read clearly in a split second. Things also wouldn’t need to be so small if they weren’t all crammed in the top bar.

Even the icon for the turn you need to execute can be comically wrong.

Google Maps does a much better job of communicating information at a glance. The top shows you an icon for the turn you need to make, as well as icons for lane guidance, and the name of the street you are approaching, instead of putting the emphasis on the number of miles you’re traveling to the turn. An estimate for the remaining time to travel is also available in large, thick numbers across the bottom of the display.

Google also changes aspects of the display based on traffic information, but more on that in the next section.


One of the things I’ve found puzzling about the design of the Apple Maps interface is that you can see traffic, and travel estimates supposedly influenced by traffic, in the route overview, but no traffic information is provided when turn-by-turn is on. All the roads are tranquil, neutral tones, and a serene blue path flows before you. It’s as if you’re in a kayak, on a river, being gently pulled along by the flow of water.

That’s not true, of course, because why would there be that much water in Los Angeles?

At heavy intersections, like Highland Ave. and Franklin Ave., you see no information about the flow of traffic in any direction. Instead of blue, you should see the streets run red with the blood of the Traffic God. Woe betide thee that commute on his most sacred of poorly designed intersections!

Tonight, Apple Maps routed me down Cahuenga to Highland. That sent me past the large, somewhat famous, amphitheater known as The Hollywood Bowl. Not a big deal, unless there’s an event at The Bowl. Guess what? There was an event! Van Halen! There were orange safety cones and traffic cops directing traffic at intersections. Apple Maps just herp-derped me through that. The only difference in the display was the estimated arrival time slowly ticking upward as I crawled.

On exactly one occasion I had Apple Maps present me with a yellow bar across the top, and Siri’s voice notified me that there was a delay due to an accident. (No alternate routing was provided on this occasion.) Waze has a leg up on Apple and Google when it comes to accident notifications. You even get notified about which lane the accident is in. Google sources some Waze data, but isn’t as specific. On the 101 N last night there was a very sudden slowdown, without warning, at a time of night when there shouldn’t be traffic at all. I waited patiently for Apple Maps to let me know what it was, and Apple Maps was oblivious to it. There was apparently a car accident that closed two lanes, and the car was being loaded onto a flatbed truck, so it wasn’t recent. Why Apple Maps kept silent about it, I don’t know.

Google Maps, in contrast, indicates traffic in several places. When cars are moving slowly, the road is red, as well as the estimated travel time. It serves as an appropriate cue that Google knows the street I’m on is slow, and thus I trust that Google is correctly monitoring the flow of cars. When Apple Maps has an unchanging interface, and an estimated arrival time that keeps ticking up, I have no sense that they understand the clusterfuck I’m stuck in. It’s not that Google Maps clears traffic, but it reassures you that it knows of it.

Google also provides alternate route suggestions as traffic conditions change. Sometimes there’s a prompt for a different route that will save X number of minutes. If I don’t accept the change, then I putter along on the current route. Far more often, I see the little gray paths with estimates of how many minutes faster, or slower, the route is.

I must ding Google for those little gray routes though. Oftentimes the minute-by-minute fluctuations of traffic change those paths, so they are not always improvements. Also, Google’s app will occasionally lay several of the route suggestions on top of each other. For instance, you might be on Melrose, where the original path is to make a left at Highland, and you see a suggestion to stay on Melrose that will be 3 minutes faster. You tap it, and get a slower route. Zoom out and you’ll see that there were two suggested routes that coincided at Melrose, so you got a route, but not the fast one you thought you were seeing. Why this behavior has persisted is beyond me, but it’s been there since they put these branching suggestions in.


Lane guidance is a feature present in Google Maps, but not found in Apple Maps. I find it invaluable when I am traveling in a congested area and unfamiliar with where turn lanes, or exits, will split and join. Some exit lanes might quickly expand into three lanes with turns in different directions, and Google Maps will tell you which ones you can be in, or even that you will be fine in the lane you’re already in. It’s a comfort, but not a requirement. If you miss a turn, or can’t get to an exit lane, then any map app will reroute you. It’s just nice to reduce the rerouting.

It’s not flawless though, and Apple could learn from Google’s mistakes if they ever implement this feature. Google notifies you of the lane arrangement as it will be when you need to change lanes, not in terms of the lane arrangement you’re currently in.

An irritating example of this is traveling through Downtown LA. The Google Maps issue is that the number of lanes on the 110, heading toward Santa Monica, changes rapidly. You’ll be notified to get in the exit lane to merge from the 110 to the 10 around the 7th Avenue exit. If you immediately maneuver to that lane in the freeway as you currently see it, then you’re in the wrong spot. Two more lanes will merge on to the right of you before you get off, and Google Maps was including those lanes in the guidance it gave when they weren’t there yet. Ugh.

It took me a bit to wrap my head around the way this guidance gets delivered, but I still prefer it to Apple Maps, which cheerfully asks you to exit right, sometimes with little warning.

Apple Maps also seems blithely unaware of where special turn lanes start and stop, unlike Google. On a major roadway, it is not unusual to have a left turn lane start a significant distance from an intersection, and feature a solid white line to deter people from making last minute lane changes. When making a turn from Santa Monica Blvd. to Beverly Glen Blvd., Apple Maps verbally alerted me to make a left after it was no longer legal to do so.

I have it on good authority that the vans Apple is driving around have lidar, and that lidar can be used for figuring out lanes. More light will bounce off the reflective lane paint than off of anything else, even reflective cars. They might just be throwing away all that data and using the lidar to make really neat, really useless flyovers, but I hope they are using it to determine lanes.

Until We Cross Paths Again

I’m not even remotely on the fence about this decision. I am very disappointed because it would be the most convenient, OS-integrated, privacy-focused application for me to use. I simply value efficiency too much to rely on it in high-traffic LA. I hope it continues to improve, and that they continue to build in features that help it better estimate, and communicate, road conditions, and provide me with an interface that demonstrates that.

2015-10-05 00:45:00

Category: text

Your Apple Music Trial Membership is Almost Up

That was the headline of an email that Apple sent to me yesterday. I checked around with a few people, and Apple doesn’t send a similar email to remind people that they have auto-renew on. Not that it’s the worst thing in the world, but I got a little chuckle out of it.

I turned off auto-renew back when I discovered the ways in which iCloud Music Library was broken for me. I figured I would see how things went with updates and the Radar ticket that was open.

It’s September 30th and while several updates to iOS have been released (and god knows how they version whatever’s on the server backend) I still don’t have a functional iCloud Music Library. My Queen albums are still jacked, and my playlists created before joining Apple Music still spontaneously — and simultaneously — combust. They did close my Radar ticket as a duplicate, so I suppose that counts for something.

Someone might think that it’s sort of silly to obsess over a few broken things. I have access to all the Queen catalog in Apple Music, even if it isn’t ordered and rated the way I had it. I could rebuild all my playlists, manually, as totally new ones. It isn’t impossible to do these things, I’d probably lose many hours, but it would probably work. Probably being a key word there.

There are all the other problems I was having with the interface, Connect, For You, Beats 1, and discoverability. So it’s not like it was perfect for me.

Since I never spent $9.99 a month on music to begin with (my monthly average is several dollars lower), and it basically doesn’t make my life any better, it makes no sense for me.

The part that does make me a little annoyed is that I’m not sure how things are going to shape up for non-Apple-Music subscribers in the future. When major iOS revisions come out, are the engineers even going to check and make sure non-iCloud music syncing works? Will they make sure widgets draw correctly? Will purchasing music in the iTunes Store iOS app get buggy and weird? (I mean, worse than now, obvi.) I may be back to using Apple Music if the balance of frustration, and neglect, tips the other way. It’s not like I’m switching to Android.

Your Music may vary.

2015-09-30 08:15:00

Category: text

The Apple TV Countdown

I am on vacation and haven’t been writing about all the TV stuff I would ordinarily have been. I have kept an eye on Twitter, and read an article here and there. Apologies for not listening to all the podcasts, and reading all the things, but there are a couple words I would like to arrange before the event.


The most surprising part of this is the rumored focus on gaming. I had guessed early on that TVKit would more likely be for creating media apps than for games (this still might be the case, and GameKit could be expanded to include the TV functions, but that’s splitting hairs). The noise makes it seem very likely that there will be several game demos. Mea culpa.

There is some strange trepidation in the tweets I’ve read from people who are serious about games, along with some outright denial, and defensive posturing, over whether or not Apple “can” and “should” do anything games related. I have to assume these reactions are mostly tied to fears gamers have over people that aren’t serious about games entering the game space, whether that’s Apple as a company or common folk buying an Apple TV to play games. I’m just going to outright dismiss those concerns. Almost nothing Apple demos on stage, game wise, ever really turns into anything huge. Developers usually find fun games to make much later. Infinity Blade, and cherry blossoms, and fish, etc. Humorously, Apple occasionally ropes EA into showing off some garbage. I assume they feel it helps lend credibility to their game demos.

None of that has ever inhibited people from playing games on iOS devices, or served as much of a prediction about how games would work even a few months after hardware and software releases.

People coming down hard on whether or not the Apple TV will work as a games console should read this terrible piece in Variety where the writer discounts Apple’s abilities because there is no OTT service to prove how serious Apple is about content. That kind of writing is mostly how all the gamers writing about Apple read to me. (That might be an extreme example since that Variety piece is just so bad.)

It’s worth circling back to the video content discussion, and that Variety piece, to highlight how myopic it is.

Launching a box without a new content service offering doesn’t surprise me at all, given that I’ve been arguing that for months. (Seriously, Tim, get in touch.)

I still expect third party apps for content streaming services to be demoed at the event. Perhaps a mention of HBO subscriptions? It is inevitable that someone will demonstrate something sports related. MLB runs many of the streaming services for other companies (including non-sports HBO, as well as other sports like the NFL). I don’t much care for sports, but they are undeniably a significant force, especially when it comes to adopting new technology. So while there’s no OTT with ESPN bundled with local broadcast affiliates, there will likely be something.

I don’t know if Apple will demonstrate anything for international customers at the event as I’m not sure how global the initial device launch is.


Why launch without every little thing? Why not wait forever for OTT? Because selling devices that can use an OTT service later makes for great leverage in the neverending negotiations. The networks and studios have no real deadline to adopt Apple’s terms. No urgency. They will continue their slow, downward spiral because it still seems like the most stable option to them.

Chicken and egg. There aren’t enough Apple TV customers to make Apple’s OTT terms worthwhile, and you can’t sell Apple TVs that only offer non-existent OTT as a selling feature. You offer gaming and other media apps, sell the Apple TVs, then you have enough customers to make OTT worthwhile.

4K Video

I had already guessed there would be no 4K video way back when because there simply isn’t the inventory of available UHD remasters, and studios would likely demand increased prices. I do expect an eventual shift in the iTunes Store, just not now.

The next iPhone recording “4K” video presents an awkward little dance since the TV won’t have 4K playback, and nothing Apple makes actually has 1:1 pixels with UHD. Even the theoretical 4K iMac would scale the video up. All other models, besides the 4K and 5K iMacs, will scale the video down. Including home movies played back on the new Apple TV.


I appear to also be wrong about Siri. Many moons ago, Dan Moren wrote a piece for Six Colors about his wish for Siri on the Apple TV. I was, of course, uncharacteristically pessimistic about Siri, and wrote how I’d rather have a remote with Touch ID. Guess I owe Dan Moren a drink or something? Or pistols at dawn? I can never remember which.

The event, which will transpire before anyone probably reads this, will surely be interesting.

Event Space

Did it occur to anyone that the space is so big because offering hands-on TV demos takes a lot of room? Especially TV demos with Siri which necessitate some level of noise control? The place is probably full of little rooms with TVs.

I wonder what TV panel Apple will have on display? I doubt it will be Samsung. It would be pretty funny if they went to a lot of trouble to obscure the manufacturers with some black tape. Hehe.

2015-09-09 02:00:00

Category: text

Media Stocks Tank After Analyst Says TV Business is Broken ►

Mathew Ingram writing for Fortune:

Disney (DIS -1.18%) alone lost 6% of its value, ending at its lowest level in six months, and has now lost more than $30 billion in market cap in a little over two weeks. Time Warner (TWX -1.62%) was also down about 5%, to its lowest level in 2015, and 21st Century Fox (FOX -2.65%) was down a little over 4%. CBS (CBS -2.04%) and Discovery Communications (DISCA -0.81%) were both down by about 5%, and Viacom (VIA -0.95%) dropped by more than 6%.

The stocks recovered a little bit today but they’re all still down.

The analyst comment that set this all off:

“The market is now valuing U.S. ad-supported TV businesses as structurally impaired assets,” Juenger said. “We believe this is fair and warranted, because: a) we believe TV advertising is undeniably in secular decline; and b) affiliate fees are now also being put at increased risk. When an industry is undergoing a massive structural upheaval, one major revenue stream is already impaired — and now there are signs the second one may be as well — investors won’t wait for final conclusive evidence to reevaluate how much they are willing to pay for the existing status quo cash flow streams.”

In plain English: ad sales are going down, and fees collected from satellite and cable subscribers are declining. It really isn’t so jarring if you’ve been paying attention to media reporting. The media reporters just usually frame it as slight downward trends. Wall Street frames non-growth as death. Those guys are so fun.

As Mathew notes, Netflix and Google are also down. He speculates that “The Market” has taken its anger out on all media. Those guys should sacrifice a small animal, or something.

2015-08-21 22:15:00

Category: text

Studios Gamble on Untested Directors for Big Movies With Mixed Results ►

Josh Rottenberg wrote a piece for the Los Angeles Times comparing Colin Trevorrow to Josh Trank. The first few paragraphs make it read like it really is about the two of them, but then the piece goes in a more interesting direction and speculates about why studios want someone inexperienced for these big tent-pole productions.

Both directors were caught up in a trend that has gathered steam in recent years, as studios have been increasingly looking to untested directors to helm high-stakes tent-pole movies. Most recently, in June Sony Pictures and Marvel Studios hired Jon Watts to take over the “Spider-Man” franchise on the strength of his minimalist thriller “Cop Car,” which Watts shot in his rural Colorado hometown for just $800,000.

Sony did something similar already, when they hired Marc Webb to direct the Amazing Spider-Man reboot, and brought him back for the sequel. Sort of a mixed bag there. If you scroll down to the bottom of the LAT piece there’s a “Nine young directors who’ve made the leap from small films to blockbuster projects” list that even highlights Marc Webb, in spite of the text above discussing the new fresh face being brought in for the Spider-Man franchise.

There are many factors, but one that doesn’t typically get brought up is that much of a film is made after the footage has been shot. The old joke “we’ll fix it in post” is something everyone’s heard before.

“The studio executives and marketers want to control the movie so badly, they don’t want a visionary director,” says one high-ranking talent agent. “They want to basically make the movie themselves. So much of it is made in CGI now anyway that you can fix it if it’s messed up, so they can get away with a lot more mistakes. And they don’t really care about deep performances from the actors — that’s not really what they’re looking for.”

I can’t speak to this from any tentpoles I’ve personally worked on, but it’s not unreasonable to assume that this is a possible explanation for entrusting unknown directors. Re-editing sequences, flopping plates, stitching two plates together, omits, reshoots, and completely animated shots that can be tweaked until one month before release.

Colin notes that he didn’t experience that on Jurassic World but Trank, in his deleted tweet, does lay the blame at the feet of studio meddling. It’s possible the executives at one studio don’t intervene like they do at another, or that they only step in if they (the suits) perceive a problem, whether it’s warranted or not.

As an audience member, I frequently wonder how a studio went along with a director’s impulses, but I also condemn a studio interfering with the artistic intent and making a movie by committee. It’s kind of hard to reconcile these opposing views.

The article even touches on gender for a bit, noting that inexperienced men are getting these offers, and there doesn’t seem to be the same happening for women. Colin chimed in with a theory that many women are turning down the opportunities offered to them — LAT highlights Ava DuVernay turning down Black Panther. I’m not sure that I would really focus on her turning that down as an example that women just don’t want these jobs.

In any event, it’s worth thinking about what’s in Rottenberg’s article.

2015-08-20 01:00:00

Category: text

How's Your SIGGRAPH?

This past week was SIGGRAPH, a yearly event held in different cities. The last LA one was in 2012, so it’s been a while since I’ve been.

I half-jokingly suggested to Dan that we meet up at SIGGRAPH and tour it like other podcasters do with CES or WWDC. One of our favorite podcasts (we’re podcast fans too) had a comedy bit poking fun at people constantly asking other people, “How’s your CES?”

Then, before I knew it, we had plans in place and Dan was coming to LA, I was taking a day off work, and we were making silly jokes.

It was such a busy week I haven’t even had the chance to reflect on it until now.

Tuesday, I had to drop off my car (some jerk hit it when it was parked) and pick up a rental, work a day, drive to downtown (turns out there was a Dodgers game causing traffic!), go to a Ringling College of Art and Design event, stop by the bar Dan and I selected for the meet up, and then meet Dan in person in Little Tokyo. I found out at the bar that they had a membership policy, so I got to worry about that, and I had a nasty aftertaste from a margarita and a mojito, so I took a little travel-sized bottle of Listerine I had with me to meet Dan. I spit the mouthwash into a planter just in time to turn to Dan waving to me. I’m not sure how the day could have gone any smoother.

The next day, I met Dan again, picking him up from the LA Hotel (weird name, right?). I put Dan in charge of getting us to the downtown Blue Bottle Coffee (formerly Handsome Coffee). This didn’t work out because Dan’s phone thought he was still in Arizona. Good thing we were in Downtown Los Angeles where the streets are so easy breezy. Ha. We got there, got our coffee and headed back to The Los Angeles Convention Center.

The LACC is a sprawling complex of buildings, with various halls and parking garages. The LA Auto Show uses up the whole thing and there are frequently plenty of people parking in the private lots around the center. Not for SIGGRAPH this year. Everything was closed up except one garage. Hardly any foot traffic around the building. If you were passing by the buildings, you would assume it was closed.

Making our way to pick up our passes was also strange. In years past, the registration has been in rooms, or in the lobby, downstairs. This year they crammed it next to the “Art Gallery” section. Dan and I had “Exhibition Only” passes which didn’t include the brightly colored, VR-tastic area so we walked back to the main show floor.

So small. Not only were there fewer companies on the floor, but each company dialed back their elaborate booths. Areas had little stands with cloth curtains to sort of shrink the space (so it didn’t look like a void with a few stray booths). It was pretty depressing and it took hardly any time to walk the show floor to survey what was on offer. Some booths were just a table, others had tables and some demo stations of different products, like Image Metrics, which would track bloody wounds, or makeup, onto your face in real time.

A few booths had space for presentations, with some chairs or benches, and large screens. Dan and I witnessed a few of the presentations, but it was all fairly auto-piloty, with slideshows, or sped up movies of workflows.

The Foundry hosted some nice ones for Mari, with two texture painters from a video game studio, and another with a presentation from Tippett Studios about how they used Katana and Nuke to quickly execute a sequel project in half the time of the original. (Videos of the booth presentations are available on The Foundry’s site, but it does require creating a login to view them.)

There were some presentations in little rooms upstairs, but the schedules weren’t posted anywhere Dan and I noticed until we wandered up there. By then it was mostly for topics we did not have an enormous interest in.

Even though it was Wednesday of a week-long convention, it seemed to be winding down. Most major things seemed to have happened Monday or Tuesday. I certainly wouldn’t book a week to attend, unless I was some big head-honcho. A Renderman “Art and Science Fair” was scheduled for that night, but it ran for several hours and would have consumed the limited time Dan and I had (besides, neither of us uses Renderman these days). Renderman did seem to be the biggest draw, but mostly because people are interested in Pixar (the line for the walking teapots was so long).

We went back to Dan’s hotel, recorded half of a podcast episode wrong, and then half of a podcast episode right. Listen to Episode 59 here.

We grabbed some dinner at a pretty lackluster restaurant (rounding out a full day of pretty unexceptional dining) and finally sorted out how to get people in to the membership-required bar (Dan and I are both members of a rum bar now). One of the podcast listeners who came to the event even went to SIGGRAPH, but I continue to be fascinated by the listeners we have who enjoy the show regardless of all the inside-baseball VFX nerdery. Very thankful for all the listeners, even those who could not make it.

Reflecting on the whole thing, I came away with a pretty negative impression of SIGGRAPH 2015. It doesn’t seem to serve artists a whole lot; it seems like more of a corporate networking event. Even the job fair section shriveled up and tumbleweeds blew through it. Although Imageworks had a big booth, they were hiring for Vancouver, which still hurts. I wish the people I know there well, but I’ll never be able to work there again. Seemingly none of the other companies were all that interested in LA either. Dan got a free mint though.

Incongruously, there is a ton of cool stuff that comes out of SIGGRAPH: papers, presentations, software, etc. It mostly affects you if you’re lucky enough to work at a company that can take advantage of these advancements, or even one with an R&D budget. I encourage everyone, regardless of their chosen discipline, to check out the work. Stephen Hill from Ubisoft in Montreal is collecting links and putting them up on his blog.

How about a demo of OTOY’s path-tracing, physically-based renderer that streams right to your desktop browser? There’s even a real-time subsurface scattering demo that works in WebGL in your friggin’ phone’s browser.

Seriously, go look through all the amazing stuff people make that you won’t see on tech news sites.

However, if you would prefer to digest this through a news site, I would recommend fxguide, which had a number of people covering it in great detail. Day one, day two, and day three.

The industry I work in sure has changed a lot in the three years since I last attended one in LA, and I was confused about why they even bothered to have it in LA at all this year, let alone Anaheim next year, and LA again the year after that. Sony Pictures Imageworks’ move north made Vancouver the largest concentration of VFX workers. Sure, there are small places, like my current employer, as well as Disney Animation and DreamWorks, but it hardly seems like a thriving community with high morale. Video games seem to supplement some of those motion picture losses, but they mostly seek out engineers, not artists. Same for VR.

Would I attend another? Sure. Who knows, maybe things will turn around for people in my particular position. Barring that, I’ll at least be able to document its steady decline. Yay.

At least I have the podcast with Dan, people that enjoy it, and some neat projects to look at.

2015-08-16 23:52:00

Category: text

Supporting Independents

One of the things that I’ve started to question about Apple’s media strategy is how they court the big power brokers instead of fostering a new, independent wave of content creators.

A fascinating Twitter account to follow is @YTCreators. Little tidbits of info surface throughout the day pointing to pages explaining aspect ratios, or to accounts showing videos on low-budget video production, and especially announcements about updates to the app rolling out. YouTube wants people to make YouTube videos, not just videos. There’s a whole experience they want to continue to grow.

A YouTube channel about making YouTube videos is also a natural fit. Videos range from educational to inspirational — like this video with Hannah Hart.

YouTube provides space to work, and collaborate in several major cities for channels with at least 10,000 subscribers. It’s so popular the summer signup period is full, and people have to check back after September 1st. They even offer classes, but they’re full, and they require 500 subscribers to qualify. (Which seems pretty weird for classes on getting started.)

Apple doesn’t have an education program like this. You can sign up for iMovie classes at a local Apple Store, or you can search iTunes U for a class that will cover the skills.

People don’t need courses, or fancy studios, or very specific software instructions in order to make video, or upload it to the internet. Creating a focused community around a type of media, and supporting that work, will foster the specific kind of content a company would like. That sounds so cynical, and unartistic, but only when viewed from very far away.


Apple’s most recent investment in social media, and video, is the Connect tab in iTunes Music. It’s not great. It’s limited to musicians right now, but even if it were expanded to cover a wider array of media, like YouTube does, it would still seem pretty anemic. Marko Savic speculated about it as a possibility on an episode of Unhelpful Suggestions.

Anyone can start a YouTube channel and upload video. Connect currently requires an artist to be selling music through iTunes, which makes sense since it’s about promoting material on iTunes. But if there were a Connect for TV and film, it’s worth checking out what it currently takes to sell a TV show (the closest analog to a vlog series):

The requirements to work directly with Apple are listed below. If you do not meet all of these requirements, you can work with an Apple-approved aggregator instead. Aggregators are third parties that can help you meet technical requirements, deliver and manage your content, and assist with marketing efforts.

Technical Requirements:

- iTunes does not accept content in physical formats like VHS, DVD, etc. You must deliver your content as a digital file through one of the Apple-approved encoding houses. Be sure to compare their services and fee structures, as this will be a separate cost if you work directly with Apple. Appropriate file storage capability and bandwidth is also required. Alternatively, you may work with an Apple-approved aggregator instead.
- All video content must be stored in Beta SP format or higher. iTunes only sells video content that is DVD-quality, so the quality of your source must be significantly higher than a standard definition DVD.

Content Requirements:

- At least 50 hours of network-aired TV content
- Digital distribution rights for all content you intend to sell on the iTunes Store
- All associated music and talent rights cleared for digital distribution

Financial Requirements:

- A U.S. Tax ID
- A valid iTunes Store account, with a credit card on file

* Apple does not pay partners until they meet payment requirements and earning thresholds in each territory. You should consider this before applying to work directly with Apple as you may receive payments faster by working with an Apple-approved aggregator.

Note: Meeting these requirements and submitting an application does not guarantee that Apple will work directly with you. You may still be referred to an Apple-approved aggregator.

So just get a network TV show, use an approved encoding house, or figure out the aggregators, and you’re all set! Easy peasy!

Compare that with signing up for YouTube and uploading a video.

Indeed, even signing up to distribute a podcast (and dealing with XML validation!) seems far more manageable. What if there was a Connect for Podcasts? Well… they’d need to figure out some kind of revenue stream to compete with YouTube, because ‘free’ wouldn’t cut it.
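For a sense of scale: the podcast directory ingests a plain RSS 2.0 feed, and the “XML validation” hurdle is mostly making sure that feed parses at all. Here’s a minimal sketch, with every show name, URL, and file size made up for illustration (a real feed also needs iTunes-specific tags for artwork, categories, and so on):

```python
import xml.etree.ElementTree as ET

# A bare-bones podcast feed. All values are placeholders.
feed = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Show</title>
    <link>http://example.com/show</link>
    <description>A show about making things.</description>
    <item>
      <title>Episode 59</title>
      <enclosure url="http://example.com/ep59.mp3"
                 length="12345678" type="audio/mpeg"/>
    </item>
  </channel>
</rss>"""

# fromstring raises ParseError on malformed XML; passing this
# parse step is the bulk of "dealing with XML validation."
root = ET.fromstring(feed)
print(root.find("./channel/title").text)  # -> Example Show
```

That’s more or less the entire barrier to entry: host an audio file, publish a feed, submit the URL.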

Oddly enough, artists can upload videos, and audio, as part of Connect posts that aren’t part of the iTunes store. Unfortunately, you can’t go back and find those things later because they don’t appear in search results, only by scrolling through the Connect stream.

As for growing a “brand”: it’s a one-way broadcast tool. There are comments, but they’re tucked away. They’re presented in a chronological list, and there’s no pinning, emphasis, threading, or promotion that can be applied. It’s like traveling back in time to 2000. Looking at the comments is just sad.

Connect is clearly for the already established to announce things. They might as well turn off the comment feature.

What Ever Happened to Hollywood?

I mentioned “the established,” but I’m not just referring to the already successful stars, directors, and musicians of the world; I also mean the studio systems that support them. Indeed, much of that media might is held by a small group in Los Angeles. They’re pretty old, and not particularly in touch with the youth of today. They control content deals, and they’re the reason why “progress” gets tied up for an eternity.

Apple wants the established media, instead of fostering independent creation, and the established media doesn’t want to give an inch. They might be less and less powerful every year, but they can still cling to what they have to prevent a disruption in the business of buying (leasing), and collecting (please lease it again when there’s a new format).

It seems practical for Apple to invest in independent content creators, like YouTube has, and continues to do. Otherwise, they might find themselves in the unenviable situation of needing only Hollywood.

“You aren’t ever gonna sell this house, and you aren’t ever gonna leave it, neither.”

2015-08-12 09:00:00

Category: text

It's Practical and Digital, Not Practical vs. Digital

The Verge published a piece on August 4th titled “2015 is the year of Hollywood’s practical effects comeback“, but I mercifully remained ignorant of its existence for four days. This is another long, rambling post about how “practical effects” are good, and “CGI” is bad. You know, the kind of article that sounds very appealing on the surface because it talks about how much better things used to be, and how bad things are now. I am particularly irked that these poorly reasoned opinion pieces get broadcast to large audiences. It’s one thing if someone wants to tweet this; it’s another thing if a journalist uses his platform to broadcast things that aren’t accurate. It makes the discourse worse when that happens.

The reason it happens is that writers don’t know enough about VFX, which makes that black box an easy target. It’s whatever’s in the mystery box that made the movie bad.

The same day Kwame published his piece, Freddie Wong put out a great video that undercuts these sorts of arguments. The timing is coincidental, but the subject is the same. Freddie’s video is a great way to demonstrate the problems with the “CG sucks” argument. It’s not perfect, but it’s my number one choice to refer people to.

Ben Kuchera wrote a small bit in favor of Freddie Wong’s short video for another Vox-owned site, Polygon:

The computer is a tool, and some folks know how to use it well while others don’t. It doesn’t make the tool bad when it’s used gracelessly, and we have to improve the conversation about how special effects are used in modern media … especially when we don’t even know they’re there.

Caroline Franke also endorses Freddie’s video, and agrees with him, over at the main Vox site. Curiously, she elected to embed a video from Todd VanDerWerff’s disastrous piece in favor of practical effects. The one where Todd had to print a correction that he was using an E.T. with a computer-generated face (he couldn’t tell).

Back to this Verge piece:

I’m going to go through Kwame’s opinion piece and break it down. It’s not the nicest way for me to spend my time, but I don’t want to leave any lingering doubt: this sort of film critique isn’t helpful, and it’s damaging to the public perception of what I do for a living.

2015 is the year of Hollywood’s practical effects comeback

The biggest set piece in Mission: Impossible — Rogue Nation is also its first scene. We’ve all seen it in the trailers: a frantic but determined Ethan Hunt (Tom Cruise) clutches the side of an Airbus A400 for dear life as it takes off into the stratosphere. While the scene itself is only tangentially related to the overall plot of the movie, Paramount made sure this was the scene that got people into theaters. A large part of this strategy was broadly publicizing the fact that it wasn’t faked. No CGI was used. No expense was spared. Tom Cruise was really and truly strapped to the side of that plane.

Here we see the first problem. If Paramount had not promoted this as being a real stunt, then no one would know it was real or if computers were used to augment reality.

Indeed, computers were used to augment this very scene and remove the wires used to safeguard Tom Cruise’s life. Wire removal is still a visual effect, and it’s not a flashy one because you’re not supposed to see it. It is an invisible effect.

The Mission: Impossible franchise decided long ago to place its bets on over-the-top stunt work — Cruise famously scaled an actual section of the Burj Khalifa in Dubai for Mission Impossible: Ghost Protocol in 2011, for example. But in 2015, practical effects and stunts aren’t exceptions to special effects rules. As some of the biggest movies of the year — namely Rogue Nation, Mad Max: Fury Road, and the upcoming Star Wars: The Force Awakens — rely more and more on real-life actors for their action scenes, we might be seeing the start of a shift away from CGI as practical effects become a bankable alternative.

First of all, he is citing a film no one has seen as an example of practical effects being used well. Secondly, he speculates about real-life actors being used for their action scenes more often. That has nothing to do with practical effects. In the old days, stars would perform their own stunts, on occasion, if it could be safely executed because there was no technology to do face replacement for stunt doubles. Sometimes, you’d just see a stunt double! Digitally replacing someone’s face, or putting an explosion behind them, isn’t inherently worse. If Paramount had not told everyone Tom was really on that plane then no one would have known, because visual effects artists can believably pull that off these days. You could say that it would totally fly under the radar.

As far as “bankable” goes? Marketing select stunts as practical might be novel, but it’s not a distinguishing feature of the film as seen on the screen.

There’s certainly no question that CGI can take fantasies and make them seem like reality on the big screen. Recent successes like Furious 7 and Avengers: Age of Ultron wouldn’t be possible without computers allowing for flying suits of armor and cars flying out of buildings. But after more than a decade of high-octane CG theatrics from huge box office juggernauts like Transformers, Harry Potter, Avatar, Star Wars, Star Trek, Terminator, literally anything the Wachowskis make, and every Marvel and DC tentpole, audiences might be getting fatigued of digital models exploding into countless pixels. As Variety TV columnist Brian Lowry put it after seeing Age of Ultron, CGI can now prove “more numbing than exciting, even during what should be the show-stopping sequences.”

Kwame and Brian incorrectly take a writer’s, or director’s, injudicious use of a tool to mean that the tool itself is flawed. Killing a large number of nameless, meaningless things – whether digital or practical – will always be hollow regardless of the means used to execute the effect on screen.

Groot was a digital character in Guardians of the Galaxy and people loved him. They felt bad when bad things happened to him. In the same film, there are waves and waves of people dying and it means very little. If a computer didn’t touch those scenes, they would read the same, emotionally; they would just cost a ton of money to manufacture. If any journalists would like to spring into action and second-guess the things Hollywood spends money on, go for it, but that’s not this argument.

Hollywood’s reliance on CG has only intensified. In the 1970s and ‘80s, movies like Westworld and Tron made use of rudimentary computer graphics to dazzle audiences who’d never seen such worlds on the big screen.

Really? That was the perfect execution of computer graphics in film? Westworld and Tron? They should have just held steady there?

Fast forward to 2014, and we got Transformers: Age of Extinction, a movie full of so much visual noise that it was hard to tell what was even going on.

Again, this wouldn’t read as a better experience with stop-motion robots. Maybe, just maybe, Transformers: Age of Extinction might have issues with story and direction?

As computers have gotten more powerful, studios have used them to create bigger spectacles. Bigger spectacles translate to bigger box office returns; according to Box Office Mojo, six of the top 10 highest grossing films of all time were CGI-fueled summer epics that came out in just the last five years. Three came out this year alone. Raising the stakes for what we expect from our popcorn fare inevitably means upping the visual ante. That’s not necessarily a bad thing, so long as it’s done well. But in overindulging what it thinks is our bottomless appetite for bigger, more bombastic movies, Hollywood might be battering our senses to the point of dullness.

Huge factual errors here, because it assumes the films are successful because they used digital effects. Even the most mundane projects use digital effects: set extensions, a couple of sky replacements, makeup fixes, wire removal, painting out camera reflections. Lots of stuff. It is a part of filmmaking.

Also, if spectacle was inherently successful then all expensive, VFX-driven films would be successful. That is not the case. Even Disney, which has some of the biggest successes, with VFX out the wazoo, has had very expensive flops. Sadly, Tomorrowland was not well received this year, and that was “done well”. It had nothing to do with “dulled senses”. Pixels and Fantastic Four are loaded with effects but didn’t perform as well as other VFX-heavy productions.

Spectacle, even if it’s executed well, isn’t going to guarantee the movie is even a financial success. Regardless of it being physical or digital.

As Matthew Zoller Seitz wrote for RogerEbert.com last year:

Despite their fleeting moments of specialness, “The Avengers,” the “Iron Man” and “Thor” and “Captain America” films, the new “Spider-Man” series and “Man of Steel” treat viewers not to variations of the same situations (which is fine and dandy; every zombie film has zombies, and ninety percent of all westerns end in gunfights) but to variations of the same situations that feel as though they were designed, choreographed, shot, edited and composited by the same second units and special effects houses, using the same software, under the same conditions. As long as people are talking, there’s a chance the movies will be good. When the action starts, the films become less special.

In other words, all this is expected, and the miracles that cinema pulled off 30 years ago — the moments when audiences felt transported to the directors’ dreamscapes — now feel rote.

This has nothing to do with using a computer to make images on a screen. This has to do with the images that get approved to go on that screen at the whims of the director. It has nothing to do with the specialness of the tool.

It should really be pointed out that people make computer generated effects. A computer, by itself, generates nothing. If that were the case, your home PC would be pumping out Pixar classics while you browse the web for new shoes.

Just as people made miniature models to blow up, or painted matte paintings, or drew lightning by hand. People have to make it happen.

But in recent years, there’s been an attitude shift bubbling up among some of Hollywood’s biggest-budget filmmakers. In a recent interview on The Tonight Show Starring Jimmy Fallon, Rogue Nation co-star Simon Pegg talks about the merits of dangerous stunts over CGI:

“These days,” he says, “CG is an amazing tool, and we love it and it enables us to do amazing things. But when you see something which is digital, there’s a slight sense of disconnect. You know it’s not real. Tom taped himself to the side of a plane for real! That’s how much he cares about you!”

What Simon Pegg is describing is when he knows something isn’t real. That means the effect didn’t work out. You can also know that animatronics, stop motion, matte paintings, and optical lightning are in a movie and they’re not “real”. Go watch Arnold take a table-tennis-ball-sized tracking device out through his nostril in Total Recall – or basically any effect in that movie. Tell me how real it feels.

There actually is a visceral sense of danger and even wonder as you’re forced to acknowledge that a human being is risking their life for a film, much in the same way that there’s a greater feeling of connection to a person in makeup over her CG counterpart.

Only if you know they did. The goal is that you can’t tell whether or not they did. It is so very easy to highlight effects that did not work, but it is hard for audiences to perceive the ones that did. Freddie Wong’s video highlights a couple examples that are worth considering, but if you watch enough behind-the-scenes videos you’ll see plenty of other invisible effects.

That’s certainly true for Mad Max: Fury Road, whose promotional push made much of the fact that it was shot in the Namibian desert with real cars, real explosions, and a real flamethrowing guitar, as if to remind people that things like that could still be pulled off in real life. Of course, director George Miller also used plenty of digital effects to push his scenes over the top. Of the film’s 2,400 shots, 2,000 of them were VFX shots. But set pieces that might have been done purely by computer in other movies were choreographed in real life, making for some beautiful but incredibly dangerous scenes.

This is where Kwame should have realized his whole argument against the pervasive use of computer graphics made no sense. 83% of a film. No big deal! Not to mention the color grading (digital), the editing and retimes (digital). That is not to belittle the importance of the work performed on scene, but to highlight how this had nothing to do with computer graphics being bad. VFX shots aren’t cilantro.

And it’s especially true for Star Wars: The Force Awakens, which hits theaters later this year. Director J.J. Abrams has continually and consistently paid deference to the practical ingenuity that made the original trilogy so great. So in addition to the return of the original starring cast, we’ve also been promised a return of practical effects. The new X-Wing? Real. BB-8? Real. The Millennium Falcon? So real as to injure Harrison Ford on set. These decisions are billed as a return to form, as a chance to go back to the way things are supposed to be.

Not to beat a dead horse, but the movie isn’t even out yet; the marketing push is. Also, that list is wrong, because the Millennium Falcon as a set piece is real, but the ship you see flying around sure as shit isn’t. Those “X-Wings” are real set pieces, but it’s not models flying over that water.

Don’t give me this “real” stuff about a space movie in a galaxy long, long ago that was part of a $4 billion sale to a media conglomerate. It’s about illusion. It’s great that a person might think it’s real, but it’s about the suspension of disbelief, not physical manufacturing. Physically manufactured elements can help, but it’s not like anyone believed the puppet Yoda in The Phantom Menace was real, in spite of it being a physically manufactured puppet. (Except for the walking part.)

And that’s likely the whole point — that striving for verisimilitude today means moving away from the CG that’s an industry standard and reminding audiences of how directors like Spielberg and Lucas did it way back when.

No, no it isn’t the point at all. Film is not a documentary process. Truth is belief, not reality. Use the writing, acting, makeup, costumes, stunts, sets, locations, color grading, editing, special effects, and visual effects that make the audience get swept away in the story.

It’s clear that, at a time when so many of today’s movies are reboots or returns to older properties, studios are trying hard to mine for what made people feel so good about going to the movies in the first place. Directors like Abrams and actors like Tom Cruise seem nostalgic for a time when connecting to magical objects, spaceships from far-off galaxies, and actual peril meant relying on props, makeup, wires, and daring. They’re both saying that today’s CG landscape can’t pull that off because we take computerized effects for granted.

No, I think they did it this way because they thought it would work for the movies they were making. Again, it bears repeating that the Star Wars movie has not been seen by Kwame. He’s immediately lauding it for practical effects.

That doesn’t mean that practical effects are inherently better or that CGI shouldn’t ever be used. It just means that, like music lovers preaching the gospel of vinyl, some directors are pushing back against CGI because practical effects express their ideas about how their particular movies — and maybe movies in general — ought to be made.

What?! A whole slew of words about how computer graphics shouldn’t be used and we come to an analogy about vinyl? Vinyl?!

I anxiously await the print edition of The Verge on my local newsstand, because paper is an inherently better medium.

At the end of the day, though, Mission: Impossible — Rogue Nation is just a popcorn flick, diverting but ultimately empty.

“Anyway, I think practical effects make movies good but this movie wasn’t good, so oh well.”

Special effects can never replace the connection created by an excellent story that keeps you invested from start to finish. But you do feel something during those stunts, an elevated kind of thrill knowing Tom Cruise really is on that plane or on that motorcycle, risking his life so that we can have fun for a couple of hours at the movies. It could be argued that Rogue Nation would be no better if the whole thing were done with green screen.

I will gladly argue this. In fact, I have, above. Kwame’s time would have been better spent arguing this as well.

After all, real-life action gets our attention right now primarily because it feels so different.

It doesn’t feel different. It has been marketed as being different. No one A/B tested this movie with a version heavier on computer graphics. There was no Pepsi Challenge.

But with the next few years positively glutted with action movies, “different” might have a leg up on the competition.

In marketing films it might give the project a leg up on competition by virtue of the fact that audiences have been fed a narrative that one kind of illusion is inherently better for them, in all cases, than another kind of illusion.

2015-08-08 23:00:00

Category: text

Vanity Fair: "The VidCon Revolution Isn't Coming. It's Here." ►

Richard Lawson (yes, that same Richard Lawson that posted baseless rumors for Gawker) went to VidCon for Vanity Fair to write about the conference, the fans, and the stars. Also the business of why they are famous. For anyone struggling to wrap their heads around the popularity of YouTubers, Viners, and other influencers, this piece is for you.

Like I usually point out, and Richard also points out, it’s very easy to roll your eyes at all this, but then you’re ignoring a significant shift in the way money is changing hands for entertainment.

And with that change comes big dollars for these influencers. After our second meeting, Talavera and Leimgruber followed up with an e-mail that included some hard numbers. What they had to tell me: approximately 200 social-media influencers have earned over $1 million in the past year, and another 550 earned more than $250,000. The NeoReach guys estimate that the number of “Millionaire Influencers” will double next year. Popular YouTubers (1 million-plus followers) can earn as much as $40,000 per video, and $5,000 per Instagram post. That money is coming from sponsorships that pay out $0.05 to $0.10 per YouTube view, or $0.15 to $0.25 per Instagram like. Add on top of that the money made from Google AdSense, and any merchandise sales and appearance fees. In short, these people, and there are many of them, are getting very rich.
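Those quoted rates allow a quick back-of-the-envelope check. This is just arithmetic on the figures above, not additional reporting:

```python
# Sanity check: how many sponsored views does a $40,000 video imply
# at the quoted $0.05-$0.10 per view?

def views_needed(payout, rate_per_view):
    """Views required for a sponsored video to earn a given payout."""
    return payout / rate_per_view

low = views_needed(40_000, 0.10)   # best rate: fewer views needed
high = views_needed(40_000, 0.05)  # worst rate: more views needed
print(f"{low:,.0f} to {high:,.0f} views")  # -> 400,000 to 800,000 views
```

So a $40,000 payout implies roughly 400,000 to 800,000 sponsored views per video, which is entirely plausible for the million-plus-follower channels the article describes.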

With all that money changing hands there are also problems. Richard describes the despair he felt at the parties. Even the concerns that some of the “older” YouTube stars like Grace Helbig and Felicia Day have for these kids thrust into sudden fame and fortune.

I’d also like to add that part of the reason it’s so difficult for non-teens to understand the celebrity of these YouTube stars is because we feel creeped-out by it.

Richard talked about his VidCon experience a little more when he was a guest on the (almost entirely inappropriate, NSFW) Throwing Shade podcast. Specifically, when he recaps Grace Helbig’s opinion on talent.

I read Richard’s piece after Marko Savic had sent me Caroline Moss’ profile on Vine star Logan Paul. It’s a very distilled look at a specific person in this sphere, and also interesting.

Update: I was contacted by Richard Lawson about my description of his past work at Gawker. I used the words “fabricated lies” but he wanted to point out that he never made up anything, just ran rumors. I’ve adjusted the wording to reflect that. I still find repeating baseless rumors of abuse irresponsible. Though his past writing isn’t relevant to the VidCon story, I am still bothered by it.

2015-08-07 08:45:00

Category: text