Unauthoritative Pronouncements


Defocused 13: 'Racing Bulldozers' with Stephen Hackett

Stephen Hackett, of 512pixels.net and Relay FM fame, was kind enough to be a guest on our silly podcast. He’s a former Apple Genius, and has a love for all things Apple. ‘Pirates of Silicon Valley’ seemed like a natural fit. We love to have guests on who are excited and passionate about the movie they want to talk about. (Casey Liss and ‘Collateral’, and Myke Hurley and ‘Scott Pilgrim vs. The World’.) Unfortunately, the movie wasn’t quite as we remembered it.

I still have a certain fondness for it, if only because it reminds me of how I felt about Apple at the time I watched the TV movie in 1999. Even the unintentionally comedic moments are still interesting.

2014-09-10 09:53:40

Category: text

Watch the Cloud

For weeks, Apple teased a special event for the morning of September 9th. Instead of the usual invitation, there was a countdown clock. Apple even redirected their main site to the countdown clock. This was going to be so big. Everyone should pay attention to this. Apple would even be running their own liveblog of the event, in addition to streaming video. They were planning on controlling the message of this incredibly important event. They didn’t want this news to filter down through the press, being shaped by the press’ inevitable cynicism about Apple events.

Unfortunately, everything fell apart. The live stream was in a constant state of skipping through time, freezing, or flat-out denying access. Even when the live stream was working, the audio was all over the place. In the pre-show, they broadcast music over other music. During the event, they broadcast a translator’s feed over the presenter, at the same volume. This was, unequivocally, a disaster for managing the story they wanted to tell customers.

I feel terrible for the people trying to manage the event. There’s no “give me an hour” on a live show. There’s no chance to come back and do it tomorrow. There’s a whole auditorium full of press. While I do feel bad, this is hardly the first time this has happened. It was, however, the most severe. It’s worse for Apple than it is for me, here on my couch. This failure means they are not getting what they want from their event — control.

Pick an iPhone — Any iPhone

This was not surprising to anyone paying attention to the rumors and conjecture. Even though I did not seek out any leaked photos, sites still posted them to Twitter and I saw them. We got the iPhone 6 and the iPhone 6 Plus. They are both larger phones than the preceding iPhones. Last year’s iPhone 5s and 5c will stick around — presumably for people that would like cheaper phones, or smaller phones. People that want the latest-and-greatest, in a small package, are at a distinct disadvantage. I do question how big of a deal this will be.

My identity has not been tied to the things I buy for many years. I’ll buy Guava Goddess Kombucha, and fancy wines, or I’ll buy a grouper sandwich in a Florida dive with a Yuengling. It doesn’t say anything about me. iPhones are just things, there’s such an abundance of similar things that it hardly matters. Sure, when I was a kid, and my mom bought off-brand stuff I would be embarrassed, but there are larger concerns than Apple not specifically crafting an iPhone around what I think I want. Big ones, small ones, whatever. I keep an open mind, because I’m a generous soul.

Looking at the lineup, and my antique iPhone 4, I’m leaning towards the iPhone 6. It will be a huge change for me, but I have the cargo shorts to pull it off. (No really, cargo shorts are amazing. Don’t read this, Matt Alexander.)

The Plus is enticing to me because “optical image stabilization” sounds like a great thing to have. Then I thought about that for 5 seconds and remembered that it’s so tiny, it’s probably not the most effective optical image stabilization there is.

The fitness features honestly mean nothing to me. I know there are people that measure runs, and that this (like all bits-o-fitness) is advertised as something to motivate me. Tim underestimates my laziness. They are theoretically neat, but feel like they’d be more at home on a Samsung phone than an Apple one.

Pay Apple

The tech press got themselves into a lather over NFC payments again. They do this from time to time, but this time their prediction actually came true.

They led into this with a bungling, late-night-infomercial bit, and it drove me up the wall. Not a single woman presented at this event, but they made sure to show overstuffed purses. “Does this happen to you?” BOING! Cards everywhere.

I sincerely question the implementation of this service. From the demonstration, all you have to do is take a photo of a card. This seems… not very secure to me. Perhaps credit card companies are also verifying that the photographed card is being used on a phone that has the cardholder’s phone number on file? I’m not entirely sure. It does make me wary. They also show that it’s linked into Passbook. I’ve used Passbook, and it’s kind of a mess. Instead of people looking for cards in their overstuffed purses, they’ll be looking for them by swiping around on a glass slab with no tactile indication of what they’re touching. Then they’ll scan their fingerprints with the Touch ID sensor, which sometimes gets fingerprints wrong. I can’t wait until I’m in line behind someone who’s not ready with the card they want pulled up on screen. It’ll be like being behind someone with a personal check.

The really strange thing is when they made a big deal about how secure this was, and then they showed an Apple Watch being able to make purchases. That doesn’t have a fingerprint scanner. Is it just authorized to work as long as it’s in proximity to your phone? Then someone would just need to take your phone and your watch. Do you have to use Touch ID every so often to confirm the watch and phone are still in your possession? Because then it kind of defeats the purpose of the watch being a payment method.

There are just a lot of questions. I am not coming down against it, but I do want to see how this works for people in real life. When Passbook was introduced, many people wrote that it was Apple’s answer to NFC. That the NFC sensors didn’t exist, so it just made more sense to use scanned codes. Who really uses Passbook? I am at a loss to think of anyone in my life, but perhaps there’s a very large, dedicated community of users I’m unaware of.

Status Symbol

The Apple Watch, a long-rumored device, finally made its big debut today. People loved it, and people hated it. People are the worst. It does things that are, in a purely technical sense, amazing. However, it does a lot of really weird stuff that seems totally outside the mandate of a teeny-tiny device for my wrist.

I am not sure where the Apple Watch is aimed. It includes a dizzying list of features that would entice any Android smart watch owner to consider it. It requires an iPhone, of course. It’s $350, which is more expensive than many entry-level, on-contract iPhones, but it is a watch.

If you’re a bro with a G-Shock, or a hipster with a Seiko, it might seem exorbitant to charge this — a king’s ransom — for that kind of a watch. For those that love automatic Swiss watches, this is a paltry sum. However, the lovers of those automatic Swiss watches do not want features; they want exclusive, meticulously crafted jewels. The G-Shock owners want the features, and they don’t mind rubber, plastic, or mass-market quartz timekeeping.

It is very peculiar that Apple chose to walk the line between the two. There are fussy materials, but not the craftsmanship. It’s cheap as dirt compared to a Tissot, but too expensive next to a Timex. It’s G-Shockingly ugly and bulbous, but it’s glossy and sleek. Who is this for?

I had known before the event that the device would not appeal to me because I don’t wear any watch at all. I am in the camp that feels like cellphones cover all of my timekeeping needs. I’ve never felt like my notifications are too far away, or unreachable (cargo shorts) so it seemed unlikely Apple would have invented something that would sway me towards wearing a watch. That’s just an honest perspective, and not a judgment.

This is actually a huge relief to me. There is no super-special feature here that I would feel locked out of. No exclusive ability that would make me ashamed. Just a “smart watch”. I will be glad to see it find a home with other people, and see those people come up with creative, and useful ways to explore what a wrist device can do, but I’ll catch up to them later if I feel like it.

Cloudy Horizons

Today’s event did nothing to allay concerns about Apple’s cloud infrastructure. They can’t organize and host their own events reliably, like Google can, and they still have data plans that don’t seem to keep pace with their competition. Their headline feature from WWDC, Continuity, was removed from the betas weeks ago.

Today’s event was about devices, but I was never really concerned about devices. There would be a new iPhone. There would be stuff for people to buy. Where’s the focus on what connects these things?

Apple, don’t be scared of cloud stuff, or it will be a bigger threat than an unannounced watch. Get the services right, Apple; everyone else is. Understand it. Key in the sequence, Tim.

2014-09-09 16:25:20

Category: text

Legitimate Text Processing

Update: Hours after this post went up, Jeff Atwood renamed his fork of Markdown to “Common Markdown”. All the criticism below is still 100% valid. They’re making a project that suits their own needs, but using a (new) name to suggest some level of primacy over other Markdown dialects, and over Markdown itself.

Markdown was made by John Gruber, and it’s a great way to turn easy-to-read text into unreadable HTML. It’s a limited set of syntax that can be expanded on. It’s become wildly popular, particularly for blogging, and for web sites that have comment systems. It’s even influenced Fountain, a similar specification for writing heavily formatted screenplays in plain text.
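
To make the “text into HTML” idea concrete, here’s a toy sketch. This is mine, not Gruber’s Markdown or any real implementation; it handles just two constructs, to show the general shape of the transformation:

```python
import re

def toy_markdown(text: str) -> str:
    """A toy, two-feature illustration of Markdown-style conversion.

    Not a real Markdown processor: it only handles '#' headings and
    *emphasis*, to show the kind of work these tools do.
    """
    html_blocks = []
    # Blank-line-separated chunks become blocks, just like in Markdown.
    for block in text.strip().split("\n\n"):
        if block.startswith("# "):  # a heading
            html_blocks.append(f"<h1>{block[2:]}</h1>")
        else:  # a paragraph, with *emphasis* converted to <em>
            block = re.sub(r"\*(.+?)\*", r"<em>\1</em>", block)
            html_blocks.append(f"<p>{block}</p>")
    return "\n".join(html_blocks)

print(toy_markdown("# A Post\n\nEasy to *read* as text."))
# <h1>A Post</h1>
# <p>Easy to <em>read</em> as text.</p>
```

The input stays perfectly readable as plain text; the output is for browsers, not people.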

Some people have to write code that processes Markdown text into HTML. Many hewed closely to what the original tool generated, which makes sense. Then people ran into cases that weren’t precisely explained, or areas where the tool didn’t do what they wanted. This means different tools will sometimes produce different HTML code, but even that doesn’t always look wrong in the browser. More often than not, people wanted to add features.

People called their Markdown implementations something clever. Like “GitHub Flavored Markdown” or “Python Markdown” or “Kramdown” or whatever. So here you have a ton of little tools that do slightly different things — either intentionally or unintentionally.

This drives some people nuts because they want there to be a proper way. They want the correct way. That’s cute, because the people that want a correct version to refer to, and test against, are the same people that invent their own syntax features.

Here, let this guy lacking self-awareness explain how he oversaw two different Markdown implementations:

We really struggled with this at Discourse, which is also based on Markdown, but an even more complex dialect than the one we built at Stack Overflow. In Discourse, you can mix three forms of markup interchangeably:

  • Markdown
  • HTML (safe subset)
  • BBCode (subset)

If there was a standard, Jeff would still have ignored the standard if it didn’t fit the products he made. The flexibility to make your own dialect trumps adhering to anything. This is the point of every Markdown service.

Jeff highlights John MacFarlane, creator of Pandoc, and a tool John made called Babelmark. The tool lets a person compare the code generated by the default behavior of a variety of Markdown tools. Example. This is neat, and clearly, you can see that the code is different. If you flip over to the preview, you’ll see there’s not much visual difference here.

I know, horror of horrors, it’s not conforming to one, specific thing that can be tested and verified. Pass or fail. That would be neat and tidy, wouldn’t it? It ignores reality though.

In the “Standard Markdown” spec, they include GitHub Flavored Markdown’s “fenced code blocks”. Oh! Well, would you look at that! It’s a feature that serves the needs of one of the “Standard Markdown” contributors. It has nothing to do with the original specification of Markdown. This isn’t solely about removing ambiguity, of course; it’s about making the Markdown someone wants into the correct Markdown.

Where are the tables? Tables are not a “core feature” like GitHub fenced code blocks. Where are ids for headers? No one needed them, but Jeff agrees about maybe putting them in. Where’s the metadata storage? Guess no one needed metadata storage. Maybe they’ll come later on, and we’ll have “Standard Markdown 2.0 Compliant” badges we can adorn our blogs with. Maybe we can put a special header in our text files that says what the human-readable text should be processed with? You know, like “#!/usr/bin/StandardMarkdown/Official/2.0.1.a/” Something easy on the eyes.
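
Tables are actually a good demonstration of how these dialects diverge. In Python Markdown (assuming you have the `markdown` package installed), table support is an optional extension, so the exact same source text produces different HTML depending on configuration — a sketch:

```python
import markdown

# Hypothetical sample text using the common pipe-table syntax.
source = """\
| Name | Price |
| ---- | ----- |
| 6    | $199  |
"""

# Stock configuration: the pipes are just paragraph text.
plain = markdown.markdown(source)

# With the optional 'tables' extension, the same text becomes a real table.
with_tables = markdown.markdown(source, extensions=["tables"])

print("<table>" in plain)        # False
print("<table>" in with_tables)  # True
```

Neither output is “wrong”; they’re just two tools making two different, reasonable choices.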

This blog, which is really simple, and dumb, uses Python Markdown. It offers a series of extensions that can be enabled, disabled, and configured to suit my needs. I use metadata to store things like the title, and publish date. I use a table of contents package to create anchors for the headings. None of this stuff is supported by Markdown or “Standard Markdown”, and Python Markdown doesn’t even do it out-of-the-box.
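
A minimal sketch of that kind of setup, assuming the Python Markdown package (`markdown`) with its bundled `meta` and `toc` extensions; the sample text here is hypothetical:

```python
import markdown

source = """\
Title: Legitimate Text Processing
Date: 2014-09-04

# A Heading

Some body text.
"""

# 'meta' reads the key/value block at the top of the file;
# 'toc' adds id anchors to headings so they can be linked to.
md = markdown.Markdown(extensions=["meta", "toc"])
html = md.convert(source)

print(md.Meta)  # {'title': ['Legitimate Text Processing'], 'date': ['2014-09-04']}
print(html)     # the heading comes out as <h1 id="a-heading">A Heading</h1>
```

None of that metadata or anchor behavior is in Gruber’s original Markdown, and that’s the point: the extensions are what make the tool fit the site.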

Byword is my writing app of choice. It includes certain visual cues for Markdown elements based on the popular MultiMarkdown (MMD) syntax. I don’t get visual hinting for all of the elements I write that Python Markdown will convert. That’s fine. It’s great that neither strictly adheres to Gruber’s original Markdown. There’s enough here to make this all work smoothly, and I’m not surprised by the outcome very often. The alternative is that I have a rigidly enforced system that does not do what I want it to do.

Like Stack Exchange, Discourse, or GitHub, we all have needs. This is here to make things easier for us. If we have to have some cockamamie specification laid down that all must yield to, then I don’t find it appealing. Is the “Standard Markdown” team going to allow all these customizations? They fly in the face of what they deem correct. Will every customization need vetting and approval through some Discourse board where I’ll have very little sway?

I Can’t, With That Name

A petty, vainglorious power-grab of a name. What’s in a name? That which we call a Fork, by any other name would be just as forky.

  1. Standard - This is like telling everyone you’re cool. “Hi everyone, I’m Cool Joe! Come hang out with me!” Congratulations on jinxing yourself? The iPhone is not called “Standard Phone”. Also, as I’ve established above, this is standard in name only. A few guys made this in secret to scratch their own itch.
  2. Markdown - Lots of things use “Markdown” as part of the name of their implementation of Markdown. The Python library I’m using does this. It’s usually not paired with “Standard”, “Official”, “One and Only”, or “Legal” to imply it holds some special place. This is, after all, a fork.

This is about legitimizing their fork over all the others. Not just another fork here, this one is named “Standard Markdown”!

For someone that says he loves Markdown, Jeff doesn’t seem to understand anything about why it is popular. Or why attempts to rein in the wild sprawl are bound to fail.

2014-09-04 13:20:51

Category: text

Analog(ue) #3: 'White Whale Syndrome'

Welcome, once again, to the all-podcasting all-the-time blog! This one is Gorn-free. It is in an outline format, followed by self-deprecating stuff! Isn’t that fun!

(Ragtime music)

  1. All y’all know Casey is Southern.
  2. All y’all know Casey’s the internet’s favo(u)rite.
  3. Followup editing noise.
  4. Tagging, and checking-in, are controversial because they can provide access to you in a way that varies from creepy to overly familiar, depending on who has access to that data now, or in the future.
    1. Hypothetical: Casey has a stalker — Definitely not. Who would listen to every episode? And then blog about it? And tweet?
      1. (Looks away)
      2. (Loosens collar)
      3. (Clears throat)
      4. (Continues outlining)
    2. Hypothetical: Someone sees a tweet sent directly to someone else, and decides they should show up anyway.
    3. Hypothetical: Same scenario, but the person is already at the same location you are.
    4. Hypothetical: What if your friend, ‘Joe Smith’ shows up at the bar and even though he’s your friend, he’s not the friend you want there.
  5. Identity: What do you like to be known for?
    1. Casey mentions his wife, and she identifies as a teacher, and has since college.
    2. Myke identifies as a podcaster. Then he says other stuff.
      1. Myke is a “marketeer”, and Casey says it sounds like a Disney property. I think it sounds like a different Disney property. Myke does not identify as any of those things.
        1. Friendship sacrifice is here, under identity. I only took AP Psychology, so I’m not qualified to speculate about why that is. Seems like it’s related to friends in meat-space not understanding his identity as a podcaster.
        2. Online friends are a component of Myke’s identity.
          1. Being with those internet friends, in person, cements a relationship.
          2. Myke needs human interaction, still, and would go to a co-working space rather than sit in his room.
    3. Casey has a dual identity.
      1. A J.O.B. job that used to define him more.
        1. It doesn’t satisfy him as much as it used to, but it is still a part of him.
      2. His internet life, twitter, internet friends, podcasting, that he’s getting more from these days.
        1. This is why Casey is so obsessed with Twitter.
          1. Healthy.
            1. Myke and Stephen are good.
          2. Unhealthy.
            1. Ignoring Erin, or real-life friends.
  6. Internet followers.
    1. Myke.
      1. Ego: Loves seeing follower count go up, and just looks at it every now and then.
      2. Business: Seeing people from companies, sponsors, that follow.
    2. Casey.
      1. Affliction (no, not that one) — White Whale Syndrome.
        1. You really want to be followed by someone, and then they do follow you, and you feel great. It is an affirmation that you exist, and that they’re at least interested in you.
        2. “Why doesn’t John Smith follow me? We’ve exchanged tweets?” — What does Casey have against the Smith family? Jeeze!
        3. Once you get that follow, then you just have someone else you want to follow you.
        4. Like a crush, or not being picked for sports, or really wanting to be friends with the cool kid at school.
        5. Everyone, to some degree, gets some amount of their identity from Twitter followers. Even the follower count itself, and wanting more than they currently have, defines part of them, because they want what they can’t have. (I like how the first number Casey throws out is 1,000.)
  7. Online, or Offline, are you the same person?
    1. Casey.
      1. Hesitates on tweeting every little thing he might otherwise say. He’ll coarsely filter his thoughts.
    2. Myke
      1. Cursing.
        1. Joe Steel, in the chatroom, mentions Myke curses in the livestream on occasion. (This was the minute I tuned in. I probably shouldn’t have opened my mouth. Why did I open my dumb mouth?! Joe’s identity is opening his dumb mouth.)
        2. Myke can filter cursing because of his parents, and he tries to keep his podcasts clean, but swears often in real life.

I totally understand the followers stuff. I would never have said it if they hadn’t mentioned it first, because honestly, I’m worried it makes me sound super self-absorbed — which, uh, I am.

There’s something I’d add to the ‘White Whale Syndrome’ and that’s shame. I’m really good at adding shame! What if you get what you want, and you feel like you didn’t deserve it, or you’re concerned about them seeing something unflattering about you? I guess that’s more ‘Monkey’s Paw’ Syndrome? Jason Snell picked me for the kickball team — I can’t disappoint Jason Snell! I was fine disappointing the rest of you. (Wink.)

In that ‘Launch Anxiety’ post, I mentioned freaking out about Myke’s retweet, and the pressure of having someone following you. Sure enough, Myke retweeted my blog post about how much his retweet freaked me out and Jason Snell followed me. I had thought it would be cool if Jason Snell ever followed me, like Casey mentions in Analog(ue) #3, but it’s actually kind of intimidating. I’ve already managed to disappoint him by not liking a Doctor Who episode. Good job, Joe.

It’s not even really about internet-famous people; there’s an aggregate pressure from all the followers. Myke and Casey talked about the follower count numbers increasing, and that’s an encouragement. They weren’t concerned about the number decreasing — neither am I. What I am concerned about is that the higher the number gets, the more likely I am to be a jerk to someone. To not reply to a tweet about Defocused in the way they wanted. After all, I want to be funny, so doesn’t everyone contacting me want to be funny too? Don’t they, sometimes, want me to follow them back? God, what if I’m someone else’s ‘White Whale’? Like in Romy and Michele’s High School Reunion?!

The part that’s far more embarrassing is that I tuned in to Analog(ue)’s livestream late, at the very tail end. I joked that they could start the episode over again, but I shouldn’t have because it immediately seemed obnoxious to me, and doubly so after listening to the entire podcast. After their recording, Casey and Myke kept the livestream going and complimented Dan Sturm and me. It was extremely flattering and extremely uncomfortable. Just like with the white whale syndrome, of getting that Jason Snell twitter-follow, I got a barrage of compliments from two people I respect, and I immediately felt like I didn’t deserve it, and that they should not have said anything so nice to me. On the one hand, I knew it wasn’t part of the episode, but on the other hand, it still felt like I was getting attention I wanted, but didn’t deserve to have. (Casey was extremely complimentary.)

I used to think that my career defined me, much like Casey, but I want to be liked, and that’s never happening with my career, so it seems to hinge on my commentary. That sounds kind of dull, right? I am not going to start a podcasting empire, and I’m not going to be on the Biggest Podcast Ev4r, but I want to create lots of little things. Most of this identity — this personal brand — is trying to be entertaining with my observations. Not a comedian, but entertaining, hopefully. If I assert a little ego here, I’d say I’ve managed to do that, to a degree. I’m not wildly popular, but I obviously have more people interested in listening to me, and reading me, than I had a year ago. There is a certain measure of success there. But if they’re interested in my observations, then that’s sort of like being interested in a mirror. In holding a mirror up to a podcast episode, a book, or a movie. I am not the true source of what’s interesting, and I don’t think I have that capacity.

2014-09-01 00:17:50

Category: text

The Incomparable #209: 'One Gorn Limit'

The Incomparable started a new initiative to discuss the best, and worst, of Star Trek in a series of episodes. It is required listening for this post. I’ll pretend that I had a turn in the draft roundtable discussion after Brianna Wu, Scott McNulty, Tony Sindelar, and Jason Snell:

Best Episodes of Star Trek (First Pick)

My pick is ‘Cause and Effect’ from TNG. This is pure Star Trek all the way through. There’s a mysterious puzzle to solve, with details bleeding over from previous iterations in the loop they’re all stuck in. Also, the Enterprise blows up (a lot), which is always interesting.

Worst Episodes of Star Trek (First Pick)

I’m going to side with the ‘offensive’ critics over the ‘organ stealing’ side. My pick is the episode that precedes ‘Cause and Effect’ — ‘The Outcast’ — which I didn’t like as a kid. I saw it, and thought the aliens were idiots, and the conflict in the episode made no sense. It wasn’t until years later that I learned this episode was supposed to be their episode on sexual orientation — their gay episode. It’s really a very weak gender identity episode, which is not the same thing. They were scared of how their audience might react to gay characters. The character of Soren is tragic, but the tragedy is undermined throughout the episode by the decisions of the show’s producers. Even making her brainwashing a flawless success is dumb, because then it implies that reeducation treatments are good, instead of, you know, horrible.

When David Gerrold (a gay writer on TOS, TAS, and TNG) came forward with his story that was an allegory for AIDS, the producers didn’t want to do it because it featured two guys in a relationship. Eew, gross, icky. It was rewritten by another writer, to remove the gay characters (still wasn’t produced), and Gerrold left.

Questions about sexuality on Star Trek kept getting asked, and five years later, Rick Berman went with ‘The Outcast’ to be the episode that would explore the issues, but it was about male and female gender relationships, not about same-sex relationships. This episode, with an androgynous race that condemns gender, is pretty much the opposite of what they should have been doing. They also cast a female actress as the character Riker kisses, a character who identifies as female and seeks a male, which really is not all that unusual. Rick Berman was worried the audience would find two men kissing “unpalatable”.

There have been several episodes featuring women with feelings for other women, but not a single one about men. The later series (DS9, Voyager and Enterprise) even ran while Will and Grace was on the air and the best they could do was hedge with Trill gender swaps and a non-same-sex allegory about AIDS with forced mind-melds. JJ-Trek is in the unfortunate position of having their main characters shipped to them from the 60s, so it’s unlikely anything progressive will happen now unless they add characters.

‘The Outcast’ - The least-progressive progressive episode ever.

Best Episodes of Star Trek (Second Pick)

‘Sacrifice of Angels’ is the second half of a Dominion War episode from DS9. Things had been going very badly for the Federation, for many episodes, and this is where things turn around. This is like Return of the Jedi, only the Ferengi are less annoying than the Ewoks. The Dominion is convinced that it has the upper hand, and then it’s all lost. All of it. Gul Dukat’s own daughter, the daughter he sacrificed his career for, reveals her treachery, and she’s fatally shot by Damar. Gul Dukat collapses. Marc Alaimo is so incredibly fantastic in this episode.

“I forgive you too.” - Broken Dukat handing Sisko the baseball that Sisko left when the station was taken.

Worst Episodes of Star Trek (Second Pick)

Jason stole my pick, drat. My backup is ‘Profit and Lace’ from DS9. In DS9’s defense, they can’t all be winners, but WHAT WERE THEY THINKING?! The episode was meant to be a farce, but it’s so, so, so ill-conceived. I love wacky, comedic episodes of Star Trek — like Voyager’s ‘Tinker, Tenor, Doctor, Spy’, or ‘Message in a Bottle’ — but this is just atrocious.

Best Characters of Star Trek

It’s a toss-up between Spock and Data, but I’m going with Data. He has a unique struggle, wanting to experience emotion, and wanting to be loved and respected. When I was a kid, I really identified with the episode ‘Hero Worship’, because I really looked up to a character like Data and saw myself in the role of Timothy Vico. (Spoiler alert! This episode is not as good as I remember it being! UGH!)

He’s been on trial for his rights as a person or an appliance (conflicts that The Doctor of Voyager also experienced), and for his right to procreate.

Worst Characters of Star Trek

Travis Mayweather makes me sad. Tell me something about Travis. He’s a really good pilot, right? OK, well that’s good, because he is a pilot. Kind of important to be good at piloting ships when you pilot ships for a living. He’s a main character on Star Trek: Enterprise and he does… uh… and his arc over the series is… uh… Wasn’t there something about parents and freighters? Or something? Or that xenophobic reporter girlfriend that tricked him?

Hoshi and Malcolm are both primarily defined by their job on the ship, to the exclusion of almost everything else. They’re still more fleshed out than Travis.

What a total disappointment. Like all of the Enterprise characters, the Mirror Universe version of Travis was far more interesting and we barely saw that one. He’s by far the least developed member of the main cast of the show. Enterprise really focused a lot on Archer, T’Pol, and Trip, to the unfortunate exclusion of all of the other characters.

2014-08-31 16:35:28

Category: text

Defocused 11: 'I do This for a Living' with Casey Liss ►

Welcome, once again, to my Casey Liss and Myke Hurley fan club. While Casey and I were scheduled to be on John Chidgey’s podcast, we made arrangements for him to appear on Defocused. Since we try to anchor the podcast with a movie to discuss, we asked Casey to pick one. He selected Collateral, which I had never seen before (I typically avoid Tom Cruise movies). It’s a great movie, and everyone should go check it out if they haven’t already. I really appreciate Casey’s selection for the show, and I hope others do as well.

You’re not going to miss anything if you’re not completely caught up on Defocused. Dan and I are terrible people. I teased Casey about Firefly on Twitter, and then I watched all the episodes. There, all caught up.

After listening back, there were a few nits I neglected to pick on the show:

  • The helicopter shots. So many aerial helicopter shots. They don’t really serve as good establishing shots, because they don’t have much to do with where characters are, or are going. They’re kind of night-shot filler.
  • If you’re watching the very end of the movie, there is some distracting greenscreen work that detracts from the acting. It’s a real shame.
  • The way the police are dressed, and their hair, is … uh … something.

2014-08-27 11:30:21

Category: text

Not the Twitter We Want, but it's the Twitter we Deserve

Everyone, look under your seats, you all have favorites! I have talked about my opinion on favoriting things in the past, but to quickly summarize: I favorite for a variety of reasons, but I favorite things often. Their use can impart something to the conversation you’re having in a way text would not, and they can also be simple bookmarks.

Since a favorite alone can be used as a form of communication, it is important to know when one has received a favorite. Which means notifications need to be enabled for them. However, if you use Tweetbot, those notifications are not sticky, they go away as soon as they pop up in the app. If you want to know what happened, you need to open the official Twitter app, or the Twitter website, and look at the “Notifications” section. This is why I juggle two apps, back and forth. I hate the regular Twitter app’s timeline, but I need the context of what was favorited to steer a conversation, or to know that a conversation ended amicably. People often favorite the last tweet in a conversation as a form of punctuation, that they enjoyed the conversation, but it’s over for right now.

None of this mattered at all until this week, when Twitter changed how they displayed the timeline in the official apps. Now, my promiscuous picks, pokes, and polite nods are going to appear, at random, in the timelines of the people that follow me on Twitter. These out-of-context artifacts will just hang there, taking up about 1/4 of someone’s visible timeline on a mobile device, and leave them scratching their heads. Attempt to follow the conversation from one of those tweets and you’ll be totally lost. A sarcastic interchange might appear, without proper context, to be a closely-held belief. Will people just see self-contained gems of 140-character insight? Only tweets with links to news, or products?

Also troubling are two other injected types: ‘[One of your followers] follows’ and ‘From Twitter’. The ‘follows’ one might be familiar to anyone that’s accidentally looked at Twitter’s ‘Discover’ feed, only now they’ve seen fit to migrate it to the main feed. You get a popular, recent tweet from that ‘follow’ presented to you. These have so far included things like Buzzfeed: “John Krasinski and Emily Blunt’s #IceBucketChallenge is why they are the best couple ever [buzzfeed pageview metric link]” and Cory Doctorow: “Child arrested after writing story about shooting a dinosaur [boingboing]”. It offends me, not because I find Buzzfeed’s desire to share cynical, or BoingBoing’s news old, but because some algorithm has decided that these things suit me. That these are the things that I will click on, the content I will consume. It makes me want to shout, “YOU DON’T KNOW ME!” But, in a way, they do know some of me. They see my actions and try to infer meaning from them, which is not the same as understanding me — yet. I’m sure they will get better at it, which also disturbs me.

The same goes for ‘From Twitter’, only it’s using some mechanism that makes no sense to me. I’ve only ever seen one of these, and it was for someone I do not follow, and I don’t think anyone I follow, follows them. There is one connection, Buzzfeed, so maybe it’s topical? “Meet AdDetector — the browser plug-in that labels native advertising with a huge red [sic] banner [Wall Street Journal pageview metric link]” The algorithm is so cynical, and so inept, that it selected a story about the offensive injection of unwanted reading material to inject into my timeline.

The ‘Discover’ tab is crap for this very reason. It is a company highlighting things that align with how they would like me to use their product. Twitter’s appeal, for me, has been in my ability to select who, and what, I see. It was clear before, you followed someone, you saw their tweets. You followed two people that talked to each other, you saw their conversations. You’d only see tweets from people you did not follow when they were retweeted.

Twitter Hates Completionists

Twitter wants to control what I see. When I’m out of tweets to read, it wants to pick one for me. When I’m scrolling through, it wants to put something that is closely aligned with its interests in the list. They want me to amplify the voices of the popular so they stay popular and engaged with the service — especially publications, and large blogs, that tweet frequently.

The real peril, from Twitter’s perspective, is that when I start from where I last read Twitter, I will read everything, as it unfolded. I will see the first news post instead of a promoted one. I will see the first time something funny was said, instead of the one that has been selected for popular reinforcement. Most importantly, I will read everything, and stop using the service until later, because I know Tweetbot won’t lose my place.

Tweetbot isn’t going to exist forever, and neither will any of the other clients. Their functionality is hamstrung, and their profitability is constrained. Where’s the update to Tweetbot for iPad? Why am I still using an iOS 6 app on the eve of iOS 8’s launch, if not because it’s no longer in the developer’s interest to release an update? Twitter would like it very much if Tweetbot went away.

The worst thing in the world, from Twitter’s perspective, is for me to read only what I want to read. To see only what I want to see. What I want does not make them any money. What I want is at their expense.

Twitter, as it existed for its first few years, was not profitable. It needs to be profitable. Services can’t run on adoration and appreciation, they do need money. Investors want a return on their investment greater than just breaking even.

Marco Arment, John Siracusa, and Casey Liss talk about this, and all things related to the Twitter experience, in Accidental Tech Podcast episode 79.

“Web 2.0” was a great lie. The power of social, connected data on dynamic webpages — for free. We participate in the lie every day. Look at Tumblr, acquired by Yahoo, and making its own moves to lock itself down. Their “Sponsored Posts” have animated gifs, so they’re still cool, right? Axe body spray is cool, right? They even insert suggested blog posts now. It doesn’t hurt that the suggestions are from popular blogs that are selling products. I’m sure that’s a coincidence!

Let’s Make Our Own Twitter! What Could Go Wrong?

Diaspora Still Works!

Diaspora, a federated network of nodes that presented people with a Facebook-like interface, received funding from Kickstarter in June of 2010, but didn’t launch fast enough; it took years. Nodes are still up and running, but don’t pretend it achieved its goal of providing a social space. They just built a mostly empty city that will live forever. You can go use it right now, if you want, but almost no one is.

Generic Dot Cereal

App Dot Net is the most infamous flop because its slow-motion death is ongoing. App Dot Net was the name for a social application platform, but what everyone associated the name with was “Alpha” the Twitter clone that the App Dot Net employees made to showcase their social application platform. This was the biggest danger. Mentally, all App Dot Net was, was a Twitter clone. The technical underpinnings did not matter to the vast majority of users. The other “cloned” services like Backer weren’t even a blip on most people’s radar. People could build whatever they wanted to build, but it didn’t seem to matter because it was all about the Twitter clone component.

How often do clones of things outpace the original? Usually, only when the clone is cheaper than the original, and even then it’s not guaranteed. You might buy generic, bulk toilet paper, or generic breakfast cereal, but there will be things you choose not to compromise on. App Dot Net was never going to be cheaper than Twitter. That means you need your clone to do something novel. App Dot Net had a slightly longer post length. I would not call that a distinguishing feature for your Twitter clone.

App Dot Net was also going to be different from Twitter, or Facebook, because users would pay to use the service. Here, look at their funding page, which they no longer have online. It’s hard enough to convince people to use a service that, on the surface, is ‘just’ a Twitter clone, but now you’re asking people to pay money for it. This limited the number of people using the service, which limited the conversations, and made for a really uninteresting social experience. All the while, people were still posting on Twitter, because that’s where conversations could really happen. The developer-friendliness of it was immaterial to people posting about their food, or what was on TV.

App Dot Net eventually made a free tier, and turned the platform into a “freemium” service, but by that point, all the new users could see was a Twitter clone full of straight, white, male programmers and technology enthusiasts. It was about as fun as a party organized exclusively by engineers. I joined around this time, telling myself I would pay for the service if I liked it. I was certain I would reach the limit on the number of people I could follow, because it was so low compared to my Twitter account. Turns out there wasn’t a reason to follow most people, because they were cross-posting with Twitter, and many accounts from people active at its founding were unused, or only occasionally updated. Engagement is a word I love to make fun of, but seriously, there was no engagement.

The Tent is in the Jargon Cupcake

Tent/Cupcake is a total mess. It is a service like App Dot Net (the platform, not the “Twitter Clone” part). Tent is more like Diaspora in that it’s decentralized. Anyone can create a Tent server.

Tent is decentralized like email and the web. That means that users interact with each other in the same way whether they’re on the same service provider or across the world. That means no one company can control the ecosystem. If a service provider starts behaving badly, users can move to another provider or set up their own servers, taking their data and relationships with them. Unlike email, address books are updated automatically so migration is seamless.

That sounds really neat for a second, until you realize that the positive part of their metaphor is email. Almost everyone uses free, ad-supported email, even services that mine email for ways to sell ads to you. You can migrate wherever you want, but will you? Won’t you just stick to the free ones that will probably suck in some way? Being really excited about Tent is like being really excited about IMAP. IMAP is not your email, it’s what allows your email to happen.

Cupcake.io is run by the guys that make the Tent protocol, and they provide a “freemium” experience — like what App Dot Net added. What’s the freemium experience like? I don’t know! What are the apps like? Couldn’t tell you! It’s a total mess. This website is not how you get someone to try your service, it’s how you make someone close their browser tab and go back to Twitter.

Great apps

We have apps for everything from microblogging to file sharing. All our apps are free to all users. And of course you can use any Tent app in the world with your Cupcake account without limitation.

Where are the apps? I don’t sign up for things just because you’ve prompted me with the field to sign up, show me what’s at the end of this process. No matter how much I don’t like Twitter, it’s still socially viable and your service is only jargon to me.

When Dinosaurs Roamed the Earth

In the beginning, there was darkness, then we had an explosion of social, web-centric applications. Services would rise and fall. People would have accounts on multiple services. Services having APIs to make clients was a cool idea. This bubbling primordial ooze produced lots of things that have died off, even services that promised to aggregate all your other social feeds, like Friendfeed (bought by Facebook), and SocialThing (bought, and destroyed, by AOL. Fortunately, we got the last laugh because the SocialThing guys ruined AIM.) Typically one service led to the next, though. Friendster was followed by Myspace, which was followed by Facebook. Then Facebook stayed. The reason this rise-and-fall cycle broke was that Facebook refused to be acquired, and as such, it had to have a business plan. They turned their creepy, awful, social pressure into a tool to lure in more investors and IPO. They had money to do their own acquisitions. This caused an imbalance, because before, things would just shutter, or get acquired and shutter. Google, Yahoo, and AOL were, and continue to be, really bad at social platforms. They don’t even know what to do with their instant messaging platforms most of the time.

Twitter rose up around the same time as Facebook, but much smaller, and it’s been its success on handheld devices that’s enabled it to run in parallel with Facebook, which continues to fumble mobile. Facebook’s biggest success in mobile is from an acquisition of a company that lets you take photos, something they were incapable of doing internally.

Twitter is Facebookifying themselves because they want the financial success of Facebook. The controlled timeline, even superficial things like banners and avatars. And Facebook is Twitterifying themselves with the digestible tweets. And they’re Snapchatifying themselves with their messaging stuff. They’re Flipboardifying themselves with Paper. Tumblr is changing from a blogging service into a feed-focused service, like Twitter and Facebook. Where else can you put the ads?

Where once there were tons of silly, ridiculous, obnoxious, fun social services, now there are only a few, and they’re interested in maintaining themselves by eating the small services, and morphing parts of themselves into one another.

If this sounds familiar, it’s because it’s what happened with all the original web companies. AOL, Google, and Yahoo all needed webmail, they all needed news services, they all needed blogging platforms, photo services — Hell, Yahoo bought Tumblr.

Seeing the craptacular nature of things doesn’t matter. We get riled up every six months on the internet about how social networking giants work. So what? Our collective outrage has resulted in a few abject failures we can pat ourselves on the back for. The next big thing probably isn’t going to come from any of these nerdy, nice people making things; it’s probably just going to be another Twitter or Facebook that seems more attractive to us, because Twitter and Facebook will continue to be less attractive to us.

As the social survivors of “Web 2.0” gorge themselves on gifted youth they start to move further away from being things people enjoy. They become business-degree-managed sameness.

If this evolution into generic commodities continues, then when will we see the next fish crawl out of the mud? What will that fish be? I hope it’s an awesome fish that I won’t hate for a few years.

2014-08-22 11:35:51

Category: text

Launch Anxiety

Yesterday morning I listened to Myke Hurley and Casey Liss on their brand new show Bionic Bonanza The Casey and Myke Variety Power Hour Analog(ue). The show is ostensibly about the human side of tech podcasts, and therefore, it is kind of experimental. Myke had Casey on as a guest on a similarly themed episode of CMD+Space many months ago. Strangely, I find there are elements I can now identify with more directly; it’s like their launch experiences are analogous to my own. Huh.

Casey discusses the fear and uncertainty he had about the launch of Neutral, the show that propelled him from a relative unknown to a slightly more well-known unknown. Casey talked about the fears that surrounded that launch, how he kept checking his phone while he was with his wife, Erin. He couldn’t believe it.

Myke, similarly analogously, relayed the story behind his announcement that he was leaving the 5by5 Podcast Network. He was filled with abject terror. What would people think of the move he was making? Would people be upset with him? Would people be happy for him? Each second, each minute that passed built more tension for him because he received no immediate feedback.

This is Where I Make it All About Me

Dan and I had recorded a bunch of Skype-call podcasts that we never released. Some of them, Dan and I just outright trashed. We had no real concept originally, and a different configuration of hosts until timing became uncertain, and we had tried to start the show in different ways. At the conclusion of a short freelance project, I was feeling tapped out, creatively unfulfilled, and I wanted to actually do it for real this time. We had something that was mostly useable. We managed to speak to one another in a fashion that we could stomach listening back to. Dan still edited out some stray conversation threads, and trimmed some of the start and end. The result sounds pretty natural. I do understand the desire to just put up raw audio, that it is more pure, and truthful, but a little primping never hurt anyone.

The problem was that while Dan was editing, I was terrified about two things. Firstly, I was scared that he was putting effort into editing something that we wouldn’t like when we listened back, a total waste. Secondly, I was petrified about what to do if it was good. I never had a scenario in which it was mind-blowing, but I thought good-enough was a possibility.

When Dan sent me the file, I knew it was good enough. Not like in a settling for it kind of way, but in a “I would put my name on this” kind of way. I can’t call it pride exactly, but I had a firm enough opinion that Dan and I were worth listening to this one time. If we didn’t release it, then we’d never know if we were worth listening to a second time.

Dan and I fussed over a few tiny things, then Dan made the website with Squarespace. We had no idea what to call it though. We never got far enough to give what we were doing a name because we never had confidence. We bounced ideas around for a few hours. Obviously, we had a VFX nerd bent to it, but we also talked about movies and Dan’s opinions on iOS note-taking apps (LOL). In the end, the VFX-ness won out and we called it “Defocused”.

Dan and I bounced ideas off one another for the album art, the logo. Dan wanted to use a colorful test pattern, and I wanted to use a 3D rendering of 3D text with a very tight depth-of-field effect from the “O”. Dan put the two together and hazed out the background, and that was the show art. Simple, and pretty literal.

It was all set. There were no more excuses.

Where’s the Trigger? Where is it?

Dan and I were faced with the interesting problem of not knowing when to launch it. Time mattered. Ideally, we’d try to stick to whatever day and time we started off with. It has been said that consistency is hugely important for blogs, podcasts, web comics, etc. People want to know when their episode will be there.

However, I was also terrified of launching in broad daylight. I was worried we’d either get totally crushed in the Eastern Time Zone Daily Tweet Deluge, or we’d be totally unnoticed. Naturally, we launched in the middle of the night. I had been releasing a few blog posts around that time, and it felt kind of safe. Dan and I thought the late hour would give us a chance to get some slow feedback, and to tweet about it some more during the day if things seemed positive.

I couldn’t sleep. I was in bed, my iPhone in my hand, refreshing and refreshing Twitter, my earbuds in, listening to the episode for the second or third time to make sure there wasn’t a reason to pull it. I’d go to Tweetbot, then back to Twitter, and then I’d go find the show’s account and see if it had mentions. Now that I had launched it, I found that I was in the same situation as Myke and Casey — that moment of paralysis that seems to last forever, when you don’t know if you did something you will regret.

I was mulling over this terror when Myke tweeted that he listened to it and retweeted the show to the gajillion people that follow him.


Here I was, lying in bed, eyes wide, heart racing. Was Myke just being nice? Did he really like it? What if he did like it, but only because he talks to us on Twitter? What if all those people that would see his retweet would go into our episode expecting a Myke Hurley podcast? Neither Dan, nor I, are Myke-esque in any way.

Most importantly, what did this mean for the second episode? God, now we were really in the shit; we had to make more than one for sure.

Dan and I pelted each other with “OMG” messages of disbelief until I started to notice the tweets trailing off. I fell asleep at some point. I woke up a few hours later, and checked Twitter in a panic. Then again. I basically have not had a solid night’s worth of sleep on any of the nights we have launched an episode. This terror and uncertainty keeps waking me up. What if the server I host the file on goes down? No one can fix it but me. What if I lose people because of that?

This is a lot of stress for something that provides no commercial value to Dan, nor myself. We continue to do this because it is fun. The adrenaline hit of panic is fun in a demented way. After it all dies down, and you see those few tweets that someone liked a thing you did, you’re on a high. It’s like skydiving, or bungee jumping (I refuse to do either of those, they seem dangerous). Every time some hugely important podcast person says something to me, I am arrested with uncertainty, and gratitude.

Launch Feedback

There are different forms of feedback you can get over Twitter. Someone will favorite a thing, they will retweet a thing, they will reply back to you, and they can also follow you. I don’t formulate tweets specifically with any of those outcomes in mind. If I merely get a chuckle that I never know about, that’s fine. However, I live in abject terror of being followed by someone I respect. Someone that has my genuine respect will see everything I say. I see the follow notification, I get excited that they like me, and then I get a feeling in the pit of my stomach that they will utterly regret their decision to follow me once they see my nonsense. That the quantum waveform will collapse, and the cat in the box will be dead. I guess I’m just a positive-thinkin’ kind of guy.

Podcasts don’t even give you that level of interaction. Sure, people will interact with the show account Dan runs, or they will message Dan and me directly, but that’s not really a guaranteed outcome. When I look at feeds, or file access rates, or any of that, I see that most listeners are made of dark matter. I can observe their effects, but I have no idea who they are, or what they like about the show. Where are you, silent listeners?

This is very similar to the gap Myke describes, where there’s no feedback, the feeling that maybe no one cares. Only, in a certain way, I can see that they care enough to have subscribed, and to have not unsubscribed yet. It is strange to think of the enormous chasm between observation and interaction. Sure, I admit that I am vain enough to care what people I’ve never met think, and I’m sure I care enough about what a random, important listener might think.

An interesting dynamic came a few weeks ago with the launch of Overcast. The recommendation system linked to Twitter was a new and innovative approach to looking for podcasts to listen to. It’s also a great way to spy on the people you follow to see if any of them are recommending your show. (Sorry guys, I’m spying out of love.) That recommendation isn’t sent to the podcaster, it isn’t available on a leaderboard, it isn’t a graph widget, it isn’t an analytics package for purchase; it is just a small way to see if someone you follow likes you enough to think other people should check you out.

Frodo & Gollum

Hopefully, Dan and I will continue to improve at this, and it will start to feel very natural. Maybe I’ll get some more sleep when these go up Wednesdays. I’d like to keep receiving slow, steady, positive feedback. There’s no anxiety if everything’s easy-peasy.

There’s another part of me that wants to keep experiencing this adrenaline hit. To see a rapid expansion of listeners. To do new, unexpected things with the show. To have some cray-cray success that overfills my inbox with compliments.

That’s not a healthy, sustainable impulse, and maybe it’s a sin to secretly want that? Damn it, why couldn’t Casey and Myke have talked about that part?! Way to leave me high and dry here, fellas! I guess I’ll just have to tune in next week.

2014-08-19 00:19:39

Category: text

Chrisjen Avasarala is my Spirit Animal

It is hard to believe, but there have been 4 books in the ‘Expanse’ series written by James S.A. Corey. James is actually a pen name for Daniel Abraham and Ty Franck. They take turns writing and editing alternating chapters in these books. The first novel, ‘Leviathan Wakes’, was nominated for awards, and made a fairly big splash when it came out. Many people that read the first book might be unaware of the sequels, or have not kept up with what’s happened in the latest books.

The “world” of the books is referred to as ‘The Expanse’. The human race has expanded out from Earth, colonized Mars with domes, filled large asteroids and moons with underground cities, and started mining distant asteroids and comets for resources. Humans have been born in space, elongated from the lack of gravity. Language is a colorful patois from the people that moved out to ‘The Belt’ and beyond. Conflict arises between Earth, Mars, and these distant settlers.

In many ways, it shares some similarities with Firefly (Ugh, I know, right?) with Earth being ‘The Central Planets’ and everything else sort of being like the outer worlds of ‘The Verse’. The similarities are superficial because there are far more forces at work in the solar system than there were in Firefly. Three-way political machinations, noir detective stories, Lovecraftian horrors, corrupt corporate research, etc.

There are flavors to science fiction, where you can have sci-fi horror, sci-fi fantasy, etc. In the words of one half of the “author”:

Daniel Abraham: As far as mapping the books to particular genres, Leviathan Wakes is our noir, Caliban’s War is our political thriller, Abaddon’s Gate is our haunted house story, and Cibola Burn is our western.

The Expanse will be a TV series too (don’t Firefly it.) It will be interesting to see how it gets adapted. The novels feel more like films in a series than they feel like a television series. While I said that I didn’t think Ancillary Justice could be adapted for the screen, I can see how every single thing in these books can be adapted for the screen. Even the skips in time are described, as well as the way the chapters seem to cut between different points of view. I assume they will need to pad out some things, and that each season will constitute approximately one book, since there are 10 episodes. I don’t think they’ll run into the same lag George R.R. Martin has with the Game of Thrones series, because “James S.A. Corey” has been putting out one Expanse novel a year, as well as some novellas.

Thomas Jane has been cast as Miller. Which seems different from what I had pictured in my head, but not punishingly so.

Leviathan Wakes

The first book in the series starts with an abduction that goes awry. A woman is taken, and stuffed into a locker on a ship. No one comes back to get her, and she panics and escapes, only to come face to face with something.

We spend the rest of the time wandering between the stories of an ice mining crew, with Jim Holden front and center, and Detective Miller, an eccentric noir detective trying to find the woman from the prologue. The stories weave together, as one might expect, when we get to a big climax.

Throughout the novel, you get the flavor of the world, the politics, the personalities of it. There are some tropes in here, to be sure. The detective has more in common with characters of the silver screen than with actual detectives. Amos, a gruff, violent, brute of a man, fills the gruff-violent-brute-of-a-man role. Holden is your Han Solo, your Mal Reynolds, your dumb-rogue-with-a-heart-of-gold. Naomi is the smart one that plays hard to get. Alex Kamal is, basically, a cowboy pilot. That is not in any way a slight to these characters, they’re just fun, and they do fun stuff.

Also, there’s Fred, who’s your basic NPC mission dispatcher. He works for the Outer Planets Alliance (OPA), but he’s basically there to give Holden things to do.


The main source of conflict in this book stems from the protomolecule (a term that I have never, ever been a fan of), a thing, like a virus, that traveled to the solar system on an asteroid that was captured by gravity. Its payload is held in stasis until humans do what they do best, and try to kill other humans with it.

The real lesson here, the one that the bad guys don’t learn, is that maybe you should not try to do that. It all goes wildly out of control as the protomolecule spreads, and it becomes clear that it is programmed with its own agenda, one that threatens all of humanity — as these sorts of things are wont to do.

Miller, who starts out as pretty annoying and whiny, really leaps to the forefront when action happens. From the instant we meet Holden, we expect him to save the day, but it’s really Miller (literally Miller) that saves the day. The dull stuff, at the start on Ceres, becomes more interesting as we see Miller break out of it in later chapters.

The ship, the Rocinante (Roci for short), is a Martian naval ship that Holden claims as salvage, and names after Don Quixote’s horse. The name should be a huge clue that it’s an indulgence of the author(s) more than an indulgence of the characters. Like in Star Trek, how every time someone says a line from a Shakespearean play, the other person in the scene says the act, scene, and line number. People are always very well versed about these sorts of things in the future. The Roci is our Millennium Falcon. Holden uses the Roci to pursue what he sees as “the right thing to do” — to solve the mystery, to keep war from happening, to find out who is controlling things behind the scenes.

It’s the assumption of this role, where he’s the guy that’s going to save the solar system, that sees him get hit with criticism by other powerful characters in the novel. His idea of fixing things involves sending out broadcasts to the entire solar system every time he finds a clue to something, with little regard to what effect his broadcasts will have. He’s a moron, but he’s going to save people.

When Miller convinces the seed of the protomolecule to divert its efforts from Earth, and hit Venus instead, sacrificing both of them, he solves his case, and saves everyone. What’s on Venus is very much alive, though.

Caliban’s War

The novel starts with marines patrolling outside the domes on Ganymede. You know, normal stuff. Then nearly everyone is slaughtered by a gooey monster, with our new POV character, Bobbie Draper, a Martian marine, escaping.

A political thriller sounds incredibly dull. It’s not even presidents, monarchs, and dictators, it’s under secretaries. Oh boy? Yay?

Surprise! Chrisjen Avasarala is the fucking best fucking character in the whole damn, mother-fucking series! She is a surly, cynical, foul-mouthed, teeny-tiny grandmother. Above all else, she knows how to play the game. Where those of us (sane people) find the prospect of intra-government and inter-government wheeling and dealing to be extremely tedious and corrupt, she takes great joy in wielding it to crush others. She even uses the inefficiency of government as a weapon to force others to bend to her. Naturally, this makes her unpopular with a great many people. She acquires a personal bodyguard: naturally, a Martian marine. The unlikely duo proceed to have a bit of fun, with each taking turns being the ‘fish out of water’.

A third, new POV is Praxidike Meng. A father that has lost his little girl to a disease.

The bad guys here are related to the bad guys in the first novel, having not really learned anything from the events of the prior novel.


Just kidding about that ‘lost his little girl to a disease’ part, she was totes abducted by unscrupulous, evil scientists — my favorite kind.

Turns out, the bad guys here are the Protogen corporation’s buddies at Mao-Kwikowski Mercantile, run by the very same Jules-Pierre Mao that was the father of the dead Julie Mao. Turns out, JP is a bit of a dick, and he has a sample of the protomolecule that he’s still trying to weaponize. You know, because that worked really well before. This time he’s working with a doctor that likes to use children without immune systems as hosts for the protomolecule, and then the doctor puts an explosive on the infected youth in case they lose control of it. Great plan, guys.

It is immediately evident to the audience who the bad guys are, and it even becomes evident to the characters who the bad guys are, but — like politics — they have to move against this opponent carefully, because JP has spent time ingratiating himself with (paying money to) more powerful politicians. Ones that have been promised the chance to bid on the weapon. It’s a really tangled web.

In the final battle, with the protomolecule-hybrid source located on Io, the bad guys launch the hybrid soldiers. A UN ship, the Agatha King, is hit by a hybrid, but the bomb failsafe fails to save the situation, and the crew of the Agatha King become vomit zombies. Holden relives much of the tension from the Eros events in this cramped environment with infected marines. A minor character stays behind to detonate the King’s self destruct, which is pretty predictable. The author(s) might as well have let him live.

The part that makes me squeal with glee is watching Avasarala deal the deadliest blow in the whole book.

“This is not a negotiation,” Avasarala continued. “This is me gloating. I’m going to drop you into a hole so deep even your wife will forget you ever existed. I’m going to use Errinwright’s old position to dismantle everything you ever built, piece by piece, and scatter it to the winds. I’ll make sure you get to watch it happening. The one thing your hole will have is twenty-four-hour news. And since you and I will never meet again, I want to make sure my name is on your mind every time I destroy something else you left behind. I am going to erase you.”

Mao stared back defiantly, but Holden could see it was just a shell. Avasarala had known exactly where to hit him. Because men like him lived for their legacy. They saw themselves as the architects of the future. What Avasarala was promising was worse than death.

Mao shot a quick look at Holden, and it seemed to say, I’ll take those three shots to the head now, please.

Holden smiled at him.

Excerpt From: James S. A. Corey. “Caliban’s War.” iBooks

The title is a Shakespeare reference: the Tempest character Caliban rebels against being controlled. It’s kind of a fault of these books that the titles are “Hey, look at me! I’m an author!” but I guess it’s better than a line someone says in the book. At least no one says “Caliban’s War” in a trite way, like a Bond movie.

As for Venus, it percolates away through the course of the novel and then spits a blob of matter out to the edge of the solar system. The blob forms a ring — a gate. We are visited by an apparition of Miller, the end.

Abaddon’s Gate

This book has the most POV character chapters of any of the books, and I blanked on all of their names when I was writing this. I’ll be honest — I really didn’t care for this book. I found it to be very dull, with things happening in ways that seemed illogical, and distracted me from really enjoying the novel as I had the first two.

We start with a kid from the belt. He’s racing in a little pod to go through The Ring, The Gate, formed by Venus’ phlegm in the last book. He thinks it’ll really impress his bros. Unfortunately for him, it was a dumb plan: when his craft passes through the ring, it slams to a slow crawl, splattering him inside the ship. The ship is no longer in the solar system, though it is visible through The Gate. It’s in a space without stars.

Carlos c de Baca (Bull) works for the OPA, and he’s in charge of the security forces on the OPA expedition to The Gate. He’s pretty bland. He means business. So much business.

Clarissa Mao, under her forged identity of Melba, is out to get revenge on Holden for putting her father in jail in the previous novel. She’s taken her money and put it into extensive body modifications that are more like something from Neuromancer than anything we’ve seen in ‘The Expanse’ series so far. Her revenge plan is totally weird and convoluted.

Anna, a peace-loving Methodist, joins the UN expedition to The Gate as a representative of the Methodist World Council. Other religious bodies have also sent envoys. There is a lot of theological discussion in scenes with her, about what the protomolecule means about god, what The Ring means about god. It’s pretty dull, and well-trodden in other science fiction books.


Holden keeps seeing that Miller hallucination and it keeps telling him to go to The Ring. You know, for reasons.

Long story short, all these competing factions vying for power want to go through the gate so they can stick a flag on whatever they find and say that they own it. You know, human stuff. This results in greedy people, and fearful people, doing incredibly stupid things that make everything worse for all of the people there.

They determine the top speed things can move at before some force knocks them back down to barely a crawl. This is important, because no one wants to turn into soup when they suddenly decelerate. Then a bunch of dumb, greedy morons vying for power violate that speed limit and make everything grind to a halt, braking ships, and stranding them all.

Holden has to go to the station — the thing the Miller hallucination wants — and try to turn off the field keeping all their ships knocked out.

What could go wrong? Oh yeah, some idiots are dispatched after Holden to keep him from the station because they need to enforce their claim on it. You know, human concerns.

Holden and the Miller hallucination make their way through chambers of the station and discover some sort of viewing system that Holden can use to see what happened. Unfortunately, what he sees makes no sense. The civilization that made the protomolecule, rings, and station vanished because of an encroaching enemy that was able to spread through them and disable their systems. They blew up solar systems to try and keep it from spreading, and then eventually killed the gate network as a quarantine procedure. No details are given, no explanation of the builders’ motivations. Nothing makes any sense with them. They just built all this stuff for their empire that did… things? We never know. I would not complain about these things being kept from the audience except for the fact that they’re literally using the computers those guys had. Computers that have the ability to make a human mind feel like it is on another world stored inside of the memory banks. It’s not like there’s a lack of detail in that stored information, guys.

Holden turns it all off, and opens all the gates that were closed for the quarantine — you know, because that seems like a real good idea.

This is why I find this third novel so frustrating. Nothing really makes much sense. You can’t go into detail on some things and leave other things as vague sketches. The characters that are added are also unappealing. Did I mention that Clarissa is an idiot? I miss the characters from Caliban’s War. Where are my swearing-grandmother chapters?

Cibola Burn

We start with Bobbie Draper on Mars. She’s just chilling with her family and they’re chit chatting about the first colony humanity’s put out through the rings, and what it will mean for them when — BOOM! Huge explosion at that colony.

Turns out the colony was started by former OPA citizens from Ganymede — you remember that lovely place, right? The one that the evil bad guys turned inside-out in Caliban’s War? Well, refugees skipped on through and landed on this planet before anyone had set up traffic control around the ring station. This is politically complicated, because Earth feels like it has the mandate to dole out who gets to colonize what. They give a charter to Royal Charter Energy. RCE thinks they own the whole planet, and that the Ganymede people are squatters.

Since the gates opened in the previous novel, all we’re left wondering is how long it’s going to take for Holden to go through them. Unfortunately, he doesn’t want to go all that badly. He’s been dodging Miller’s requests to go, and just doing simple runs. Fred and Avasarala commission Holden and the crew of the Roci to go mediate an escalating situation on the first colony world through one of the alien gates. This is something Holden is hilariously ill-suited to do. Fred and Avasarala both seem to know he’s going to fail.


This novel is much better than Abaddon’s Gate. The fact that almost no characters from the previous novel are here, while the rest are from the first two books, should give you an indication of how weak and unloved the A.G. characters are. Avasarala’s colorful cameos made me laugh out loud while I read them.

“If Fred is showing this to you, Holden, know that your home planet appreciates your service. Also try not to put your dick in this. It’s fucked enough already.”

Excerpt From: James S. A. Corey. Cibola Burn. iBooks

Havelock was always a jerk, but this novel lets him redeem himself. He’s not evil, he was never evil, he just didn’t think things through until 3/4 of the novel was over.

Elvi is a frustrating character though. She’s a mouse of a scientist. She has a crush on Holden, she stammers when she talks to him, and she says things in such a way that characters are always telling her to say it “in English” — you know, because she’s a scientist. She was fine, I just won’t miss her if she doesn’t turn up again. I think she was supposed to be the character we identified with, our way into this alien planet, but really that was Holden.

Basia chapters were ones I dreaded reading. He thinks in very small, selfish ways, and that doesn’t make for good reading. Even when he starts to realize he was wrong, he’s still fixated on things he can’t change, and reading someone selfishly beat themselves up is pretty unappealing. His redemption does make sense, even though I think they still should have carted him off for a trial and some jail time.

We never have a chapter from Murtry’s POV, but he is very present throughout the book, and we really get a sense of how crazypants he is. His actions have a brutal, nihilistic logic to them. He’d rather they all suffered and died so he could say he did his duty, than have them all live and lose RCE’s claim.

The thing that struck me the most when I read this novel was the theme of violent, disproportionate shows of force making matters worse. This was in the previous novel too, but here there’s a much clearer divide between the RCE security forces and the down-on-their-luck squatters. RCE forces use military-grade gear to suppress crowds and enforce a curfew, to bug a town and shoot down anyone that might threaten their control. This is a clear abuse of power in a way that isn’t corrupt corporate science. For several characters, it is more important to be the winner than it is to be alive. For Murtry, it certainly is about enforcing rights at all costs, and making the claim stick: assaulting, and murdering, unarmed civilians, as well as arming a militia to make sure another ship can’t escape with lithium ore. He’d rather they all died, with himself on the top of the pile, than see anyone benefit at his expense.

The tension is present with Amos and Holden. Amos wants to kill the killer, and Holden wants to keep things from escalating. Holden’s plan is not a good one, because Murtry repeatedly threatens everyone. Amos’ would lead to a total clusterfuck, but Murtry wouldn’t have been there to feed anyone ideas on how to increase the suffering.

After all, people want to believe in their institutions of security. They want to believe that police, or rent-a-cops, or the military will keep everyone alive. That they won’t violate rights. Arguably, if the RCE Governor had not died in the crash, Murtry would not have been able to assume total authority, to declare martial law.

We have a long history of abuse of police power in the United States, and in many other countries. Sadly, while I was reading this book, real-world events were escalating in horrific ways in the US once more.

The one thing that unified all the characters in the book, for a brief time, was a planet-wide disaster. A malfunctioning alien reactor detonated and sent a shockwave around the planet. Alien defense systems disabled the fusion reactors for the ships, and everyone had to survive: the blinding microorganisms in the rain, the toxic slime, all of it. Even Murtry cooperated, and they all made it through. They were all exactly the same; their guns, armor, and ships made no difference at all.

Naomi is able to get Havelock to see how his plan to just follow orders is going to get them all killed, or at least cause a lot of unnecessary death. From the POV chapters up to this point, we saw glimpses of the doubt Havelock had, as well as his sense of duty, of wrapping himself in the safe mannerisms and opinions of his superiors. The thing that sent Havelock over the edge to Naomi’s side was the prospect of arming the amateur militia he had been training. Nothing about it felt hollow or out of place, and I think it was really well executed. It was not selfish, it was selfless. He wasn’t protecting Naomi to save himself, he was doing it to protect his own men, and everyone else, too.

One really big nitpick I have is the obviousness of the blinding plague. The mechanism of it was novel, but the second they said the name of the only person not affected by the disease, Holden, I knew it was the anti-cancer meds he takes. The doctor even takes a full medical history of Holden, and surely must have known that it was the ONLY thing none of the other people were taking. It made it hard for me to believe any of the doctors or scientists were remotely capable people when it was so obvious.

Speaking of obvious, as soon as Naomi said she would sneak over to the weaponized shuttle and put in a kill switch, I knew things would go south. Any time a character spells out a whole plan in these books, it always goes awry in some way. The only successful plans are the ones that happen outside the POV of the current narrator. Thus, Naomi’s whole plan about the switch was going to fall apart, and she’d be out in space. They should have holed the dumb shuttle on the spot and just sent their evidence back to Earth.

The other strange thing is that everyone is worried about touching the protomolecule in the first two books, and then people are rubbing all over stuff in this book. It seemed pretty out of place that no one even voiced a concern about it.

The really-really big thing that bugged me was Miller. Not the way that Miller talks; I’m a huge fan of the color he provides to scenes. What bugs me is that he has weird limits on his power and knowledge that fluctuate as needed by the story. It is also bothersome that, when things are going wrong on the alien planet, Holden doesn’t use whatever limited knowledge Miller might have. For instance, when Miller gets Holden to go to the material transport network, he finds a whole cavern, nice and safe, underground. He scolds Miller that they could have used that, and Miller, rightfully, points out that Holden never asked him. They really should not have been dodging helping each other out. If Holden’s people were safe, it would have gotten Miller what he wanted, sooner.

What did Miller want? I’m still confused. I thought he wanted to better understand the thing that killed all the alien technology, but he just seems to want to use it to commit suicide and take down all the alien technology on the planet — possibly elsewhere. Since Miller was part of the Earth’s Ring, carried on a lump in the cargo bay of the Roci, did the Earth’s Ring die? Did its higher-level functions die while the rest still works? Because everything on the alien world seems to have died. Does killing all the alien tech benefit them at this point? It was malfunctioning, but it was supposedly malfunctioning because of this artifact. Hey, wait, isn’t the artifact still there? Why is no one panicking about that?! There’s a Lovecraftian hole in space that tore apart Elvi and reassembled her, killed all the alien tech, and no one’s worried that it’s still there? Who would just be like, “All right guys, let’s take Murtry to justice and let these settlers keep on assembling a shanty town on DEATH WORLD.” If you wanted that ending, then at least don’t let the audience know there’s a ball of nothingness lingering around, unsolved. I know there are nine books, but at least make the ball of nothingness go away until another book can figure it out! That’s the part that feels like Abaddon’s Gate. Don’t leave nonsensical loose ends just dangling there like that.

Indeed, we close with an Avasarala epilogue where she explains that Holden was set up to fail, to slow things down. To make it so things would take longer, because the land rush is going to drain away the people wanting to settle on Mars; Martian military tools and ships will fall off ledgers as the program shrinks away, and all the colonists will settle their own Death Worlds. You get a Death World, and you get a Death World! Everyone look under your seats, you’ve all got Death Worlds!

The only part of the conclusion I found satisfying was Avasarala trying to recruit Bobbie Draper. I can only hope that the next book is another swear-filled adventure of intrigue.

2014-08-18 15:53:56

Category: text

Visual Effects Glossary for People That Don't Care


I’ve put out an ePub (Safari users, right-click to download, otherwise it tries to load the ePub in Safari), an edited version of this post. Things are tidied up and organized to be a little easier to read.

The always interesting Dr. Drang posted this rather entertaining bestiary of construction equipment to help the masses. It spurred me on to, jokingly, create a glossary of visual effects terms that almost no normal human being would ever need to know.


Origin

In 3D programs, the origin is (0,0,0) in world space. The position of almost everything is in some way related to this origin (or to the camera, but more on that later). Even in a 2D compositing package, operations need to understand where two images may overlap, and the compositing package will use (0,0) to figure that out. Where different packages choose to put 0, and which axis is which, varies between software vendors.


Viewport

In 3D software packages, this is your window for manipulating objects in the world in 3D space. It’s not accurate for lighting, shading, or texturing under default conditions, because it is designed to be fast and interactive.


Render

The act of producing a raster image from either a 2D or 3D set of assets.

Batch Render

Rendering every frame of a batch of frames, one after the other, on a single machine.

Render Farm

A bunch of computers that are networked together, and accept ‘jobs’: commands to render a frame, or set of frames, and return the result to disk. This is so no one has to batch render frames that take minutes, or often hours, to produce.

Frame Padding

Renders of each frame receive a unique file name when written to disk, with the frame number included in the name of the file. To make sure these files sort properly, the number is padded with leading zeros. For example, with 4-digit padding, frame 7 becomes name.0007 instead of name.7, so frame 10 doesn’t sort before frame 2.
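Padding is trivial to do yourself. A minimal sketch in Python (the file-name convention, 4-digit padding, and the `exr` extension here are just illustrative assumptions, not any particular studio’s standard):

```python
# Zero-pad frame numbers so rendered files sort correctly on disk.
# "shot_v001", 4-digit padding, and ".exr" are made-up conventions.

def padded_name(base, frame, padding=4, ext="exr"):
    """Build a file name like shot_v001.0042.exr."""
    return f"{base}.{str(frame).zfill(padding)}.{ext}"

for f in (1, 42, 1000):
    print(padded_name("shot_v001", f))
# shot_v001.0001.exr
# shot_v001.0042.exr
# shot_v001.1000.exr
```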




World Space

This is pretty obvious: everything as it relates to your scene’s origin. That is your ‘world’ in the file.

Camera Space (Screen Space)

Everything relative to the camera in the scene. The camera is an infinitely small point, but it has an orientation, a field of view, and a position in space relative to the world origin. Things in camera space move in directions relative to the camera, not to the scene origin directly.
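The world-to-camera conversion is just a translation and a rotation. A toy sketch in Python, assuming a camera that only yaws about the up (Y) axis and a +Z-forward convention (real packages also handle pitch, roll, and projection, and conventions vary by vendor):

```python
import math

# Toy world-to-camera transform: subtract the camera position,
# then undo the camera's yaw. Convention assumed here: Y is up,
# +Z is "in front of" the camera.

def world_to_camera(point, cam_pos, cam_yaw):
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    # Rotate about Y by the negative camera yaw.
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    return (c * x + s * z, y, -s * x + c * z)

# A point 5 units in front of an unrotated camera at the world
# origin stays 5 units in front of it in camera space.
print(world_to_camera((0.0, 0.0, 5.0), (0.0, 0.0, 0.0), 0.0))
```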


Overscan

Expanding the view of the camera to include areas outside of the field of view. This data can be useful for seeing things just off screen in the viewer, or later on, when rendering, it can provide additional pixels for use in certain compositing operations that might shrink the frame slightly.

Clipping Plane

Every computer model of a camera has a near, and a far, clipping plane. The plane is measured from the camera. This can be used as an optimization, to ignore everything off in the distance that is not required, or to ignore particles that are 0.001 units from the camera and would produce crappy results.


Vertex

Just like in your geometry class, a vertex is a point. Infinitely small, but given a specific location in world, or camera, space. With a collection of points, connections can be made. With 2 points, you have an edge. With 3 points, you have 3 edges and 1 face, making a triangle. With 4 points, you have 4 edges and 1 face, making a square. A face with any higher number of sides is termed an n-gon. Vertices can store information other than their position.
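The vertex/edge/face relationship is easy to see in code. A minimal sketch, not any particular package’s mesh format: vertices are points, faces index into them, and the edges fall out of the faces.

```python
# A minimal polygon mesh: vertices are points, faces are tuples of
# vertex indices, and edges can be derived from the faces.

vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2)]  # one triangular face

def edges_of(faces):
    """Collect the unique edges implied by a list of faces."""
    edges = set()
    for face in faces:
        for i in range(len(face)):
            # Each vertex connects to the next one around the face.
            a, b = face[i], face[(i + 1) % len(face)]
            edges.add((min(a, b), max(a, b)))
    return sorted(edges)

print(edges_of(faces))  # [(0, 1), (0, 2), (1, 2)]
```

Three points, three edges, one face, exactly as described above.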


Edge

Two joined vertices produce an edge. This would be a ‘line’ in your geometry class.


Face

Three, or more, edges are required to make a face. This is drawing a triangle, or a square, on a piece of paper, and then filling it in. Faces are useful for making surfaces to look at, for calculating simulated collisions, and many other things.


Primitives

Different software packages will include different primitives, but they typically consist of spheres, cones, cubes, etc. Some software vendors will include more complicated geometric constructs under their ‘primitives’ menu — fun things like teapots, and ponies.


Modeling

The verb for making geometry. A particular artist might specialize in modeling. They are not voguing at work, they just make things in computers that may, or may not, be animated to vogue.


NURBS

Non-Uniform Rational B-Splines. They are fancy, vector-based splines that allow for curvy edges, instead of a direct line linking two points like a polygon mesh. The Wikipedia page for them is pretty dang neat.

NURBS Control Point (Control Vertex)

This is a point in space that dictates the vector stuff for your curve.


NURBS Curve

Two control points, with tangents and stuff. Think of curves in 2D vector graphics programs like Adobe Illustrator. A curve can also be shaded to have width in world space, or relative to camera, which is how most CG hair is made.

NURBS Surface

One big square. Every complicated object is made with this one sheet of imaginary voodoo. Each sheet can be subdivided into really tiny pieces through a process called tessellation. A NURBS sphere is just the sheet wrapped in space with infinitely small top and bottom points. You wind up with a seam down one side of the sphere. A NURBS cube is 6 NURBS surfaces with all their edges meeting. More complicated shapes can be made by using booleans to cut holes in the surface (but the surface is still ‘there’ in a sense, just stenciled out of existence, so it’s a little weird). Another technique is to stitch patches of NURBS planes together to make a character, but if the seams aren’t perfect, they pop, just like an improperly sewn garment.

NURBS were very popular in the 90s because they allowed for very smooth surfaces without a very dense polygon count. They fell out of fashion, and mostly subdivision surfaces are used now.

Subdivision Surface (Sub-D)

This is like a polygon surface, but the surface can be divided into smaller and smaller faces, and then reverted back to a base mesh. This makes it easy to work with a lightweight version of your character in your scene, and then only subdivide it for renders where the extra detail is required.


Mesh

The joined polygonal, or subdivision, surface.


Instancing

Depending on the specifics of a software package, a piece of geometry can be loaded once and instanced to many locations in the world. Trees, buildings, leaves, crowd animation, anything. There are usually systems to manage these instances in ways that can randomize them, or pick from a library of different sources based on certain conditions. Instancing is really good for memory optimization, since you can potentially load only a few things, but render thousands of versions of them.

Edge Loops

An edge loop is basically following an edge all the way around a polygonal mesh until it loops back on itself. Draw a line around your knuckle, perpendicular to your finger, and you will have an edge loop around your knuckle. These kinds of surface features are important when you move on to deforming geometry, because they can create areas that compress and expand without tearing, or crinkling.

Surface Normals (Normals)

The Wikipedia explanation is kind of a pain to read. A normal is basically a vector calculated from a surface. The vector of an edge, or a face, can be changed to simulate the look of a crease, or a completely smooth, round surface. Video games rely heavily on manipulating surface normals to make low-poly surfaces look smooth, or more detailed, but often betray the illusion along the silhouette of the object, or character. Under optimal circumstances, this can be used with texture maps to bend the light on the surface inside of each face. This can give the illusion of detail that would require an enormous amount of geometry. The way that a surface normal is calculated usually means that it’s best to use polygons with only three, or four, sides.
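For a flat triangle, the normal is just the normalized cross product of two edge vectors. A minimal sketch in Python (generic math, not any package’s API):

```python
import math

# A face normal: take two edge vectors of the triangle, cross them,
# and normalize the result to unit length.

def face_normal(a, b, c):
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(sum(x * x for x in n))
    return tuple(x / length for x in n)

# A triangle lying flat in the XY plane points straight up +Z.
print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```

Note the winding order matters: swap two vertices and the normal flips to point the other way.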

Bind Pose

This is the default position, and properties, of your geometry before you’ve applied rigging to it. It is your muppet without the hand inside, all perched and ready to go. A good bind pose is important because it dictates what everything looks like before you start stretching and compressing surfaces, and thus stretching and compressing any textures on those surfaces. In most cases, a human being will be standing with their arms sticking out at an angle from their torso. Arms straight to the side, or arms straight ahead, will have very severe stretching in almost any situation that is not to the side, or ahead. Having them at an angle gives you a nice middle ground, and a more natural look. Bind pose could be anything though, pugs, caterpillars, chairs, etc. It’s just whatever the most natural default is.


UVs

X, Y, and Z are already used, so who ya gonna call? UVs! Ahem. UVs are the 2D vectors along a plane used to texture objects (and other fun stuff that requires 2D surface coordinates). Because your object is in 3D space, and the textures are in 2D space, you have to either project 2D space on to your object, or unwrap your object to a 2D plane.

UV Projection

You can project UVs on to a surface mesh from your camera, from an orthographic view of the top, or the side. You can project them in a cylinder. If you have anything more complicated than a tin can, or a sphere, then you’re going to need to unwrap your surface. Imagine you have an action figure in your hand, and it’s just a thin shell, no thickness at all. Now imagine cutting in to the action figure so you can unfold it on to a piece of paper. You’ll need to bend your model in some places, which means your UV coordinates on your mesh will not be exactly lined up with where the XYZ coordinates are. This is necessary, but moving these points too far apart means that you’re going to have something that will look stretched, or compressed, when the 2D surface is painted and reapplied to your model. You can cut it into a bunch of individual faces, but then you will have texture seams EVERYWHERE. Good luck matching the painted edges of each of those! Sucker!


Ptex

Per-face texture mapping (that would really be PFTEX, let’s be real) is a technique for skipping UV layout. This means you will need to paint in 3D space, or you will have seams. This is better explained in this video that has a big head, and little arms.


Rig

The invisible armature in your model that controls it. This is made of things that look like, and are referred to as, bones, which are composed of joints.


Rigging

The act of making a fake skeleton.


Rigger

The person that makes the fake skeleton.


Joint

Like a vertex, but for your rig. Its position, and rotation, will influence the mesh it is bound to. Joints are parented to one another to create ‘bones’, and when the top-level joint is rotated, or translated, all the underlying joints move with it.


Binding (Skinning)

You take your invisible skeleton, and your geometry, and you tell the software that the skeleton should drive that geometry.


Deformation

Whenever you manipulate the points of a mesh, you are modeling, because you’ve produced a frozen object. It is what it is. When you add the element of time, it becomes deformation. If you have a lump of clay, it is a model. If you push your finger into it, it will smush as you push it, over time. You’ve deformed it. Now imagine you could do that with a computer. N-gons will deform in really funky, and unpredictable, ways, because the surface within the edges will recalculate as some vertices get closer to, and farther away from, one another. N-gons suck. This is also why an edge loop will create the illusion of something bending smoothly, as all the parallel edges curl along a perpendicular, deforming one. Quads are really superior here, in almost every application.


Clusters

These are imaginary points, like vertices, or joints, that can be parented in weird places. They can affect the surface like a joint, with weighted influence, but they are often used as the building blocks for more complicated deformers.


Deformers

Tools in a software package that can manipulate surfaces, or points, in very specific — often single-purpose — ways. For example, you might have a ripple deformer, which will make a ripple through the surface, as if it were the surface of a pond and you dropped a stone in it. Many packages have deformers that function almost like mini-rigs, all prepped for you to use where you see fit, for that specific task.


Weighting

Not like weight lifting. This is just a term for the amount of influence a deformer, or a joint, can have on the mesh it is affecting. If you had a joint in your human character’s upper arm, and one in their elbow, you’d want to control the amount of influence each joint exerted on the surface. When your elbow bends, it should not be moving the top of your shoulder down. Typically, weights are adjusted by ‘painting’ them with a tool in your software package. Some joints might be added along a bone, in places that are not anatomical joints in a human body, just for the purposes of weighting the mesh with more fine-tuned control.
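Under the hood, this is usually a weighted average: each joint proposes where the vertex should go, and the weights decide how much each opinion counts. A toy sketch of that idea (the ‘shoulder’ and ‘elbow’ transforms here are hypothetical stand-ins, not a real skinning algorithm from any package):

```python
# Linear blend skinning in miniature: each joint proposes a transformed
# position for a vertex, and the result is the weighted average.
# Weights for a vertex are expected to sum to 1.

def skin_vertex(point, joint_transforms, weights):
    """joint_transforms: functions mapping a point to a point."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    x = y = z = 0.0
    for transform, w in zip(joint_transforms, weights):
        px, py, pz = transform(point)
        x += w * px
        y += w * py
        z += w * pz
    return (x, y, z)

# Hypothetical joints: a 'shoulder' that moves the point up 2 units,
# an 'elbow' that leaves it alone; 75/25 painted influence.
shoulder = lambda p: (p[0], p[1] + 2.0, p[2])
elbow = lambda p: p
print(skin_vertex((1.0, 0.0, 0.0), [shoulder, elbow], [0.75, 0.25]))
# (1.0, 1.5, 0.0)
```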


Constraints

Constraints are invisible rules that govern how different objects relate to one another inside the 3D scene. Usually they keep things ‘together’.

Point Constraint

Gluing something to a point in space. Rotation, and scale, are unaffected.

Orient Constraint

This makes something line up with the rotation values of something else in world space.

Parent Constraint

This is like parenting two objects, but it doesn’t change the hierarchy of the scene, since it uses the constraint to do it.

Forward Kinematics (FK)

When you curl your finger, that’s FK. The base joint moves, the next knuckle moves, and so on. The movement is inherited according to how the joints are parented. Fingers, and arms, are obvious candidates for FK.
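That inherited rotation is all FK is. A toy 2D chain (bone lengths and angles are made-up illustration values): each joint adds its rotation to everything above it, and the tip positions fall out.

```python
import math

# Forward kinematics on a 2D chain: each joint inherits the
# accumulated rotation of every joint above it in the hierarchy,
# just like curling a finger.

def fk_chain(bone_lengths, joint_angles):
    """Return the world position of each joint tip (angles in radians)."""
    positions, x, y, angle = [], 0.0, 0.0, 0.0
    for length, theta in zip(bone_lengths, joint_angles):
        angle += theta  # rotations accumulate down the chain
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# Two bones of length 1; bend only the second knuckle 90 degrees.
tips = fk_chain([1.0, 1.0], [0.0, math.pi / 2])
print(tips)  # first tip at (1, 0), second at roughly (1, 1)
```

Notice that rotating the base joint instead would move both tips, which is exactly the parent-to-child inheritance described above.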

Inverse Kinematics (IK)

This is when you push off of something. When a human walks, their feet stay planted on the ground until they are lifted off the surface. If this were FK, then the hip, knee, and ankle would all move, and move the foot; it wouldn’t stay planted on the ground. Almost any rig is littered with FK/IK switches to toggle between whichever system works best.


Blendshape

You take a mesh, duplicate it, and deform it. Then you tell the software package which one is the source, and which one is the target. Now you’ll have a blendshape value that can be adjusted to linearly translate the points from their original location to the new location. It’s mighty-morphing technology. Because the movement is linear, in-between shapes are often modeled. This can be for things like sculpted muscles, or lips.
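The math really is that simple: per-vertex linear interpolation driven by one weight. A sketch (the two-vertex ‘neutral’ and ‘smile’ shapes are hypothetical examples):

```python
# A blendshape is a per-vertex linear interpolation between a base
# mesh and a sculpted target, driven by a single weight value.

def blend(base, target, weight):
    """Move every vertex linearly from base toward target."""
    return [tuple(b + weight * (t - b) for b, t in zip(bv, tv))
            for bv, tv in zip(base, target)]

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]  # hypothetical target
print(blend(neutral, smile, 0.5))  # halfway: both vertices at y = 0.25
```

Because every vertex travels in a straight line, a mouth corner sweeping through an arc needs modeled in-between shapes, which is exactly why they exist.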

Control Curves

NURBS curves that float in space around your rig. They are typically parented to specific parts of the mesh, or the joints of the rig, so that the controls move through space with the character. The curves are used as proxies for manipulating the armature. Directly manipulating the joints is something only an insane novice would think to do. Control curves give you the ability to set functional properties that can be reversed, animated, or connected to other properties through expressions, or constraints.


Animation

This is the fun part that most people think is super neat. This is where you take your character and you make it do things by assigning mathematical values that change over time. (I’m kidding, you’re mostly dragging those curves around, math sucks, bro.)


Frames

Frames are a unit of time. A certain number of frames results in a certain number of seconds. Everyone’s seen nerdy sites that freak out about FPS (First Person Shooters (No, I’m kidding, I mean Frames Per Second)). Most film is played back at 24 frames per second.
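The conversion is plain arithmetic, shown here as a minimal sketch:

```python
# Frames are time: at a given frame rate, frame counts convert
# directly to seconds and back.

def frames_to_seconds(frames, fps=24):
    return frames / fps

def seconds_to_frames(seconds, fps=24):
    return round(seconds * fps)

print(frames_to_seconds(48))   # 2.0 seconds at film's 24 fps
print(seconds_to_frames(1.5))  # 36 frames
```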


Keyframe

A frame where a value is explicitly set. With several of these frames, and values that change over time, you create animation, because crap will move around from one place to the other. Wikipedia has handy graphics for this one, including one blue scribble that might give you a seizure.


Interpolation

The implicit behavior that occurs between two keyframes. If you keyframe a ball in one corner of a room, and then you keyframe the ball in the opposite corner of the room, the ball will linearly translate from one position to the next. You did not give it any keyframes in the middle. You can change this interpolation in every animation package with a curve editor. This is a graph that plots the value of your keyframed attribute in Y, and the frame number in X. You can change that linear interpolation to a smooth curve, which will make the ball appear to slowly accelerate, move quickly in the middle of the room, and slowly decelerate. You didn’t add any keys for that, it’s all handled by the tangents of that curve.

Pose to Pose Animation

A style of animation where all the essential keyframes are created, resulting in the character, or object, moving from one ‘pose’ immediately to the next. The inbetween frames are often refined later, but a certain ‘snappy’ quality often remains.

Straight-Ahead Animation

Stepping through, one frame at a time, and setting a keyframe on every frame. This used to be more common in certain 2D animation styles. It is still used when specifically tracking to a 2D, recorded performance.


Rotoscoping

Max Fleischer originated it (suck it, Walt!) but some of the most memorable rotoscoped scenes are from Disney films, such as Snow White and the Seven Dwarfs (shoot). The technique requires tracing every frame of recorded footage. These days it is most commonly used to describe matte generation (tracing photography with splines in a 2D compositing application). It can be applied in 3D, with a CG character’s performance being viewed from a camera, and matched to a live-action performance. This is different from motion capture, because the performance is only photographically recorded, and no 3D data for the performance is generated. Animators typically look down on this for the same reason that artists look down on tracing portraits from photos.

Walk Cycle

Just what it sounds like, a cycle of animation of a character walking. These cycles can be added to libraries and reused when needed, since animating every footstep a background character might take would be tedious.


Blocking

A first pass of animation where interpolation is usually disabled. This leads to just a rough overview, like an old-style animatic.


Animatic

You take a bunch of storyboards, time them out, and you get a very, very, very rough animation of your movie. Temp recordings of dialog, and sometimes temp music, are added to the animatic. This can help the director see what he wants to do. It is almost exclusively used in completely animated productions these days, though it has been used for live-action movies that have animated characters.

Previsualization (Previz)

This is basically doing a rough job at the modeling, texturing, and animation, to make a not-pretty mock-up of what the visual effects, or CG characters will look like later. It’s like an animatic, but WAY more expensive. Previz is essential during some live-action shoots because it allows the director to direct his actors so they will more easily integrate with the final work done at a later time. Pretty much nothing is kept from previz except for whatever is used in editorial. This editorial cut will be used by animators to make their animation, like a very expensive animatic.

12 Principles of Animation

Again, I defer to Wikipedia on this one.


Twinning

Because everything is computer perfect, and all the poses were set on nice, clean keyframes, there’s going to be twinning: two things happening at exactly the same time.


Offsetting

Some of the keys are dragged around to happen before, or after, the frame they were originally set on. This provides overlapping action. Instead of a character raising both arms at the same time, one will move a frame or so before the other. This produces a fluid, natural range of movement without twinning.

Once all the keyframes are set, the animation is typically offset to prevent twinning. Pixar did this with their eye blinks too. If you pause a Pixar movie when a character is blinking you will see that the eyes are closing and opening on different frames.


Playblast

You’ve got your Pixar-level, super-duper, awesome animation all set, but it’s typically good form to actually watch it before you tell anyone it’s good. You do this by telling your animation software to basically capture a screenshot of the view in your application for every frame, and then stitch that back together into a movie. Certain bells and whistles will vary from application to application, but this is an essential step to evaluate animation, because almost nothing will play back in realtime inside of the viewer of your application. This is a kind of batch render.


Matchmove (Camera Tracking)

It is very important on a live action film project to track the camera movement, and camera properties, of the real-world camera so that everything can be constructed in a way that works for whatever else you plan on doing. The photographic plate needs to be unwarped (the bowing effect from the camera lens removed so you have a flat image) and different algorithmic solvers can give you a good head start on figuring out the space.

Look Development

This mostly includes texturing, but not the actual work of laying out UVs. It is primarily concerned with the application of shaders, with specifically-set material properties to your geometry.


Wireframe

This is a view of just the edges that make up a surface. This was commonly used in the 1980s and 1990s in graphical overlays to show how high-tech something was. It’s most useful in an interactive viewer. There are also special wireframe shaders that let you render something with backface culling and adjustable thickness to the wireframe edges.


Shaders

Even in the basic preview the application’s viewer presents to you, things are shaded (often in OpenGL). Since vertices are infinitely small, something has to draw a dot there for you to visualize. Surfaces are shaded with some default material that usually resembles plastic. A shader is the code snippet that tells whatever is rendering the view how to return the pixels. When light hits a surface, at a certain vector, it will trigger some component of the code to shade the surface in a specific way.


Materials

Materials are often referred to separately from shaders. A shader is a chunk of code, and materials are basically a group of preset values to use with that shader. This distinction is often confusing, and most people think of shaders and materials as being the same thing.


Diffuse

Let there be light! The diffuse lookup is the matte illumination of the surface. Think of a mostly matte surface like concrete. It illuminates pretty evenly when light hits it.

Specular (Spec)

Shiny! The bright, hot highlights on a surface are specular components. This is typically a very tight lookup of the relationship between the light, the surface normal, and the camera.
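
The textbook Lambert (diffuse) and Phong (specular) models show how different the two lookups are. This is an illustrative Python sketch, not any renderer’s actual shader code:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_diffuse(normal, to_light):
    """Diffuse: only the angle between the surface normal and the light matters."""
    return max(0.0, dot(normal, to_light))

def phong_specular(normal, to_light, to_camera, exponent=32):
    """Specular: a tight highlight around the mirror-reflection direction,
    so the camera position matters too. Higher exponents, tighter highlights."""
    d = dot(normal, to_light)
    reflected = tuple(2.0 * d * n - l for n, l in zip(normal, to_light))
    return max(0.0, dot(reflected, to_camera)) ** exponent

n = (0.0, 1.0, 0.0)       # surface facing straight up
light = (0.0, 1.0, 0.0)   # light directly overhead
print(lambert_diffuse(n, light))                  # 1.0 -- fully lit
print(phong_specular(n, light, (0.0, 1.0, 0.0)))  # 1.0 -- camera sits in the hot spot
print(phong_specular(n, light, (1.0, 0.0, 0.0)))  # 0.0 -- camera off to the side
```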


Reflection

Shiny? Technically, reflection is specular, and specular is reflection. Many software packages split the two, which allows for easy adjustment of one or the other. Reflection can be most easily thought of as chrome, but even human skin and concrete reflect the world around them.


Refraction

Bendy! A surface normal controls how light hits a surface, but refraction controls how light transmits through a surface. Glass, water, prisms, your grandmother’s ugly candy dish, are all refractive objects. The angle the light travels through the object bends based on the index of refraction, which is a measurable property of any real world substance.

Index of Refraction (IoR)

Google around for some tables. They’re fun. The Index of Refraction is the measurement of blah blah blah. Even gold has a refractive index. Light doesn’t pass through gold, you say? Well there’s a direct relationship between reflectivity and the Index of Refraction. Physically accurate shading models will even tie these values together so that IoR can drive refraction, reflection, and fresnel.
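
For a taste of that relationship: the reflectance of a surface for light hitting it head-on falls straight out of the IoR. A Python sketch using the standard Fresnel R0 term and Schlick’s approximation:

```python
def reflectance_at_normal(n1, n2):
    """Fresnel reflectance for light hitting the surface head-on (Schlick's R0)."""
    r = (n1 - n2) / (n1 + n2)
    return r * r

def schlick(r0, cos_theta):
    """Schlick's approximation: reflectance climbs toward 1.0 at grazing angles."""
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# Air (n ~1.0) into common glass (n ~1.5): about 4% of light reflects head-on...
r0 = reflectance_at_normal(1.0, 1.5)
print(round(r0, 2))       # 0.04
# ...but edge-on (cos theta = 0) the same glass acts like a mirror.
print(schlick(r0, 0.0))   # 1.0
```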


Fresnel

The term refers to the rolloff exponent in a shader, but it can be tied to a physically accurate model that uses the Index of Refraction to drive the fresnel. Nonsensical fresnel effects can be used to achieve certain non-photorealistic looks, like “X-Ray” shaders, or shaders that cheat the look of hand drawn cel animation edges.


Bump Mapping

Bending surface normals with mapped, or procedurally-generated data. This allows for the illusion of internal detail on model surfaces, but the edges will still look like flat geometry edges. Bump maps are typically within a set range, and black and white, with one being up, and the other being down. This varies depending on the package. Special kinds of maps can use RGB vectors to provide more detail than the straight up and down of bumps.
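
One common way bump data gets turned into bent normals is by treating the map as a height field and sampling its slope. A toy Python sketch (real packages do this per shading sample, with filtering):

```python
def bump_normal(height, x, y, strength=1.0):
    """Bend a flat (0, 0, 1) normal using the slope of a height field,
    estimated with central differences. height is a 2D list, height[y][x]."""
    dx = (height[y][x + 1] - height[y][x - 1]) * 0.5 * strength
    dy = (height[y + 1][x] - height[y - 1][x]) * 0.5 * strength
    n = (0.0 - dx, 0.0 - dy, 1.0)
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)

flat = [[0.5] * 3 for _ in range(3)]        # constant grey: no slope at all
print(bump_normal(flat, 1, 1))              # (0.0, 0.0, 1.0) -- unperturbed

ramp = [[0.0, 0.5, 1.0] for _ in range(3)]  # brightening to the right
print(bump_normal(ramp, 1, 1)[0] < 0.0)     # True -- normal leans against the slope
```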


Displacement

This is like a bump value in a shader, but it actually deforms the surface of the mesh. If your mesh lacks sufficient polygons, you’re going to get a pretty crappy displacement. Because displacement requires a certain density of the mesh, it can be extremely computationally time consuming. The displacement must also occur to the mesh before many other actions are taken by the renderer. Some neat effects can be achieved with animated displacement, where a sequence of images might represent footprints in snow, or the surface of an ocean. Vector displacement uses RGB data to drive displacement along different vectors from the surface normal. This can allow for a more complex profile along an already displaced edge so it’s not just straight up and down.


Transparency

You can make something transparent.


Incandescence

The surface either looks to be self-illuminating, or actually sends out rays into the world to cast light based on the incandescence of the surface.


Iridescence

A bunch of wacky shader models to make metallic paints, insect wings, and pearl effects.

Subsurface Scattering (SSS)

Hold your hand up to a light. That. This does that thing. It can also be used for marble, jade, milk, gummy bears, everything in The Incredibles, and flan.

Procedural Noise

This is basically everything you see in Babylon 5. Procedural noise is used in shaders to create variation in a parameter over a surface by generating different kinds of fractals. This has certain advantages since you don’t need to worry about textures, or the resolution of your textures. Depending on the specific functions used to make the noise, it can look very regular, or large and craggy. These noises are best used in conjunction with textures, because otherwise you’ll have Babylon 5.

Hair Shaders

Specific kinds of shaders that shade along a width-less curve to give it width, and the illusion of being hair. Bells and whistles vary. There may be all kinds of weird cheats for performance, like shadow density stuff, so they are often pretty unusual.

Double-Sided Shading

Because surfaces have normals, they also have two sides. The front side (normal up) and the back side (normal down). Some non-photoreal effects can be used to assign different shaders to front and back faces, but typically, double-sided shading is desired for any object.

Backface Culling

Removing all the faces with surface normals bent away from camera — essentially this is the back of an object as seen through a camera. It is relative to a camera, and often used to optimize render times for things where the back of an object might not matter.
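
The test itself is just a dot product between the face normal and the direction to the camera. A minimal Python sketch:

```python
def is_backface(face_normal, to_camera):
    """A face is a backface (and can be culled) when its normal points
    away from the camera, i.e. the dot product is not positive."""
    d = sum(n * v for n, v in zip(face_normal, to_camera))
    return d <= 0.0

to_cam = (0.0, 0.0, 1.0)  # direction from the face toward the camera
print(is_backface((0.0, 0.0, 1.0), to_cam))   # False -- facing the camera, kept
print(is_backface((0.0, 0.0, -1.0), to_cam))  # True  -- facing away, culled
```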


Occlusion

Something occludes something when it is in front of it. A solar eclipse is the moon occluding the sun from the earth, and casting a shadow. Your hand over your eyes is occluding literally everything because it’s covering your view. Occlusion can describe any relationship where something covers something else, even at incident angles, like a big soft light casting soft shadows.


Lighting

This is taking your modeled, textured, shader-applied assets and rendering them with — guess what? — LIGHTS!

Spot Light

The most basic kind of light in any package is a spot light. It functions almost identically to a camera. It is an infinitely-small point, with orientation and position, as well as a cone angle. Imagine there’s a cone where the pointy end is stuck at the point of the light, and the wide end just goes on forever. Everything in that cone can get light. Often people soften the edge of the cone with different settings, like an inner cone angle and an outer cone angle, so it can fade between the two. Penumbra is the term for a value that does the same thing by making the edge of the cone fuzzy. Some packages let you set a radius on the light, which basically tells the renderer the light is not an infinitely-small point, but a disc in the same region as the cone. This can be useful in physically accurate applications when reproducing realistic lights.

Point Light

This is like a spot light, but there’s no cone angle. Light goes out in all directions from the point. It can also have a radius, a width, in certain applications so it’s more like a lightbulb in a lamp.

Area Light

Big rectangles that send light out in a perfectly even, perpendicular direction to the rectangle. This is most often used because you can get soft shadows out of it, since the light is coming from a giant rectangle instead of a teeny-tiny dot. Imagine the overhead fluorescent lights of your least favorite high school class.

Directional Light

This is like a spot light, except the light rays are all coming from one, uniform direction across everything in your scene. There is no cone angle, and no source. You can only manipulate the direction of the light. This is useful for simulating distant sunlight. Shadows from the sun are mostly parallel (don’t nitpick) where shadows from the cone of a spot light will flare out with the angle of the cone.

Light Decay

As light travels in the real world, it has less influence on things farther away. It visually decays at an almost quadratic rate from its source. Depending on the software package, this decay can be built in as an assumption of the world, or it can be totally disabled, providing infinite illumination.
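
The physically based version is the inverse-square law. A small Python sketch, with the exponent exposed the way many packages expose it for art direction:

```python
def decayed_intensity(intensity, distance, exponent=2.0):
    """Light intensity after decay. Exponent 2 is physical inverse-square;
    0 disables decay entirely (infinite illumination)."""
    return intensity / max(distance, 1e-6) ** exponent

print(decayed_intensity(100.0, 1.0))              # 100.0
print(decayed_intensity(100.0, 2.0))              # 25.0 -- double the distance, a quarter the light
print(decayed_intensity(100.0, 2.0, exponent=0))  # 100.0 -- decay disabled
```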

Decay Regions

This is like light decay, but with near and far planes that can be adjusted. This can provide very specific falloff for a light. It’s more for art-direction than it is for realism.

Shadow Maps (Depth Map Shadows)

If you are using an older, raster renderer, then you’ll typically have to contend with a system of generating depth maps from the view of each light in your scene. The renderer then uses these depth maps to figure out where light is occluded. Soft shadows are very easily achieved by blurring the shadow map, however that blurs it uniformly. If you look around in the real world you’ll see that shadows mostly start sharp and end blurry. It all has to do with the distance from the light to the occluding object, and from the occluding object to the receiving surface. You can’t do that with shadow maps. Another thing is opacity. Depth maps render all the geometry as visible. If you have glass, it will produce a solid shadow like it was plywood. This can be cheated by moving the glass object to another shadow map, and then cheating that map to be less dense, but it’s still silly-looking and there’s really no excuse for it these days.

Ray-traced Shadows

This is computationally more expensive than shadow maps, but you get way nicer shadows with more realistic-looking shadow fuzziness and sharpness. The main issue with raytracing shadows is the number of rays that are fired to get a clean result. The sharper the shadow, the fewer rays you need. The softer the shadow, the more rays you need, or you get pixels that are in shadow next to pixels that are not. Ray-traced shadows are also the only way to get accurate shadows for transparent, or semitransparent objects.

Ambient Occlusion

With non-ray-traced renderers, you need to approximate the look of ambient light getting all over surfaces, except in cracks and crevices, by checking the distance between two surfaces, and their normals. This produces something that looks like an overcast day. This can also be used as a utility pass in compositing, to give the illusion of a soft shadow under a character on to live-action ground.

Reflection Occlusion

Same as ambient occlusion, but with different surface normal requirements. It looks like chrome, if you printed it on an ImageWriter II.

Point Clouds

Point clouds are collections of data points generated in camera (light) or world space, used to speed up certain calculations like ambient occlusion, or SSS.


HDRI

A High Dynamic Range Image map that is usually read into the software as a lat-long image (a sphere unwrapped on to a long rectangle). This is used for different global illumination effects in different packages, and the selection of the HDRI map source can make or break some work. HDRI maps can be generated on a film set with chrome spheres, or with special camera rigs.


Skydome

A skydome is either an all-encompassing light source, or merely a tool of a global illumination cheat. An HDRI image is mapped on to a sphere and the sphere encompasses the world you’re rendering. This makes highlights show up in mostly the places they should, and even contributes some illumination. This is used in the place of ambient occlusion stuff in most ray-tracers.

Lighting Rig

The collection of lights grouped together into a logical element that can be exported, and reused elsewhere. A lighting rig might even include constraints to lock the rig to a piece of geometry, or a character.


Motion Blur

Motion blur is like interpolation, in that as an object is in motion, or as the camera is in motion, from one frame to the next, the object will blur according to the shutter speed of the CG camera. With too little motion blur, CG can ‘strobe’ (hello, Michael Bay!). Different rendering solutions exist for 2D motion blur, where the blur is calculated according to the vectors an object is moving on, or 3D motion blur, where the geometry is sampled over time on the ‘subframes’ in motion, creating a fuzzy haze of movement. The latter is more accurate, but time consuming. Motion vectors can also be generated as a separate render pass to be applied in comp by particular plugins.

Raster Renderer

A raster renderer only considers the pixel it’s looking at right this second, no rays are fired to figure out where realistic reflections, or shadows are. This makes these sorts of renderers super-duper fast. The catch is that if you want something to look photoreal, you need to do a lot of set up work to make sure you have just the right balance of shadow density, shadow blur, reflection maps that make things look like chrome, and all kinds of stuff. This is a really big downside. Pixar’s PRMan started life out like this.

Ray Tracing Renderer

Mostly this consists of firing a ray from the plane of the camera’s image, for each pixel, into the world. The ray hits something, and based on the surface properties determines if it sees a light, then it figures out how to shade and return that pixel value. This happens a whole lot, with multiple samples being taken to smooth out sampling noise. Sampling noise arises because you’re dicing up a whole world with tiny details into a finite number of pixels. Ray-tracing is really in right now because it produces a lot of really neat effects that were time consuming to set up cheats for in raster renderers, and even hybrid renderers. Examples include: Arnold, Mental Ray, V-Ray, and many others.

Hybrid Renderers

Pixar’s PRMan has been a hybrid renderer for a really long time now. It can do some operations as raytracing renders, and some operations as raster renders. It’s not very good at the ray-tracing, but they are improving it. There still aren’t a lot of things you get for free, versus having to set up yourself, but it’s getting better.

Lighting Passes

Under many conditions, it’s considered optimal to produce several different renders that can be combined, or manipulated, in the comp. Sometimes a lone light might be split out from the rest if it needs to have its intensity animated, or the refraction might be split out from some undulating cloaking effect, or utility renders (fresnel, or depth maps, or occlusion renders) will be produced to modulate certain things. Depending on the renderer, some of these passes can be produced from the same render pass. As the rendering engine processes the frame, it keeps all the data to split out all the extra info as separate images. It’s a handy trick, but it’s also done manually.

FX (Simulation)

Anything that explodes, vaporizes, snows, rains, catches fire, splashes, swirls, smokes, or swarms. This is an all-encompassing term for simulating complicated interactions that would be too difficult, or impossible, to implement by manually keyframing each of the many, many elements that interact.


Particles

Particles are like vertices. They are usually ‘birthed’, or ‘emitted’ from a source. Either a point in space, or along a surface. Particles have no visible component unless a shader is assigned, or geometry is constrained, or instanced to them. This can be 2D planes that face the camera, but move with the particle, called ‘sprites’ or it can be full 3D models that tumble through space, like rocky debris. Particle emission systems often have complex ways to assign random values to the particles so that each might have a different spin, or density.

Fluid Sim

Lots of things are fluids: smoke, fire, plasma, water. Well, maybe you expected water. The properties of the simulation (gravity, viscosity, all kinds of stuff) dictate what we perceive the fluid simulation to be. It is shaded accordingly.


Voxels

Minecraft! No, not really, I’m talking about volume data to render puffy clouds and stuff.

Rigid Body Collision

When two boxes hit each other.

Soft Body Collision

When two Jell-O Jigglers hit each other.

Cloth Sim

I’m including this here, but typically the people that make the clothes, and the people that blow things up, work separately. Cloth is applied in bind pose; as the character moves around, the cloth moves against the character. Improper settings on cloth can make silk look like kevlar, and vice versa.


Caching

The collisions, and simulated liquids, all need to be solved for every frame, starting with the first frame, each new frame building off the previous simulated result. You can’t hopscotch around the timeline with this, you have to do it in order. The resulting cache is typically saved to a file and read back into the software package as read-only before it is rendered by lighters.


Compositing

So you have your superhero in a cape jumping over explosions in the rain, what do you do now? You composite it all together! Integration is the name of the game, you need to make sure that all the elements that have been received from lighting can fit together and make the pretty thing.

The Comp

THE comp refers to the specific file that contains all of the compositing operations being used, and where all the work is being done.


Precomp

This is like a comp, but before it. You put together some elements that will feed into the comp to make the comp lighter and more responsive to work with.


Slap Comp

Not a Prodigy song. This is a first pass comp where all the bits and pieces are slapped together without much care given. It’s useful for things like, “Why are we missing half our stuff?”

Node-Based Compositing

There are two kinds of compositors in this world: Those that composite in a node-based compositing application, and those that are wrong. Essential to the art of modular, reusable, easily organized, functional work is a node graph, a 2D plane where a bunch of nodes (boxes) are laid out. The connection of these boxes holds significance (this node sends data to this other node). Certain connections are impossible to make in a node graph. For example: You cannot take the output of your node and connect it to your input. That is stupid, and also impossible. Likewise, any connection that would result in the output eventually connecting to the input is also impossible. A key feature of any node-based software is the ability to ‘view’ the result of a node, and the nodes connected above it, from any point in the node graph. This is how you can find out which color correction node is making everything go cyan.

Layer-Based Compositing

This is After Effects. It’s just like Photoshop, but with a weird timeline, and a bunch of stuff that stacks in an order you don’t like. It is more common in television production.


Plate

The photographic frames.

Pull a Key

This is different from a keyframe. This is keying hue, saturation, and/or value from a photographic plate to produce an alpha channel. This is your greenscreen or bluescreen work. You want to get a ‘clean key’ so that the background can be replaced with your fancy CG ice cream parlor, or shark tank.
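
At its crudest, pulling a key off a greenscreen is just asking how much green dominates the other channels. A toy Python sketch (real keyers are far more sophisticated about edges, noise, and spill):

```python
def green_key_alpha(r, g, b):
    """Crude difference key: the more green dominates red and blue,
    the more the pixel is treated as screen (alpha pushed toward 0)."""
    screen = max(0.0, g - max(r, b))
    return max(0.0, min(1.0, 1.0 - screen))

print(green_key_alpha(0.0, 1.0, 0.0))  # 0.0 -- pure screen green, fully removed
print(green_key_alpha(0.8, 0.5, 0.4))  # 1.0 -- skin-ish tone, fully kept
```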


Matte

Matte can refer to either the alpha channel of what you are keeping, or the alpha channel for what you are using to remove things. A compositor seeing a hole in the live-action actor’s head might say, “There is a hole in the matte.” A supervisor asking for a fern to be removed might say, “Matte that fern out.”

Garbage Matte

This is a quick and dirty roto, or other matte, that basically amounts to a blob, or box. It is used to contain, or screen out, things that do not require the finesse of intricate roto work.


Spill

When the color of a greenscreen or bluescreen affects the photography you want to keep, (like green light bouncing on to an actor’s face) then it is referred to as spill. Even if a clean key can be achieved, there will still be green on the actor’s face.

Spill Suppression

Neutralizing the spill that is contaminating the photographic element you are keeping. This can amount to different color replacement techniques, or desaturating that specific hue.


Lower Thirds

Text burned into the bottom of the screen. Like The X-Files.


Tracking

The art of frustration. You put a crosshair thingy on a clearly readable feature in the plate, and you push track, and it immediately fails. Just kidding, that never happens. You track a feature of a plate to either add an element to the plate that needs to move with that element, or to stabilize the plate.


Stabilization

Sometimes a director wants to take camera movement out of a scene. Tracked coordinates are used to negate that movement and stabilize it to the position from a particular frame. The camera will still probably jiggle around a little, but what are you going to do? You’re not a magician.


Retiming

You add, or remove, frames to speed up, or slow down, a plate. There are many tools to retime things, the simplest being dropping frames or doubling them up. Another procedure is taking the frame before, or after, and synthesizing a new frame, which is usually mushy, and gross.

3:2 Pulldown

This is a dumb artifact of the NTSC broadcast standard. Film is 24 FPS, and NTSC broadcast is 30 FPS interlaced. All modern media can jump between whatever speed is required, but many media libraries are chock full of things that have the pulldown baked in. You’ll notice a certain stuttery, shitty quality to old movies being rebroadcast. It is a form of retiming.
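
The cadence itself is easy to see in code: each film frame alternately occupies two fields, then three. A Python sketch of the pattern:

```python
def pulldown_fields(film_frames):
    """Spread 24 fps film frames across 60 Hz interlaced fields
    using the alternating 2:3 cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

# Four film frames become ten fields, i.e. five interlaced video frames.
print(pulldown_fields(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```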


Interlacing

The act of making everything worse by dividing up images into staggered fields and stitching them back together in horrible ways that look like crap. Some video sources are captured in an interlaced format and they are absolute nightmares to work with in compositing because of those fields. Even de-interlacing plugins aren’t going to magically fix it. Always shoot your home video progressive, and never interlaced.

Stereo Compositing

Anything that produces multiple views of the same frame. This is typically ‘left’ and this other thing called ‘right’. Most modern compositing packages pass down both views through all the same nodes. Particular exceptions will need to be made to offset things for certain eyes. The views are put out to disk and combined during playback to give the illusion of stereo. Things can be cheated in stereo space by transforming them, left or right, to increase, or decrease, the offset of the object, and thus, its relative position in space. You can’t add volume to an object that way, you’re just moving it closer or farther.


Colorspace

Real people don’t store their data in sRGB web jpegs. Light is captured in a mostly linear-float way. The human eye works in a mostly logarithmic way. We perceive differences in darker values more than we perceive them in lighter values. It can be argued that it saves space to store things in log. Unfortunately, you don’t want to do any compositing operations in log space because it’s all clamped and weird. What you want to do is work in linear floating-point space and only store the files in log, if needed. A lot of tools exist to manage storing, and viewing, pixel data from and to various colorspaces, but OpenColorIO is the head honcho. Maybe some day web developers will care about color accuracy? Ha.
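
For the curious, the standard sRGB transfer functions look like this in Python; moving between display values and linear light is exactly this kind of per-channel math:

```python
def srgb_to_linear(c):
    """Standard sRGB decoding for one channel in 0..1."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """The inverse, for sending linear light back out to display/storage."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

# sRGB mid-grey is much darker in linear light: about 21%, not 50%.
print(round(srgb_to_linear(0.5), 3))  # 0.214
```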

Lookup Table (LUT)

A way to go between colorspaces either for viewing or storage.
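
A 1D LUT is conceptually just a table you interpolate into. A minimal Python sketch (real grading LUTs are usually 3D, but the idea is the same):

```python
def apply_1d_lut(value, lut):
    """Map a 0..1 value through a 1D lookup table, linearly
    interpolating between the two nearest entries."""
    pos = value * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1.0 - frac) + lut[hi] * frac

lift_mids = [0.0, 0.6, 1.0]  # a tiny LUT that brightens the midtones
print(apply_1d_lut(0.5, lift_mids))  # 0.6 -- lands exactly on the middle entry
print(apply_1d_lut(1.0, lift_mids))  # 1.0 -- endpoints pass through unchanged
```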

Digital Intermediate (DI)

This used to refer to just sending stuff to the post house that would handle color grading for the film. Now it is synonymous with the term color grading.

Color Grading

Everything that normalizes colors between different shots, adjusts the contrast, adds warmth, coolness, or messes up everything you worked so hard on. I’m kidding! It’s just a little joke.


Editorial

Editors cut together all the shots in the film with non-linear editing software.

Non-Linear Editing (NLE)

Everyone’s used iMovie, right? That’s a really shitty, horrible, mess of a non-linear editor. More popular versions are Final Cut, Avid, and Premiere. NLE packages often include bells and whistles to do certain quick compositing tasks, but they should not really be considered compositing tools. Their primary purpose is to slip and slide shots around on timelines.


Shot

A shot is the smallest building block of your edit. It’s the set of frames between one cut and the next.


Scene

A bunch of related shots. How related they are is up to the editor and director, but typically it’s stuff that’s in the same location, at the same time.


Composition

This is different from compositing, this refers to how things are arranged in screen space. Where the characters are in relation to the camera, the effect of a specific kind of lens used, and how things move through the frame. The impact of composition is obvious when many shots are cut together because the brain stores information about what was just on screen. This can be used to create a comfortable conversation on screen, or a slasher flick.

Establishing Shot

Typically the first shot in a movie, or a change of location, that establishes where things are. We see the Death Star, and TIE fighters whizz by to establish that we’re going to be spending some time with the Death Star.

Same-As Shot

A shot that is very similar to another. In visual effects, and animation, elements might be reused from scene to scene, like backgrounds, or lighting rigs.

Wide Shot

A wide-angle lens is used. This can show “more” but space can feel unnatural the wider and wider you go.

Long Shot

A shot from really far away, usually with a wide-angle lens, often an establishing shot.

Medium Shot

Typically of a human subject, and it includes most of the human subject in the frame. This is useful for showing where characters are sitting or standing when they are talking to each other.

Three-Quarter Shot

A view of the upper 3/4 of a person. Torso, arms.

Close-Up Shot

Tight framing on the subject of the shot. Typically a human face, but it could be a close-up of a button, or trigger that is important to the events occurring in the scene.

Extreme Close-Up Shot

All up in your grille.

Over-The-Shoulder Shot

Usually used for exciting cafe scenes. The camera is perched over the shoulder of one person in the conversation, and aimed at the other person across from them. Typically this is paired with the reverse view of a camera over the other person’s shoulder.

Shot Reverse Shot

A view of one character, intercut with the view of another character. We assume the two are talking to one another.

180-Degree Rule

Wikipedia. The rule can be broken to purposefully achieve certain effects on the audience. That is not an excuse to ignore it because you’re an art student and you think ‘rules’ are dumb. YOLO.

30-Degree Rule

This is a guideline for how far the camera should move when cutting on the same subject. It’s a little hard to conceptually understand, but if you don’t move the camera around, and just push in or out on your cut, you’ll wind up with something that distracts the audience (a jump cut).

Rule of Thirds

Your iPhone has this built in. A grid of lines produced by dividing the width of the screen, and the height of the screen, by three. The eye typically focuses on elements resting on those lines, and particularly on elements resting on the intersection of those lines. However, this is a guide and should not be taken literally.

Dutch Angle

Tilting the camera so it isn’t aligned with the ground. This can give a fun-house effect. It should be used sparingly, to intentionally produce a jarring, or unsettling result. It comes from German Expressionist directors (‘Deutsch’). Americans just call them ‘Dutch’ because: America.


Transition

How you get from one shot to another. This could be a cut, it could be a dissolve, it could be fancy matted-out objects overlapping into the next shot.


Cut

A camera cut is when one piece of footage ends and new footage begins. There are many different kinds of cuts.

Jump Cut

Almost exclusively used in horror films, this is basically removing a chunk of time from within a shot. The camera appears to ‘jump’ from one position to the next. It is unsettling, hence the association with horror, and not 27 Dresses.

Match Cut

Cutting between two different shots that have elements which graphically match between them. The prime example is 2001: A Space Odyssey, when the femur is tossed into the air and it cuts to a nuclear weapons satellite in the exact same position in screen space. (One of my favorites is from The Fall; keep an eye out for the face and landscape in the trailer.)

Cutting on Action

In one shot, a character’s arm reaches forward; we cut, and in the next shot we see a close-up of that hand grabbing an ice cream cone. Delicious, artisan-crafted, action-packed ice cream. The movement is continuous and impactful even though the cut occurred in the middle of it, and the two shots could have been filmed at different times, even with a stunt-hand. You mostly see this in action scenes, as characters flail around trying to land punches.

Fast Cutting

The speed of the cutting can affect the perceived passage of time. When camera cuts come fast and furious, things feel like they are happening very quickly. This is because your eyeballs are hit with new information in rapid succession. Action scenes, explosions, all that stuff that needs frenetic, chaotic energy. Use sparingly to achieve the desired effect, use excessively to look like Michael Bay.

Slow Cutting

You’ll never guess what this one is.

Long Take

A shot that goes on for a really long time. That sounds pretty arbitrary, but it’s relative to the length of the other cuts in the film. If you have some big opening shot where you’re touring Serenity with a steadicam, then you have a long take. “Hey, look at me! I did something hard!” is typically what the director wants the viewer to know by using this shot. Another example that you either love or hate is the opening of Gravity.


Cross-Cutting

Going back and forth between one scene and another. This can create tension by weaving together several events. The Battle of Endor at the end of Return of the Jedi is cross-cut between the Emperor’s throne room, the rebels on the surface, and the rebel fleet (there are smaller units to how that breaks down, but you get the point). The technique is used extensively in The Fifth Element, often to comedic effect: something is revealed to the audience by a character in one scene as the character in another scene discovers the facts themselves.


Dissolve

One shot gradually turns into another shot.


Wipe

Have you seen a George Lucas movie? All those. Horizontal, diagonal, vertical, radial (shudder): anything where a graphical element wipes across the frame. More subtle examples are when an object, or person, in the scene moves past the camera and becomes part of the wipe. You’ll see this in crowd scenes. Some headless bozo walks right to left in front of the lens, and on the right, behind the bozo, is the next shot.

Iris In/Out

A cheesy circle opens or closes to reveal the next shot. This is a kind of wipe.


Montage

We’re gonna need a montage.


Handles

Sometimes, a shot will only be a set number of frames, but it will have a few extra frames before and after the shot range, just in case the editor or director wants to ‘open up’ the shot and make it a hair longer. Handles are not a requirement.

Demo Reel

A software vendor, visual effects company, animation studio, or artist will stitch together some shots they feel are really good. About one to three minutes in length, it shows bits and pieces of the work, either to submit for an award or, more commonly, to secure future work.

Reel Breakdown

All the shots from the reel get a short explanation, usually in a spreadsheet: what software was used, what roles were involved, artist names, and any potentially relevant information.

2014-08-11 23:19:17

Category: text