Unauthoritative Pronouncements


Using iOS for Eeeeeeevil

I felt like being annoying (what’s new), and I decided to take advantage of an ASCII Twitter meme from last… uh, summer? Matt Alexander had already harnessed this meme for the patented “Super Fave,” but I needed to make it even more ridiculous. Easy to do with a simple text substitution: just replace all of one character with a different character. Simple find-and-replace stuff. You can do this on your iPhone or iPad so easily. If you don’t have Pythonista, you should, because: PYTHON WORKFLOWS. It seems so intimidating to “program” something if you don’t do it often, but the function is literally called replace, and you give it the text to replace and the text to replace it with: 'this string'.replace(original, new). That’s not fucking Scheme, it’s not Objective-C, there are no End Tells. That means you can bug the shit out of people without breaking a sweat!

Start with whatever ASCII meme you want to butcher, like a “Super Fave.” If it’s on multiple lines, put three quotation marks in front of it and behind it. This lets Python know it should respect all the lines of text.

superf = u"""
☆。 ★。 ☆  ★
 。☆ 。☆。☆ 
★。\|/。★   
  SUPERFAVE
★。/|\。★ 
 。 ☆。☆。☆ 
☆。 ★。 ☆  ★
"""

Easy, right? The “u” at the start of the quote lets Python know you’re using fancy Unicode characters. No big deal really, you just put a “u”. (On Python 3 you can leave it off entirely.)

Now that you’ve assigned the text to a variable, it’s super easy to manipulate. Assign some other variables for the replacement characters, like:

a = u'💩'
b = u'🌹'

Boring, I know. Now let’s do the replacement:

r = superf.replace(u'★',a).replace(u'☆',b)
print(r)

Notice that I assigned it to a variable again so I could reuse this. I’m also chaining the two replace functions to each other. You could put every one of these things together without defining any variables, but it’ll just look like a train-wreck, so skip it.

If you want to make this an interactive application, there’s a function in Python called raw_input() (plain input() on Python 3) that lets the user enter text at run time. Inside the parentheses, you can specify text to put at the prompt line to give people some kind of instruction. Like so:

a = raw_input(u'★  = ')
b = raw_input(u'☆  = ')

There are also some fluffy things I added to the script that are Pythonista niceties, and not things that work on the desktop. Pythonista helpfully includes a console module to control the way text is printed out when the script runs. I want to set it to Helvetica Neue so it’ll closely match what iOS Twitter clients will display. Naturally, these are not monospaced fonts, so the variable-width characters will deform the overall shape of the text to some degree. The clipboard module just copies the text output right to the clipboard when the script runs. No need to do it yourself. We live in The Future. You can run the script without any of those extras just fine on your desktop.
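Stitched together, the whole thing is only a few lines. Here’s a sketch of it, assuming Python 3 (where raw_input became input); the console and clipboard modules only exist in Pythonista, so they’re wrapped to degrade gracefully on a desktop:

```python
# Sketch of the whole meme-mangler, assuming Python 3.
# console and clipboard are Pythonista-only modules, so they're optional here.

superf = r"""
☆。 ★。 ☆  ★
 。☆ 。☆。☆ 
★。\|/。★   
  SUPERFAVE
★。/|\。★ 
 。 ☆。☆。☆ 
☆。 ★。 ☆  ★
"""

def mangle(text, star, hollow):
    # swap every ★ and ☆ for whatever characters you supply
    return text.replace('★', star).replace('☆', hollow)

def main():
    try:
        import console, clipboard      # Pythonista niceties
        console.set_font('HelveticaNeue', 16)
    except ImportError:
        clipboard = None               # plain desktop Python; skip the extras
    a = input('★  = ')                 # raw_input() on Python 2
    b = input('☆  = ')
    result = mangle(superf, a, b)
    print(result)
    if clipboard:
        clipboard.set(result)          # copied for you; The Future
```

Calling main() prompts for the two substitution characters and prints (and, in Pythonista, copies) the mangled meme.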

I initially sent this to two very responsible people on Twitter. I admonished them not to use it for evil. Then I used it for evil, like, uh, 20 times.

Go forth, my minions, and annoy the shit out of everyone!

2014-05-28 22:34:00

Category: text


Really Senescent Syndication

The other morning, I was neurotically reading all the tweets I missed while I slept, and I came across a mini tweet-storm from Federico Viticci, of MacStories.net and The Prompt fame. He was expressing frustration with elements of his posts not being appropriately captured in the RSS feed and presented in feed readers. This, of course, isn’t really the fault of the feed readers; they are conforming to a specification. It’s the fault of the positively antediluvian specifications for RSS 2.0 and Atom 1.0. Yes, those date to 2003 and 2005, respectively. This is in no way, shape, or form keeping current with what “The Web” is these days, or what is in a web page these days. As I joked about in the previous post, the W3C Atom validation widget said that <figcaption> was not a valid HTML tag. Ahem.

The modern web uses <script> tags for everything. There is JavaScript all around you (like roaches) and it does all kinds of neat tricks on the pages you visit. Sure, there’s the scary, ad-tracking, privacy-invading kind, but there’s also stuff like pretty animations and swell graphs. Federico wanted to return to truncating his feed for this very reason. He went through so much trouble putting together his site and its features that it was anathema to him to have it all neutered by the requirements of feeds.

Days before that, Casey Liss (Who the hell is Casey?) complained on Twitter about the bits and pieces of tags that feed readers wouldn’t render either. Of course, it was also the <script> tag.

Why are things this way? How can this specification be so criminally neglected, and misunderstood? I posit that it’s precisely because of “Web 2.0”, and more recent events. Small startups with publish/subscribe features appeared to push and display content for us. Things like Blogger and Google Reader fell out of fashion in favor of things like Facebook and Tumblr. After all, RSS and Atom are files, not protocols for synchronizing data. They have no social features whatsoever. You can’t click a button in your RSS reader of choice and have the feed owner get a ‘like’ back. That’s the kind of vanity people expect these days. It’s better to be in a walled garden than to be a wallflower. While Twitter and Tumblr initially supported exporting flattened data to RSS feeds, use of those features declined because users could not interact with the services through them. Then services started to phase out RSS (Tumblr still has some limited RSS support).

The whole reason I moved off of Tumblr was because they were truncating posts in my feed. Not just a little truncated, mind you; there usually wasn’t even a full sentence in the entry in the feed. When I contacted their support, they plainly told me they had no interest in doing anything about it. I can see why they wouldn’t: advertising revenue comes from the dashboard, so everyone should read Tumblr blogs on the dashboard. Feedpress even ran into RSS support problems when Tumblr quietly nixed the ability to redirect to another feed. This particular frog was starting to feel the water boil.

Dining on Ashes

Much like an old house, there are layers that have been patched, ripped off, gutted, plastered, drywalled, and rewired. RSS really started as the project of Dave Winer (a person who has managed to stumble backwards into nearly every controversy you can imagine) when he was at UserLand. Netscape co-opted his work with some custom headers and called it RSS. Dave and Netscape went back and forth about what to do with it for a while. Eventually, RSS 2.0 was released. Then, that was it.

Atom was a project started to fix quirks of RSS that had arisen because of its percolating development. It was going to be the new, hot thing. No restrictions to the past, a clean break and a path to the future!

Only problem is that no one has touched either in aeons. Indeed, their web sites are the very model of inactive development. A part of the website for RSS, hosted by Harvard Law, boasts how it was converted to static files so it was easier to keep serving the files forever. I mean, that’s nice and all, but shouldn’t that be a red flag that nothing is happening?

Alpha Google and Omega Google

Way back at Google’s inception, they used to let engineers run amok and make novel things. One of those things turned out to be Google Reader. It used the Atom format internally, converting incoming RSS to Atom and using private APIs to manage things. People tapped into those over time. When Google killed it, there was a great Cambrian explosion as people took their feeds off to other services. Many bloggers hailed this as a renaissance for feeds. How exciting to think of managing feeds! (I guess?)

I largely ignored all of this because I didn’t like the stripped-down formatting in feeds, and unread badges. I was more than happy to just go to each bookmark, look for new stuff, and read it on the site. Later, I mixed links from Twitter into the process. Indeed, to this day, I read most of the material I see online from links that other people post on Twitter.

Where the hell is Donatello?

Where’s the fucking renaissance? I see none of that early exuberance yielding any advancement of any format for streams of syndicated content. Whatever classification you want to assign to it — RSS, Atom, anything else — nothing is happening. What I do see, and hear on podcasts, are people complaining about improperly formatted XML that doesn’t conform to 2003 and 2005 specifications. Indeed, today I listened to Myke Hurley’s excellent CMD+Space with Padraig Kennedy and Oisin Prendiville of Supertop. Padraig and Oisin lamented that the best influence on podcast feeds was iTunes guaranteeing some lowest common denominator of conformity. That is hardly a ringing endorsement of Apple’s influence on the market, but it is a realistic assessment: without Google, it’s basically just Apple’s managed feeds of audio holding the line at a bare minimum.

And yet, there are still services built to synchronize these outmoded, meticulously-manicured menageries in perpetuity. There is no life left in these formats. They are operating on support systems pumping social-air in to them, carrying read-waste away.

These standards are no more. They have ceased to be.

United Federation of Something–Something

From the above description, the angry, pitchfork-equipped villager might incorrectly infer that the onus is on a singular party to step forward and fix all this mess. That some stranger will wander into town, the ‘sheriff’ badge shall be affixed to him, or her, and they will right the wrongs. That won’t solve anything; then you just have another Twitter, or another Facebook. Even when the goals are lofty, like they were for Diaspora (Oh look at me, remembering things) or for App Dot Net. There needs to be a body of people interested in moving things forward for many players in the market. Singular sources of propulsion are simply not going to get anyone anywhere. We need federated protocols and services (it’s just as exciting as it sounds!) where no singular entity can absorb the entirety of the market (Google Reader, iTunes), but nothing so Linuxy that it can’t get anything accomplished. That is easier said than done, however, because people are horrible. Look at you! You’re already thinking this won’t work, so don’t judge me for also thinking this won’t work.

A Serious Sketch

Moving on with A Serious Sketch, or an ASS, let’s break out a few requirements that we may feel are obvious these days but were not obvious when these specifications were engineered.

  1. Data for a reader is synchronized across all of a reader’s devices.
  2. Updates from writers/podcasters/artists can be pushed incrementally, rather than in one, monolithic feed refresh. People could still host static feeds that get polled for changes, but the option to push changes would benefit many. Think about push and non-push email on your phone. Both can co-exist, one is for serious business.
  3. Anonymous reader statistics can be part of a platform because the reality is that it is important to know, at the very minimum, reader counts for ad-supported content. (And vanity.)
    1. Update: Dan Benjamin had a tweet about RSS after this originally posted, stating that unique subscriber counts were important. I was speaking broadly above, but there’d need to be some very specific metrics provided in order for any system to not only be useful, but also uniform.
  4. Social features, like assigning some gratitude-based star, or heart, widget to incrementally praise someone and to show other potential readers that they might also find the content interesting based solely on the amount of aggregated interest in it by other readers. (And vanity.)
  5. Real support for every HTML tag, CSS selector, and every feature. Sure, there will always be room for people to use mobilizers, to make certain sites legible, but we should not start from the assumption that no one should be given the chance to show a full, “web” experience.
  6. No more handwritten XML. Make serious, fucking tools for generating feeds. No ambiguity about what a field is for, or about which optional tags can be used. Atom is a decent start in this regard but there is no “make an Atom feed” kit.
  7. Break the “feed” up into multiple files by default. No more singular-file feeds that hit a 150 kb wall of sadness. The web already serves itself as standalone documents on demand; there is no reason feed XML can’t be served, as needed, the same way.
  8. Do not rely on timestamps to update the content! Especially if content will be rich, and not the plainest of plain text. We have so many diverse mechanisms to tell if a file is different from another file, or for a file to report it is different by sharing very minimal amounts of data. There is no reason to rely on someone updating a totally optional field, or expecting a reader’s client to go above and beyond the expectation of the specification.
  9. Real, functional, individualized password protection. There are some loopy, buggy ways to password protect feeds right now. This has certainly played a part in shaping content on the internet to be primarily ad-driven instead of pay-wall. If the feed is in files, then it’s even easier to allow incremental access, or to allow other kinds of access. Think about Dropbox shared URLs that get generated, and revoked, at a user’s whim. No one can really host a feed through Dropbox though. Even if you could you wouldn’t have any authentication portal that RSS readers, or podcast clients, would recognize. You’d need to have some way to say, “Hey, here’s some fucking password protected shit.”
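Point 8, at least, is trivially cheap to do. Here’s a sketch of timestamp-free change detection using nothing but a content digest (the function names and the id-to-digest index are my invention, not any spec’s):

```python
import hashlib

def entry_fingerprint(entry_html):
    # a stable digest of the rendered entry; any edit changes it
    return hashlib.sha256(entry_html.encode('utf-8')).hexdigest()

def changed_entries(seen, entries):
    """seen maps entry id -> last known fingerprint.
    Returns the ids whose content actually differs; no pubDate required."""
    changed = []
    for entry_id, html in entries.items():
        digest = entry_fingerprint(html)
        if seen.get(entry_id) != digest:
            changed.append(entry_id)
            seen[entry_id] = digest
    return changed
```

A reader that stores the digests never has to trust an optional, hand-maintained timestamp field again.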

See that stuff? It’s not crazy stuff! The word “computer” in my degree is followed by “animation”, not “science” or “engineering”. Surely someone with actual brains could reason things through better than I. Think of all the services that we use today that do that stuff! Think about the fact that when you read an RSS, or Atom, feed you’re basically doing one of the most archaic things you can do with digital devices. Even email support for rich content is 500,000x ahead of what your feed reader of choice can do, and that’s email. You might as well subscribe to newsletters! You’d see feature development occur at a faster pace. (Anything’s faster than 0 though.)

2014-05-28 00:52:00

Category: text


If Wishes Were Yaks

Last night, I had a big post about Yak Barber, and how it had reached a point where it was functional enough to post about. However, after I had posted it, I kept fiddling with it anyway. My milestone of ‘well, the Atom feed is broken’ wasn’t good enough for myself. I ended up staying up several more hours trying to do the thing I said I would wait to do. Even after that, I couldn’t fall asleep immediately because I was thinking of all the spaghetti code I want to yank out (yak out?).

Self Critique

Importing the settings from the settings.py file relies on the imp module. This is so I can have a command-line argument to specify a different location. If I just hardcoded a path, it would work, but you’d always use the same path. Python will gladly import the settings.py file. It’ll also gladly complain that that’s not really how imports are supposed to work. So kudos for whining.

The settings that do get read in are then piped to local variables. This is in the event that I need to override something locally with future command-line args. It will also throw a warning if any of these required variables are missing.

root = settings.root
webRoot = settings.webRoot
contentDir = settings.contentDir
templateDir = settings.templateDir
outputDir = settings.outputDir
sitename = settings.sitename
author = settings.author
md = settings.md
postsPerPage = settings.postsPerPage
typekitId = settings.typekitId
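For what it’s worth, the imp module is deprecated on current Python (and gone in 3.12); the stdlib replacement for loading a settings.py from an arbitrary path looks roughly like this:

```python
import importlib.util

def load_settings(path):
    # load a settings.py by file path, the way imp.load_source() used to
    spec = importlib.util.spec_from_file_location('settings', path)
    settings = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(settings)
    return settings
```

Same idea, same command-line-argument flexibility, no whining from the interpreter.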

Then, because of the way Python scripts are structured, the main function is called. This triggers some steps to make sure directories are there and, if not, create them. It in turn triggers start(), which handles the broad strokes of processing the pages and moving files by delegating out to other functions. Nothing bad here, really.

Then, unfortunately, you get to the part that renders the pages:

def renderPost(post, posts):
  # note: the three-argument open(path, mode, 'utf-8') calls assume `from codecs import open`
  metadata = {}
  for k, v in post[0].iteritems():
    metadata[k] = v[0]
  metadata[u'content'] = post[1]
  metadata[u'sitename'] = sitename
  metadata[u'webRoot'] = webRoot
  metadata[u'author'] = author
  metadata[u'typekitId'] = typekitId
  postName = removePunctuation(metadata[u'title'])
  postName = metadata[u'date'].split(' ')[0] + '-' + postName.replace(u' ',u'-')
  postName = u'-'.join(postName.split('-'))
  postFileName = outputDir + postName + '.html'
  metadata[u'postURL'] = webRoot + postName + '.html'
  metadata[u'title'] = unicode(smartypants.smartypants(metadata[u'title']))
  with open(templateDir + u'/post-content.html','r','utf-8') as f:
    postContentTemplate = f.read()
    postContent = pystache.render(postContentTemplate,metadata,decode_errors='ignore')
    metadata['post-content'] = postContent
  with open(templateDir + u'/post-page.html','r','utf-8') as f:
    postPageTemplate = f.read()
    postPageResult = pystache.render(postPageTemplate,metadata,decode_errors='ignore')
  with open(postFileName,'w','utf-8') as f:
    f.write(postPageResult)
  return posts.append(metadata)

The markdown module generates a Python dictionary from the MMD-style metadata. That’s just the crap at the top of the file with colons. The first empty line, or line without colons, separates the metadata from the text of your document. I’m overriding several values in the dictionary with the variables we’ve pulled from settings.py.
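The split it performs is simple enough to sketch by hand, if you’re curious what’s going on under the hood (this helper is mine, not the markdown package’s):

```python
def parse_metadata(text):
    # MMD-style header: "Key: value" lines; the first blank line,
    # or line without a colon, ends the metadata block
    meta = {}
    lines = text.splitlines()
    body = ''
    for i, line in enumerate(lines):
        if not line.strip() or ':' not in line:
            body = '\n'.join(lines[i:]).lstrip('\n')
            break
        key, _, value = line.partition(':')
        meta[key.strip().lower()] = value.strip()
    return meta, body
```

partition() splits at the first colon only, so values like timestamps keep their own colons intact.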

If you think that’s not so bad, just wait until you see the part that makes the index pages and the Atom feed! It’s like opening a wall in your house and finding black mold. Oh boy!

def feed(posts):
  feedDict = posts[0]
  entryList = str()
  feedDict['gen-time'] = datetime.datetime.utcnow().isoformat('T') + 'Z'
  with open(templateDir + u'/atom.xml','r','utf-8') as f:
    atomTemplate = f.read()
  with open(templateDir + u'/atom-entry.xml','r','utf-8') as f:
    atomEntryTemplate = f.read()
  for e,p in enumerate(posts):
    p[u'date'] = RFC3339Convert(p[u'date'])
    p[u'content'] = extractTags(p[u'content'],'script')
    p[u'content'] = extractTags(p[u'content'],'object')
    p[u'content'] = extractTags(p[u'content'],'iframe')
    if e < 100:
      atomEntryResult = pystache.render(atomEntryTemplate,p)
      entryList += atomEntryResult
  feedDict['atom-entry'] = entryList
  feedResult = pystache.render(atomTemplate,feedDict,string_encode='utf-8')
  with open(outputDir + 'feed','w','utf-8') as f:
    f.write(feedResult)

def paginatedIndex(posts):
  indexList = sorted(posts,key=lambda k: k[u'date'])[::-1]
  feed(indexList)
  postList = []
  for i in indexList:
    postList.append(i['post-content'])
  indexOfPosts = splitEvery(postsPerPage,indexList)
  with open(templateDir + u'/index.html','r','utf-8') as f:
    indexTemplate = f.read()
  indexDict = {}
  indexDict[u'sitename'] = sitename
  indexDict[u'typekitId'] = typekitId
  for e,p in enumerate(indexOfPosts):
    indexDict['post-content'] = p
    print e
    for x in p:
      print x['title']
    if e == 0:
      fileName = u'index.html'
      if len(indexList) > postsPerPage:
        indexDict[u'previous'] = webRoot + u'index2.html'
    else:
      fileName = u'index' + str(e+1) + u'.html'
      if e == 1:
        indexDict[u'next'] = webRoot + u'index.html'
        indexDict[u'previous']  = webRoot + u'index' + str(e+2) + u'.html'
      else:
        indexDict[u'previous'] = webRoot + u'index' + str(e+2) + u'.html'
        if e < len(indexList):
          indexDict[u'next'] = webRoot + u'index' + str(e-1) + u'.html'
    indexPageResult = pystache.render(indexTemplate,indexDict)
    with open(outputDir + fileName,'w','utf-8') as f:
      f.write(indexPageResult)

You’ll notice that there are a lot of repeated keys and values from the renderPost() function. In fact, it even starts with the output the previous function generated. The way the Python version of Mustache was handling lists in the dictionary resulted in an Atom XML file with Python list syntax wedged between all the XML tags. That wasn’t cool. It needed to be looped over separately, in just the right way, to make a dictionary pystache wouldn’t barf all over. Because the feed needs to have things handled in a special way, and because the index pagination needs to happen in a special way, I have to jump through these hoops to make these separate, looped dictionaries just to make the “simple” template engine happy. That is a lot of code duplication in there setting dictionary keys.

You’ll also notice the feed has date-string shenanigans in order to meet the UTC requirement of Atom, the tantalizingly named RFC 3339 specification. Turns out, Python doesn’t really have a singular module that does this. So I get to use time, datetime, and pytz to parse the date and time, convert it between two different representations, stick an origin timezone on it, and feed it to a third-party library that actually understands what timezones are. To say that this is a deficiency of Python would be an understatement.

def RFC3339Convert(timeString):
  strip = time.strptime(timeString, '%Y-%m-%d %H:%M:%S')
  dt = datetime.datetime.fromtimestamp(time.mktime(strip))
  pacific = pytz.timezone('US/Pacific')
  # pytz gotcha: localize() picks the right DST offset; replace(tzinfo=...) does not
  ndt = pacific.localize(dt)
  utc = pytz.utc
  return ndt.astimezone(utc).isoformat().split('+')[0] + 'Z'
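In hindsight: Python 3.9’s zoneinfo module collapses the whole dance into the stdlib. That wasn’t an option in 2014, but for the record it looks like:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

def rfc3339_convert(time_string, tz='US/Pacific'):
    # parse the naive local timestamp, attach its zone, then express it in UTC
    local = datetime.strptime(time_string, '%Y-%m-%d %H:%M:%S')
    aware = local.replace(tzinfo=ZoneInfo(tz))  # unlike pytz, replace() is correct here
    return aware.astimezone(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
```

No time.mktime round-trip, no third-party library, and DST is handled for free.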

But don’t worry, that’s not the only thing the W3C Feed Validation Widget wanted to yell at me for! It is also considered “bad” to have <script>, <object>, and <iframe> tags in your XML, even if they’re inside the part that’s just HTML. This means things like YouTube embeds and Twitter’s embedded tweets need to be sanitized. It is certainly, without a doubt, not even worthwhile to use embedded tweets in the future.

After spelunking through StackOverflow all night for time problems, I got to go back and look for the best way to remove tags. It is not considered wise to use regex on XML/HTML to remove tags. Fine, eggheads, what do you recommend? Enter BeautifulSoup.

def extractTags(html,tag):
  soup = BeautifulSoup.BeautifulSoup(html)
  to_extract = soup.findAll(tag)
  for item in to_extract:
    item.extract()
  return unicode(soup)

This explains all those entries in feed() where the content key kept getting scribbled over. That isn’t the right way to do that. I should feed a list of tags to the extractTags() function, and I should write the result to a new key on the dictionary. If I did that, the sanitized version could co-exist with the original content value and I could make it part of renderPost().
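That fix doesn’t even need BeautifulSoup. Here’s a dependency-free sketch of it on the stdlib’s html.parser (the names are mine; it strips a whole list of tags in one pass and returns the sanitized markup, ready to be stored under a new key):

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Re-emits HTML, dropping the listed elements and everything inside them."""
    def __init__(self, tags):
        super().__init__()
        self.tags = set(tags)
        self.depth = 0   # > 0 while inside an element being stripped
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in self.tags:
            self.depth += 1
        elif self.depth == 0:
            self.out.append(self.get_starttag_text())

    def handle_startendtag(self, tag, attrs):
        # self-closing tags (e.g. <br/>) pass through unless stripped
        if self.depth == 0 and tag not in self.tags:
            self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if tag in self.tags:
            if self.depth:
                self.depth -= 1
        elif self.depth == 0:
            self.out.append('</%s>' % tag)

    def handle_data(self, data):
        if self.depth == 0:
            self.out.append(data)

def extract_tags(html, tags=('script', 'object', 'iframe')):
    stripper = TagStripper(tags)
    stripper.feed(html)
    return ''.join(stripper.out)
```

One call, every offending tag gone, and the original content value never gets scribbled over.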

Basically, there should be one, singular dictionary, or class instance, that carries all the information about a post. There should be one, singular dictionary, or class instance, that carries all of the posts together. Then all the functions should yank data from that one tree of data instead of generating what is, essentially, identical stuff.

In terms of benchmarks:


1939885 function calls (1925524 primitive calls) in 2.171 seconds
1939758 function calls (1925400 primitive calls) in 3.014 seconds
1939758 function calls (1925400 primitive calls) in 2.663 seconds

Processing 122 discrete text files in 2-3 seconds doesn’t strike me as especially bad. The biggest offender is — surprise — Python’s regex substitution. A teeny-tiny part of the code that removes characters from the titles to generate filenames and URLs. Oh regex! (arms akimbo)

I should have just gone to bed.

2014-05-21 15:00:00

Category: text


I'm Basically Made of Idle Threats

I have complained an awful lot about Tumblr on Tumblr. You know, because I can. I have been doing things on the side to try and get a system together that I am happy with so I can transition on to my own, self-hosted solution. This is a point of pride, not of practicality.

I had Sid yelling about how great Statamic is, Casey wrote his own blogging engine, Jerph kept talking about how great Jekyll is, several recommendations for Pelican came through… but none of them are a perfect fit for tiny, nitpicky reasons.

The simpler the system is, the less immediate control I have over the system. The level of abstraction in the code is too dense for me to flip some bits and get exactly what I want without committing time to learning some very specific things. Much of this has to do with styling. I want something very minimal, but even the most minimal templates have all kinds of options buried in them.

This is also why I have started, and stopped, writing something like 10-12 blogging engines over the last 3 years. I haven’t launched anything on any of the platforms because they were all fundamentally flawed. I was trying to do too much all at once to make any of it work. Like a Dropbox API link to pull down files and convert them using my own templating system in Pythonista, an iOS app. That was pretty nutty.

The current attempt was inspired by Casey Liss’ Camel engine, to a degree, but more by the fact that he rolled his own than anything else. The kinds of organizational changes I wanted to make to the files in his system were not ones he’d want to pull down, because this is his engine for him, and I needed to make my engine for me.

Yak Barber

I have joked about the whole yak-shaving thing before. It’s true though: this project is 100 percent, grass-fed, locally-sourced, humanely-raised, fair-trade yak-shaving. It’s up on GitHub so I can publicly sort out my mental issues in front of everyone. It’s a single Python file that handles all the processing, and another Python file that handles all the settings. When this gets stable, I don’t want to edit my script to change variables; I shouldn’t need to at that point. It houses all the assets in a /templates directory, with the assets housed under their own subdirectories. This is for the inevitable fiddling with different things that I’ll want to test later; they can all be safely housed, discretely. Taking a cue from Casey, I used Mustache for my templates instead of something more traditional, like Jinja2, or Mako, or something (or something stupid, like rolling my own again).

The real big problem is the Atom feed. Mustache doesn’t seem to want to process clean XML for it even though the HTML is fine. Unicode characters are fun, when you never have to think about them.

The styling is rudimentary, and will be quickly revisited, but only after the Atom problems are sorted.

Most of the mess in the code is from making Python dictionaries as I went along. I will hopefully clean those up. It generates my site in about 1 second though, so it’s not really a performance problem so much as it’s just ugly. Posts should really be a class, with all the relevant metadata that will be needed later stored on the class.
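A sketch of the class being described (the field names are guesses based on the engine’s code; dataclasses are a Python 3.7 thing, so consider this aspirational for 2014):

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """One post, carrying everything the render/index/feed steps need."""
    title: str
    date: str              # 'YYYY-MM-DD HH:MM:SS', as in the metadata block
    content: str           # rendered HTML body
    extras: dict = field(default_factory=dict)  # any leftover metadata keys

    def slug(self):
        # date prefix plus hyphenated title, mirroring the engine's filenames
        day = self.date.split(' ')[0]
        return day + '-' + self.title.replace(' ', '-')
```

Every function would then pull from one list of Post objects instead of rebuilding near-identical dictionaries.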

Instead of using the codehilite extension of the Markdown package, I’m using highlight.js client-side. The few code snippets I have just don’t merit the fuss for setting the code type, and using particular delimiters just to satisfy that extension.

A preview of the ultra-minimalist-but-still-doesn’t-work site is here on the DigitalOcean VPS that houses prompt.photos.

2014-05-20 22:36:00

Category: text


Content Only Available Here — For One, Low, Monthly Fee

Bradley Chambers wrote a pretty nice little thingy on his bloggy-mah-jig entitled Content As A Service [sic]. He rationalizes the transition we’re making as a culture away from ownership of physical media to service payments for access to content libraries. The comparison is drawn with renting physical media from places like Blockbuster, and our cyclical purchasing of the same media when new, improved media formats became available (VHS → DVD).

His comparisons are sound, except they are not completely analogous to what we have in our current digital streaming system. Imagine Amazon Prime Video, the iTunes Store, Netflix, Hulu Plus, Crackle (LOL! Crackle!), Flixster, etc., each as physical stores. Now imagine selecting which of those stores you’d pay monthly for access to their media libraries. Now imagine that no one store carried all the movies and TV shows you wanted to watch. You know you can’t get by on Netflix alone, so you’ll pay another. You’ll have some content that is identical between the two, but other content that is different. You’ll have content that rotates its availability in each store. There’s no concern about something being checked out; there are infinite copies when something is available. Now just pay forever and hope that you have enough content that doesn’t overlap to justify each service payment. The stores each have a vested interest in having content that is exclusive to their platform to make sure you pay.

Streaming services are definitely not a commodity; they are not full-access retailers. It can feel like they all cost you about the same right now, but they have artificial limits between them that make them more like department stores that carry brands unique to them. Remember Montgomery Ward? Sears’ Kenmore line? Your local mattress store that sells just slightly different mattress models from the other local mattress store? Differentiating products is fun!

Content owners, the ones that license access to their content to the online services, which in turn license access to you, are also mercurial. When a movie in a franchise is about to come out, associated content may be pulled off some of the shelves and made available for purchase only. That’s not something that would have happened with Blockbuster.

Now, let’s move on to another issue: there is no lending or resale. There are no bins of old movies at garage sales at a discount, there is no way to let your coworker watch Saving Silverman. You have to tell them to go look in their store themselves, and hope they subscribed to something which has the movie, or book, available.

This is also what’s killing new media at your public library, and what will eventually strangle them. Physical media was easy for a public library to catalog and lend. There were rules that favored the public interest of making media available to the populace at large. The electronic systems for lending digital versions are not up to snuff. However, in a social and cultural climate where even liberal-minded people have no interest in public libraries, it is of little concern. It’s relatively cheap, and easy, to get digital media, we tell ourselves. For now.

What Could Go Wrong With Monthly Payments?

Think about everything in your life that you pay for on a monthly basis. Your electrical bill, your cable bill, your internet access, your wireless bill, fees, taxes … Now think about how none of those rates climb at a rate you feel is unwarranted for the quality of service you receive.

Oh, you mean you do feel some of those climb at a rate you feel is unwarranted? But it’s so easy, you’re just subscribing, right?

What could go wrong with paying a small number of stores a monthly fee? It’ll be so small — at first.

Let’s look at cable. Cable providers pay for access to content to provide you with channels. They do not do this a la carte; there are packages, just like movie and television studios have overall distribution deals with Netflix, Amazon, etc. When the cable provider in your area has a disagreement with a content provider, sometimes you lose channels for an indeterminate period of time. Also: the rate you are paying will increase steadily over time, even in the cases where they do work out agreements.

Most people pay a cable provider for internet access, and cable providers are already starting to squeeze digital content from the other end. Netflix very publicly raised its rates because of this. They’ve raised them before to try and pay for content. So it’s just going to keep increasing: access payments, content payments, and the subscription fees customers pay. Sure, you don’t have to worry about owning a copy of Who’s Afraid of Virginia Woolf?, but is that copy going to be worth the cumulative monthly payments you will make over the course of your life?

I’m Not 💩-ing Over Everything

Streaming is still the direction we’re heading in as a society, and perhaps, some day, in some distant future, more rational legislation will allow for something that will feel fair to all parties. Maybe it’ll just be robot death squads. That’d be cool too. Then I wouldn’t have to pay to stream Terminator 2.

2014-05-20 10:43:00

Category: text


Podguilts and Podguiltcatchers

I finished listening to Jordan Cooper’s second-latest episode of Tech Douchebags (I would pick another name, but that’s irrelevant now), in which he interviewed Ÿoeff Rewburg about podcast listening. Hæff is my fake nemesis on Twitter – we both like to mildly antagonize one another. Gueph has gained some notoriety for the sheer volume of content he listens to, and infamy for how he listens to podcasts at an accelerated playback rate. I, too, have podcast-listening feelings, apparently.

Jordan interviewed ¥€ff like a pro. He represented the questions the audience would have asked, but also made some statements about his own, personal listening habits – the kinds of assessments we, the audience, should make of ourselves.

Many months ago, I wrote Terrible Podcast Screenplays as a joke. I have not been actively updating it since early January. I simply haven’t found time for more podcasts in my daily routine. That is strange, because I want to listen to more podcasts. I even have podcasts queued up for aspirational listening. One day, I’ll get to them. (SMASH CUT TO: Joe emerging from a pile of rubble with his iPhone. ‘Now I’ll have all the time in the world!’ iPhone runs out of battery power, and eardrums explode.)

Honestly, these are the only ones I listen to weekly, or nearly weekly:

As you may notice, I wrote fake screenplay reviews for all of those except Your Daily Lex Luthor, and Supertrain. There are a few Incomparable episodes that I have skipped because I haven’t watched, or read, what was discussed, and want to do so before hearing them talk about it. (Although I still haven’t seen The Aviator and I listen to Back to Work, so what kind of monster does that make me?)

My aspirational list where I listen to episodes infrequently:

My super-crazy aspirational list, where I want to pretend like I’m cultured:

This is why I playfully mock ⓙ③Ⓕⓕ. Deep down, I want to just 2x all of these right in to my auditory canal. To Matrix-style download all the new episodes so that I’m never behind. The feeling of being behind — as Jordan puts it: of missing out — is unpleasant. I like the dopamine hit when I’m on top of everything. Jordan says that he prunes his list every now and then for this reason, so that he’s once more, on top of everything. However, we both do the same thing after that: We load back up on new podcasts.

Theoretically, this should be easier than television, because “DVR” time shifting is a fundamental part of the medium of podcasting. A podcast can never be live. A stream of a podcast recording session can be live, like ATP, or Back to Work, but those aren’t the podcasts that wind up in your podcast listening app of choice, technically. This causes me even more, totally unnecessary guilt because I’ll want to listen to the live stream, but often can’t immediately reserve an uninterrupted block of time for it right then. What does it matter? A cleaner, edited version will magically appear in Downcast before I know it, and I already have podcast episodes I haven’t listened to yet. Shouldn’t I be listening to those?

What happens when I have an opening in my schedule to power through a bunch of episodes? I power through them by podcast series, not by some universal, un-listened-to playlist. So then I have even larger backlogs of some of the aspirational ones. I could listen to them all in one chronological list, but I might not want to hear them in the order they were released.

Like Ffej and Jordan, I struggle with when to acknowledge that I am just not going to get caught up on a podcast. When do I just remove the red badge of guilt from the feed?

This is Why I Can’t Use RSS Readers

My flirtation with RSS subscriptions for blogs, and web sites with feeeeds was short-lived when I realized that I couldn’t stay on top of the unread badge unless I hardly subscribed to anything. Some sites were like firehoses of crap, and other sites published irregularly. As soon as I saw the unread number, I had to click through every single one.

I have the same completionist streak when it comes to Twitter. Like ⒥⒪⒡⒡⒭⒠⒴ organizes his podcasts into tiers and lists, I organize the people I follow on Twitter into lists. I will start at the last read tweet for the list ‘tech’ and work my way up. Then, if I run out of ‘tech’, I pull up lists of real-life friends, and then finally, when I run out of everything, I go to the full timeline view and read that. This is what crazy looks like, right? Not a cat lady, but a guy with organized Twitter lists and a fear of unread numbers in his RSS feeds? Oy gevalt.

At least two guys I follow on a twitter list posted YET ANOTHER podcast about podcasts to make me think about how guilty I feel about my unlistened, unread guilt. Way of the Future.

2014-05-19 14:14:00

Category: text


Adventures in Server Administration III: World's Greatest Redirect

The gents at The Prompt released a shirt for their podcast (I bought it, bro). People started sending in selfies, while wearing the shirt, to the show (not me, braaahh). In yesterday’s episode, Stephen Hackett announced he had started a photo gallery page to collect them on his site.

I contacted Stephen and asked if he’d like me to provide a redirect of the selfies from the joke URL, prompt.photos. As loyal reader(s) may recall, that was the joke project that got me started on this whole learning-web-hosting adventure.

Stephen asked for “/selfies” to be the redirect. Sure thing, I told him, no problem!

Problem

I couldn’t find documentation for how to implement a redirect in Twisted for a subdirectory while leaving the rest of the static site hosting alone. I looked on StackOverflow, and Twisted’s docs. I could find stuff for redirects, but it was all part of code blocks that dynamically generated content, which wasn’t my situation. I logged in to Twisted’s IRC chatroom on Freenode, and asked there. I also started a StackOverflow account and asked there. Then I watched Archer since it didn’t seem like anyone was going to reply quickly.

Someone Replied Quickly

Not just anyone: Glyph, the Twisted guy. He replied in the chat, but I missed it over the sound of cartoon violence. He even found my question on StackOverflow and answered it. Seriously impressed that he went out of his way to be so nice to a n00b.

from twisted.web.util import Redirect
fooDotComResource.putChild("bar", Redirect("http://foobar.com/bar"))

An import, and one line. Hard to screw that up.

Unfortunately, I didn’t understand how to apply his answer. I mistakenly tried to apply it to the site code at the bottom of the settings.py file, where the “application” is generated. It didn’t work, so I went to bed. This morning, it dawned on me that the redirect belonged on the root class: everything else just modifies root, not what’s generated from root. Sure enough, that worked. Thanks Glyph, you are l337.

Also, it took 12 hours for me to pull off this joke. Now that’s what I call comedic –

– timing.

2014-05-15 11:40:51

Category: text


Finally. — Liss is More

via

I like to bother Casey on Twitter (“💀 in a 🔥” is an achievement), but there’s no real snark. He’s a good guy, which is why I’ve blogged about how Camel inspired me, and I made him the star of the ATP podcast screenplay. I like it when good things happen to people that seem – from my distant vantage point – to deserve it. This is the best, saddest, most uplifting thing I’ve read in a long while. 🌱

2014-05-14 10:42:26

Category: link


The Unholy Chimera of Pelicamel

Hashtag Gee Tee Dee

As I’ve detailed many times before: I want to move off of Tumblr. I have a chronic fixation upon the act of starting to move. I have half-written, hand-crafted, artisanal blogging scripts that do work —more or less— but I haven’t followed through on a single thing. I’ve dabbled with Jekyll, Octopress, Pelican, WordPress, etc. I just don’t like any of them all that much. I am pretty convinced, at this point, that rolling my own will ultimately be where I wind up just so I can have every little thing exactly as I want it. I’ll just need to overcome the nigh insurmountable hurdle of my lack of education in software.

This is why I usually jump at the chance to write small scripts. Things that can move just enough data —connnnnnnntent— to make me feel like I have accomplished something, but not so large a project that I pull the rip-cord on my procrastination parachute.

Joe Steel: Poorly Skilled Software Developer

I’ve mentioned, jokingly, that I have a BFA in Computer Animation — that’s really what the degree is in — but that’s not always a “funny-ha-ha” joke. It’s a way of explaining that I am ill-suited to software development of any scale. This is why I like to take on small tasks to actually give myself just enough to do to learn something.

These are the reasons why I’ve been experimenting with Casey Liss’ Node.js app, Camel. It’s a pretty succinct app (as long as you don’t research all the dependencies’ dependencies’ dependencies) so it’s actually going to serve as something useful for me in an educational way, if nothing else.

I can’t write JavaScript though. I tried. I really, really, really tried. I’m going to have to put in more effort though because that is obviously the direction to head in if you’re looking for flexibility and the neatest, most modernest, hipstery stuff.

For now, I was content to make a content converter script. It’s a little thing, and I knocked it out in a few hours. (Shut up, PROS, I know this is like 10 minutes of code for you.)

I had tried Pelican recently, because of its Tumblr import utility. Unfortunately, I found the utility to be half-baked. It parses through your perfectly adequate HTML from Tumblr and tries to turn it into reStructuredText, or Markdown. I modified the script so it would stop doing that, and I captured all of my posts. Unfortunately, Pelican is a little rough. It made a total mess of the index pages when it tried to process stuff, and even the things it got right had the weird appearance of looking like I had posted them, instead of making it more obvious that they are Tumblr reblogs.

With all this data, I could just pipe it into other blogging platforms, right? No. It seems every blogging platform has its own, slightly unique system for storing files and file metadata. Camel’s metadata is one of the stranger ones. I made a conversion script that copies the files, sets up the expected folder hierarchy, and converts the metadata.
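The core of that conversion, as a hedged sketch: the `@@ Key=Value` metadata headers and the year/month/day folder layout are my reading of Camel’s README, so double-check against the real thing before trusting it. The function name and field choices are mine:

```python
import os

def write_camel_post(dest_root, date, slug, title, body):
    """Drop one converted post into what I understand to be Camel's
    layout: posts/yyyy/mm/dd/slug.md, with '@@ Key=Value' metadata
    lines at the top of the file."""
    # date is an ISO-ish string like '2014-05-10 02:51'
    folder = os.path.join(dest_root, date[:4], date[5:7], date[8:10])
    os.makedirs(folder, exist_ok=True)
    path = os.path.join(folder, slug + ".md")
    with open(path, "w", encoding="utf-8") as f:
        f.write("@@ Title=%s\n" % title)
        f.write("@@ Date=%s\n\n" % date)
        f.write(body)
    return path
```

Everything else in the script is just walking the Tumblr export and mapping its metadata fields onto those `@@` lines.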

This isn’t the kind of thing I see anyone using, but I wanted to see what putting it up on GitHub would be like. I’ve used gists on GitHub, but I’ve never needed a project. I had a private repository with a friend on Bitbucket, for one of the blogging component experiments, but Bitbucket is kind of weird.

I even did all the command-line git stuff to put it up. That’s quite an accomplishment for me, because I come from a background where version control is just incrementing an integer in a filename.

Anyway, at this rate, I should have 10,000,000 pointless scripts and no blog.

2014-05-10 02:51:00

Category: text


Chicken Little - With Beats Audio™

Another retirement was announced at Apple and people clutched their pearls. What could it mean? They must not like it there! That means things are bad! There’s more than one person that is leaving! That must be really bad!

Then everyone forgot about that crisis by the afternoon because there was a new rumor that Apple was buying Beats. Twitter has been apeshit since then. Otherwise normal, rational human beings are losing their minds. Everything from assuming Beats branding will be on products, to assuming that Apple is buying them for hardware —which has been near-universally panned by the judgy-judges, despite most acknowledging they have no firsthand experience with said Beats products. It’s like it’s an election year and your party didn’t win, so you’re declaring you’re moving to Canada —again.

Pull out of this nosedive and unclench your bowels.

Do you know what happened yesterday? The sun came out.

Do you know what happened today? The sun came out.

Do you know what happens tomorrow? The sun will come out.

It’s fine to have theories, to guess why something is happening, but the sky isn’t falling. That is not to suggest people should be apathetic. That they can’t have feelings. After all, last week I lambasted Comixology on the basis of their actions. Actions are different from theories cooked up on Twitter, based off of a rumor that simply states a company will be acquired. Who the hell knows what will happen? Tim Cook could just as easily adopt the branding as shutter it all. After all, this acquisition will hurt current Beats partners.

If anything, I think this is a symptom of cognitive whiplash. People that have spent many years formulating their deeply held, personal beliefs about Beats Audio are face-to-face with the prospect of Apple acquiring the company they hate. How much of this deep, personal conviction has to do with audio, and how much of it has to do with Beats branding appearing on PCs and Android devices, will vary from person to person.

A quick, and easy, way to avoid having to back out of etched-in-stone opinions is to not have etched-in-stone opinions.

My own personal bias is that I just wasn’t ever going to pony up the dough for their stuff, but I also don’t burn money on Apple earbuds/earpods either. It’s not one vs. the other. It’s not even about the best ones money can buy, or the best value. I’m just not that demanding in this area. I did try their music app on my iPhone when people were lauding the service at its premiere, but I was turned off by the selection mechanism, and by the way the app breathes battery life like air. So no, I don’t love them. I’m just not going to flip my shit. I neither look forward to it, nor shun it, until the waveform collapses — when there is action.

What is all this teeth-gnashing getting us, other than a bigger bill from our dentists, and a lot of headaches?

2014-05-09 09:36:00

Category: text