Unauthoritative Pronouncements

Subscribe
About

If Wishes Were Yaks

Last night, I had a big post about Yak Barber, and how it had finally reached a point that was functional enough to post about. However, after I had posted it, I kept fiddling with it anyway. My milestone of ‘well, the Atom feed is broken’ wasn’t good enough for me. I ended up staying up several more hours trying to do the thing I said I would wait to do. Even after that, I couldn’t fall asleep immediately because I was thinking of all the spaghetti code I want to yank out (yak out?).

Self Critique

Importing the settings from the settings.py file relies on the imp module. This is so I can have a command-line argument to specify a different location. If I just hardcoded a path, it would work, but you’d always be stuck with the same path. Python will gladly import the settings.py file this way. It’ll also gladly complain that this isn’t really how importing is supposed to work. So kudos for whining.
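
For what it’s worth, if I ever swap out imp, importlib is the non-deprecated spelling of the same trick. A minimal sketch of loading settings from a path given on the command line (the function name here is my own, not from Yak Barber):

```python
import importlib.util

def load_settings(path):
    """Import a settings.py from an arbitrary file path, so a
    command-line argument can point at a different settings file."""
    spec = importlib.util.spec_from_file_location('settings', path)
    settings = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(settings)
    return settings
```

Back then, imp.load_source('settings', path) did the same job in one line.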

The settings that do get read in are then piped to local variables. This is in case I need to override something locally with future command-line args. It will also throw a warning if any of these required variables are missing.

root = settings.root
webRoot = settings.webRoot
contentDir = settings.contentDir
templateDir = settings.templateDir
outputDir = settings.outputDir
sitename = settings.sitename
author = settings.author
md = settings.md
postsPerPage = settings.postsPerPage
typekitId = settings.typekitId
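
The “warn if missing” part could be one loop instead of ten assignments failing one at a time. A sketch (the names come from the block above; the function itself is hypothetical):

```python
import warnings

REQUIRED_SETTINGS = ['root', 'webRoot', 'contentDir', 'templateDir',
                     'outputDir', 'sitename', 'author', 'md',
                     'postsPerPage', 'typekitId']

def check_settings(settings):
    """Return the names of required settings missing from the
    module, warning about each one."""
    missing = [n for n in REQUIRED_SETTINGS if not hasattr(settings, n)]
    for n in missing:
        warnings.warn('settings.py is missing required variable %r' % n)
    return missing
```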

Then, because of the way Python scripts are structured, the main function is called. This triggers some steps to make sure the directories are there and, if not, create them. It in turn triggers start(), which handles the broad strokes of processing the pages and moving files by delegating to other functions. Nothing bad here, really.

Then, unfortunately, you get to the part that renders the pages:

# note: open() below takes an encoding argument because it's presumably codecs.open
def renderPost(post, posts):
  metadata = {}
  for k, v in post[0].iteritems():
    metadata[k] = v[0]
  metadata[u'content'] = post[1]
  metadata[u'sitename'] = sitename
  metadata[u'webRoot'] = webRoot
  metadata[u'author'] = author
  metadata[u'typekitId'] = typekitId
  postName = removePunctuation(metadata[u'title'])
  postName = metadata[u'date'].split(' ')[0] + '-' + postName.replace(u' ',u'-')
  postName = u'-'.join(postName.split('-'))
  postFileName = outputDir + postName + '.html'
  metadata[u'postURL'] = webRoot + postName + '.html'
  metadata[u'title'] = unicode(smartypants.smartypants(metadata[u'title']))
  with open(templateDir + u'/post-content.html','r','utf-8') as f:
    postContentTemplate = f.read()
    postContent = pystache.render(postContentTemplate,metadata,decode_errors='ignore')
    metadata['post-content'] = postContent
  with open(templateDir + u'/post-page.html','r','utf-8') as f:
    postPageTemplate = f.read()
    postPageResult = pystache.render(postPageTemplate,metadata,decode_errors='ignore')
  with open(postFileName,'w','utf-8') as f:
    f.write(postPageResult)
  return posts.append(metadata)  # note: list.append() returns None, so this returns None

The markdown module generates a Python dictionary from the MMD-style metadata. That’s just the crap at the top of the file with colons. The first empty line, or line without a colon, separates the metadata from the text of your document. I’m overriding several values in the dictionary with the variables we’ve pulled from settings.py.
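
That convention (key: value lines at the top, ended by the first blank or colon-less line) is simple enough to sketch by hand. This is just an illustration of the format, not the markdown module’s actual meta extension:

```python
def parse_metadata(text):
    """Split MMD-style metadata from the body: keys are the lines
    with colons at the top; the first empty line, or line without
    a colon, ends the block."""
    metadata = {}
    lines = text.splitlines()
    body_start = len(lines)
    for i, line in enumerate(lines):
        if not line.strip() or ':' not in line:
            body_start = i
            break
        key, _, value = line.partition(':')
        metadata[key.strip().lower()] = value.strip()
    return metadata, '\n'.join(lines[body_start:]).lstrip('\n')
```

The real meta extension stores each value as a list of strings, which is why renderPost() has to do metadata[k] = v[0].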

If you think that’s not so bad, just wait until you see the part that makes the index pages and the Atom feed! It’s like opening a wall in your house and finding black mold. Oh boy!

def feed(posts):
  feedDict = posts[0]
  entryList = str()
  feedDict['gen-time'] = datetime.datetime.utcnow().isoformat('T') + 'Z'
  with open(templateDir + u'/atom.xml','r','utf-8') as f:
    atomTemplate = f.read()
  with open(templateDir + u'/atom-entry.xml','r','utf-8') as f:
    atomEntryTemplate = f.read()
  for e,p in enumerate(posts):
    p[u'date'] = RFC3339Convert(p[u'date'])
    p[u'content'] = extractTags(p[u'content'],'script')
    p[u'content'] = extractTags(p[u'content'],'object')
    p[u'content'] = extractTags(p[u'content'],'iframe')
    if e < 100:
      atomEntryResult = pystache.render(atomEntryTemplate,p)
      entryList += atomEntryResult
  feedDict['atom-entry'] = entryList
  feedResult = pystache.render(atomTemplate,feedDict,string_encode='utf-8')
  with open(outputDir + 'feed','w','utf-8') as f:
    f.write(feedResult)

def paginatedIndex(posts):
  indexList = sorted(posts,key=lambda k: k[u'date'])[::-1]
  feed(indexList)
  postList = []
  for i in indexList:
    postList.append(i['post-content'])
  indexOfPosts = splitEvery(postsPerPage,indexList)
  with open(templateDir + u'/index.html','r','utf-8') as f:
    indexTemplate = f.read()
  indexDict = {}
  indexDict[u'sitename'] = sitename
  indexDict[u'typekitId'] = typekitId
  for e,p in enumerate(indexOfPosts):
    indexDict['post-content'] = p
    print e
    for x in p:
      print x['title']
    if e == 0:
      fileName = u'index.html'
      if len(indexList) > postsPerPage:
        indexDict[u'previous'] = webRoot + u'index2.html'
    else:
      fileName = u'index' + str(e+1) + u'.html'
      if e == 1:
        indexDict[u'next'] = webRoot + u'index.html'
        indexDict[u'previous']  = webRoot + u'index' + str(e+2) + u'.html'
      else:
        indexDict[u'previous'] = webRoot + u'index' + str(e+2) + u'.html'
        if e < len(indexList):
          indexDict[u'next'] = webRoot + u'index' + str(e-1) + u'.html'
    indexPageResult = pystache.render(indexTemplate,indexDict)
    with open(outputDir + fileName,'w','utf-8') as f:
      f.write(indexPageResult)
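
The prev/next tangle at the bottom is the part I most want to yank out. A sketch of how the links could be computed in one place (a hypothetical helper; 0-based page numbers, with ‘previous’ meaning older pages, as in the original):

```python
def page_links(page, total_pages, web_root):
    """Compute the prev/next URLs for one index page.
    Page 0 is index.html; page N (N >= 1) is index{N+1}.html."""
    def name(p):
        return u'index.html' if p == 0 else u'index%d.html' % (p + 1)
    links = {}
    if page > 0:
        links[u'next'] = web_root + name(page - 1)      # newer posts
    if page < total_pages - 1:
        links[u'previous'] = web_root + name(page + 1)  # older posts
    return links
```

This also sidesteps the bound check in the code above, where e < len(indexList) compares a page number to the number of posts.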

You’ll notice that there are a lot of repeated keys and values from the renderPost() function. In fact, it even starts from the output the previous function generated. The way the Python version of Mustache was handling lists in the dictionary resulted in an Atom XML file with Python list syntax wedged between all the XML tags. That wasn’t cool. The data needed to be looped over separately, in just the right way, to build a dictionary pystache wouldn’t barf all over. Because the feed needs special handling, and because the index pagination needs special handling, I have to jump through hoops to build these separate, looped dictionaries just to make the “simple” template engine happy. That is a lot of code duplication spent setting dictionary keys.
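
Most of that duplication is the same handful of site-wide keys being set over and over. A sketch of setting them once (the helper is hypothetical):

```python
SITE_KEYS = (u'sitename', u'webRoot', u'author', u'typekitId')

def base_context(settings, **extra):
    """Build the site-wide part of a template context once, so
    renderPost(), feed(), and paginatedIndex() can stop copying
    the same keys into every dictionary."""
    context = {k: getattr(settings, k) for k in SITE_KEYS}
    context.update(extra)
    return context
```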

You’ll also notice the feed has date-string shenanigans in order to meet the UTC (Greenwich Mean Time) requirement of Atom, per the tantalizingly named RFC 3339 specification. It turns out Python doesn’t really have a single module that does this. So I get to use time, datetime, and pytz to parse the date and time, convert it between two different formats, stick an origin timezone on it, and feed it to a third-party library that actually understands what timezones are. To say that this is a deficiency of Python would be an understatement.

def RFC3339Convert(timeString):
  strip = time.strptime(timeString, '%Y-%m-%d %H:%M:%S')
  dt = datetime.datetime.fromtimestamp(time.mktime(strip))
  pacific = pytz.timezone('US/Pacific')
  ndt = dt.replace(tzinfo=pacific)  # note: pytz recommends pacific.localize(dt); replace() can attach an odd LMT offset
  utc = pytz.utc
  return ndt.astimezone(utc).isoformat().split('+')[0] + 'Z'
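
For the record, Python eventually grew a standard-library answer. The same conversion in modern Python 3 with zoneinfo, no pytz required (this did not exist when this post was written):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

def rfc3339_convert(time_string, zone='US/Pacific'):
    """Parse 'YYYY-MM-DD HH:MM:SS', attach the source timezone,
    convert to UTC, and format with the trailing 'Z' Atom wants."""
    dt = datetime.strptime(time_string, '%Y-%m-%d %H:%M:%S')
    dt = dt.replace(tzinfo=ZoneInfo(zone))  # fine with zoneinfo, unlike pytz
    return dt.astimezone(ZoneInfo('UTC')).strftime('%Y-%m-%dT%H:%M:%SZ')
```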

But don’t worry, that’s not the only thing the W3C Feed Validation Widget wanted to yell at me for! It is also considered “bad” to have <script>, <object>, and <iframe> tags in your XML, even if they’re inside the part that’s just HTML. This means things like YouTube embeds and Twitter’s embedded tweets need to be sanitized. It is certainly, without a doubt, not even worthwhile to use embedded tweets in the future.

After spelunking through StackOverflow all night for time problems, I got to go back and look for the best way to remove tags. It is not considered wise to use regex on XML/HTML to remove tags. Fine, eggheads, what do you recommend? Enter BeautifulSoup.

def extractTags(html,tag):
  soup = BeautifulSoup.BeautifulSoup(html)
  to_extract = soup.findAll(tag)
  for item in to_extract:
    item.extract()
  return unicode(soup)

This explains all those lines in feed() where the content key kept getting scribbled over. That isn’t the right way to do it. I should feed a list of tags to remove to the extractTags() function, and I should write to a new key on the dictionary. If I did that, the sanitized version could coexist with the original content value, and I could make it part of renderPost().
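
A sketch of that improved version, written against bs4 (BeautifulSoup 4, the current incarnation) and taking the tag list as an argument; the new dictionary key is made up:

```python
from bs4 import BeautifulSoup  # bs4, the successor to the old BeautifulSoup 3

FEED_UNSAFE = ['script', 'object', 'iframe']

def extract_tags(html, tags=FEED_UNSAFE):
    """Remove every listed tag in a single parse, instead of
    re-parsing the whole post once per tag."""
    soup = BeautifulSoup(html, 'html.parser')
    for element in soup.find_all(tags):
        element.extract()
    return str(soup)
```

Then feed() could read from a new key, say metadata[u'feed-content'] = extract_tags(metadata[u'content']), leaving content untouched for the HTML pages.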

Basically, there should be one, singular dictionary, or class, that carries all the information about a post. There should be one, singular dictionary, or class, that carries all of the posts together. Then all the functions should pull data from that one tree of data instead of generating what is, essentially, identical stuff.
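
Something like this, say (a sketch of the shape, not working code from the repo):

```python
class Post(object):
    """One object per post: metadata plus every rendered form of
    the content, so later functions read instead of regenerate."""
    def __init__(self, title, date, content, **metadata):
        self.title = title
        self.date = date
        self.content = content        # rendered HTML body
        self.feed_content = None      # sanitized copy for the Atom feed
        self.page = None              # full post-page HTML
        self.metadata = metadata      # everything else from the header

class Site(object):
    """All the posts together, newest first."""
    def __init__(self, posts):
        self.posts = sorted(posts, key=lambda p: p.date, reverse=True)
```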

In terms of benchmarks:


1939885 function calls (1925524 primitive calls) in 2.171 seconds
1939758 function calls (1925400 primitive calls) in 3.014 seconds
1939758 function calls (1925400 primitive calls) in 2.663 seconds

Processing 122 discrete text files in 2-3 seconds doesn’t strike me as especially bad. The biggest offender is — surprise — Python’s regex substitution: the teeny-tiny part of the code that removes characters from the titles to generate filenames and URLs. Oh regex! (arms akimbo)
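
The actual removePunctuation() isn’t shown here, but if it builds a fresh pattern per title, hoisting the compile out of the loop is the cheap win (re does cache compiled patterns, but this skips even the cache lookup). A sketch with an assumed pattern:

```python
import re

# compiled once at import time, instead of per title
_PUNCTUATION = re.compile(r'[^\w\s-]')

def remove_punctuation(title):
    """Strip characters that don't belong in a filename or URL."""
    return _PUNCTUATION.sub(u'', title)
```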

I should have just gone to bed.

2014-05-21 15:00:00

Category: text


I’m Basically Made of Idle Threats

I have complained an awful lot about Tumblr on Tumblr. You know, because I can. I have been doing things on the side to try and get a system together that I am happy with so I can transition on to my own, self-hosted solution. This is a point of pride, not of practicality.

I had Sid yelling about how great Statamic is, Casey wrote his own blogging engine, Jerph kept talking about how great Jekyll is, several recommendations for Pelican came through… but none of them are a perfect fit for tiny, nitpicky reasons.

The simpler the system is, the less immediate control I have over the system. The level of abstraction in the code is too dense for me to flip some bits and get exactly what I want without committing time to learning some very specific things. Much of this has to do with styling. I want something very minimal, but even the most minimal templates have all kinds of options buried in them.

This is also why I have started, and stopped, writing something like 10-12 blogging engines over the last 3 years. I haven’t launched anything on any of the platforms because they were all fundamentally flawed. I was trying to do too much all at once to make any of it work. Like a Dropbox API link to pull down files and convert them using my own templating system in Pythonista —an iOS app. That was pretty nutty.

The current attempt was inspired by Casey Liss’ Camel engine —to a degree— but more about the fact that he rolled his own, than anything else. The kinds of organizational changes I wanted to make to the files in his system were not ones he’d want to pull down because this is his engine for him, and I needed to make my engine for me.

Yak Barber

I have joked about the whole yak-shaving thing before. It’s true, though: this project is 100 percent, grass-fed, locally-sourced, humanely-raised, fair-trade yak-shaving. It’s up on GitHub so I can publicly sort out my mental issues in front of everyone. It’s a single Python file that handles all the processing, and another Python file that handles all the settings. When this gets stable, I don’t want to edit my script to change variables; I shouldn’t need to at that point. It houses all the assets in a /templates directory, with the assets housed under their own subdirectories. This is for the inevitable fiddling with different things that I’ll want to test later; they can all be safely housed, discretely. Taking a cue from Casey, I used Mustache for my templates instead of something more traditional, like Jinja2, or Mako, or something (or something stupid, like rolling my own again).

The real big problem is the Atom feed. Mustache doesn’t seem to want to process clean XML for it even though the HTML is fine. Unicode characters are fun, when you never have to think about them.

The styling is rudimentary, and will be quickly revisited, but only after the Atom problems are sorted.

Most of the mess in the code is from making Python dictionaries as I went along. I will hopefully clean those up. It generates my site in about 1 second though, so it’s not really a performance problem, as much as it’s just ugly. Posts should really be a class with all the relevant metadata that will be needed later stored on the class.

Instead of using the codehilite extension of the Markdown package, I’m using highlight.js client-side. The few code snippets I have just don’t merit the fuss for setting the code type, and using particular delimiters just to satisfy that extension.

A preview of the ultra-minimalist-but-still-doesn’t-work site is here on the DigitalOcean VPS that houses prompt.photos.

2014-05-20 22:36:00

Category: text


Content Only Available Here — For One, Low, Monthly Fee

Bradley Chambers wrote a pretty nice little thingy on his bloggy-mah-jig entitled Content As A Service [sic]. He rationalizes the transition we’re making as a culture away from ownership of physical media to service payments for access to content libraries. The comparison is drawn with renting physical media from places like Blockbuster, and our cyclical purchasing of the same media when new, improved media formats would become available (VHS → DVD).

His comparisons are sound, except they are not completely analogous to what we have in our current digital, streaming system. Imagine Amazon Prime Video, the iTunes Store, Netflix, Hulu Plus, Crackle (LOL! Crackle!), Flixster, etc. each as physical stores. Now imagine selecting which stores you pay monthly for access to their media libraries. Now imagine that no single store carries all the movies and TV shows you want to watch. You know you can’t get by on Netflix alone, so you’ll pay another. You’ll have some content that is identical between the two, but other content that is different. You’ll have content that rotates its availability in this physical store. There’s no concern about something being checked out; there are infinite copies when something is available. Now just pay forever and hope that you have enough content that doesn’t overlap to justify each service payment. The stores each have a vested interest in having content that is exclusive to their platform to make sure you pay.

Streaming services are definitely not a commodity; they are not full-access retailers. It can feel like they have relatively the same expense for you right now, but they have artificial limits between them that make them more like department stores that carry brands unique to them. Remember Montgomery Ward? Sears’ Kenmore line? Your local mattress store that sells just slightly different mattress models from the other local mattress store? Differentiating products is fun!

Content owners, the ones that license access to their content to the online services, which in turn license access to you, are also mercurial. When a new movie in a franchise is about to come out, associated content may be pulled off some of the shelves and made available for purchase only; that’s not something that would happen with Blockbuster.

Now, let’s move on to another issue: there is no lending or resale. There are no bins of old movies at garage sales at a discount, there is no way to let your coworker watch Saving Silverman. You have to tell them to go look in their store themselves, and hope they subscribed to something which has the movie, or book, available.

This is also what’s killing new media at your public library, and what will eventually strangle it. Physical media was easy for a public library to catalog and lend. There were rules that favored the public interest of making media available to the populace at large. The electronic systems for lending digital versions are not up to snuff. However, in a social and cultural climate where even liberal-minded people have no interest in public libraries, it is of little concern. It’s relatively cheap, and easy, to get digital media, we tell ourselves. For now.

What Could Go Wrong With Monthly Payments?

Think about everything in your life that you pay for on a monthly basis. Your electrical bill, your cable bill, your internet access, your wireless bill, fees, taxes … Now think about how none of those rates climb at a rate you feel is unwarranted for the quality of service you receive.

Oh, you mean you do feel some of those climb at a rate you feel is unwarranted? But it’s so easy, you’re just subscribing, right?

What could go wrong with paying a small number of stores a monthly fee? It’ll be so small — at first.

Let’s look at cable. Cable pays for access to content, to provide you with channels. They do not do this a la carte; there are packages, just like movie and television studios have overall distribution deals with Netflix, Amazon, etc. When the cable provider in your area has a disagreement with a content provider, sometimes they lose channels for an indeterminate period of time. Also: the rate you are paying will increase steadily over time, even in the cases where they do work out agreements.

Most people pay a cable provider for internet access. Cable providers are already starting to squeeze digital content from the other end. Netflix very publicly raised its rates because of this. They’ve raised rates before to try to pay for content. So it’s just going to keep increasing for access payments, and for content payments, and customers will pay an ever-increasing subscription fee. Sure, you don’t have to worry about owning a copy of Who’s Afraid of Virginia Woolf?, but is that copy going to be worth the cumulative monthly payments you will make over the course of your life?

I’m Not 💩-ing Over Everything

Streaming is still the direction we’re heading in as a society, and perhaps, some day, in some distant future, more rational legislation will allow for something that will feel fair to all parties. Maybe it’ll just be robot death squads. That’d be cool too. Then I wouldn’t have to pay to stream Terminator 2.

2014-05-20 10:43:00

Category: text


Podguilts and Podguiltcatchers

I finished listening to Jordan Cooper’s second-latest episode of Tech Douchebags (I would pick another name, but that’s irrelevant now), in which he interviewed Ÿoeff Rewburg about podcast listening. Hæff is my fake nemesis on Twitter – we both like to mildly antagonize one another. Gueph has gained some notoriety for the sheer volume of content he listens to, and infamy for how he listens to podcasts at an accelerated playback rate. I, too, have podcast-listening feelings, apparently.

Jordan interviewed ¥€ff like a pro. He represented the questions the audience would have asked, but also made some statements about his own, personal listening habits – the kinds of assessments we, the audience, should make of ourselves.

Many months ago, I wrote Terrible Podcast Screenplays as a joke. I have not been actively updating it since early January. I simply haven’t found time for more podcasts in my daily routine. That is strange, because I want to listen to more podcasts. I even have podcasts queued up for aspirational listening. One day, I’ll get to them. (SMASH CUT TO: Joe emerging from a pile of rubble with his iPhone. ‘Now I’ll have all the time in the world!’ iPhone runs out of battery power, and eardrums explode.)

Honestly, these are the only ones I listen to weekly, or nearly weekly:

As you may notice, I wrote fake screenplay reviews for all of those except Your Daily Lex Luthor, and Supertrain. There are a few Incomparable episodes that I have skipped because I haven’t watched, or read, what was discussed, and want to do so before hearing them talk about it. (Although I still haven’t seen The Aviator and I listen to Back to Work, so what kind of monster does that make me?)

My aspirational list where I listen to episodes infrequently:

My super-crazy aspirational list, where I want to pretend like I’m cultured:

This is why I playfully mock ⓙ③Ⓕⓕ. Deep down, I want to just 2x all of these right in to my auditory canal. To Matrix-style download all the new episodes so that I’m never behind. The feeling of being behind — as Jordan puts it: of missing out — is unpleasant. I like the dopamine hit when I’m on top of everything. Jordan says that he prunes his list every now and then for this reason, so that he’s once more, on top of everything. However, we both do the same thing after that: We load back up on new podcasts.

Theoretically, this should be easier than television, because “DVR” time shifting is a fundamental part of the medium of podcasting. A podcast can never be live. A stream of a podcast recording session can be live, like ATP, or Back to Work, but those aren’t the podcasts that wind up in your podcast listening app of choice, technically. This causes me even more, totally unnecessary guilt because I’ll want to listen to the live stream, but often can’t immediately reserve an uninterrupted block of time for it right then. What does it matter? A cleaner, edited version will magically appear in Downcast before I know it, and I already have podcast episodes I haven’t listened to yet. Shouldn’t I be listening to those?

What happens when I have an opening in my schedule to power through a bunch of episodes? I power through them by podcast series, not by some universal, un-listened-to playlist. So then I have even larger backlogs of some of the aspirational ones. I could listen to them all in one big chronological playlist, but I’d rather take each series on its own, in the order its episodes were released.

Like Ffej and Jordan, I struggle with when to acknowledge that I am just not going to get caught up on a podcast. When do I just remove the red badge of guilt from the feed?

This is Why I Can’t Use RSS Readers

My flirtation with RSS subscriptions for blogs, and web sites with feeeeds was short-lived when I realized that I couldn’t stay on top of the unread badge unless I hardly subscribed to anything. Some sites were like firehoses of crap, and other sites published irregularly. As soon as I saw the unread number, I had to click through every single one.

I have the same completionist streak when it comes to Twitter. Like ⒥⒪⒡⒡⒭⒠⒴ organizes his podcasts in to tiers and lists, I organize the people I follow on Twitter in to lists. I will start at the last read tweet for the list ‘tech’ and work my way up. Then, if I run out of ‘tech’, I pull up lists of real life friends, and then finally, when I run out of everything, I go to the full timeline view and read that. This is what crazy looks like right? Not a cat lady, but a guy with organized twitter lists and a fear of unread numbers in his RSS feeds? Oy gevalt.

At least two guys I follow on a twitter list posted YET ANOTHER podcast about podcasts to make me think about how guilty I feel about my unlistened, unread guilt. Way of the Future.

2014-05-19 14:14:00

Category: text


Adventures in Server Administration III: World’s Greatest Redirect

The gents at The Prompt released a shirt for their podcast (I bought it, bro). People started sending in selfies, while wearing the shirt, to the show (not me, braaahh). In yesterday’s episode, Stephen Hackett announced he had started a photo gallery page on his site to collect them.

I contacted Stephen and asked if he’d like me to provide a redirect of the selfies from the joke URL, prompt.photos. As loyal reader(s) may recall, that was the joke project that got me started on this whole learning-web-hosting adventure.

Stephen asked for “/selfies” to be the redirect. Sure thing, I told him, no problem!

Problem

I couldn’t find documentation for how to implement a redirect in Twisted for a subdirectory while leaving the rest of the static site hosting alone. I looked on StackOverflow, and in Twisted’s docs. I could find stuff for redirects, but it was all part of code blocks that dynamically generated content, which wasn’t my situation. I logged in to Twisted’s IRC chatroom on Freenode, and asked there. I also started a StackOverflow account and asked there. Then I watched Archer, since it didn’t seem like anyone was going to reply quickly.

Someone Replied Quickly

Not just anyone: Glyph, the Twisted guy. He replied in the chat, but I missed it over the sound of cartoon violence. He even found my question on StackOverflow and answered it. Seriously impressed that he went out of his way to be so nice to a n00b.

from twisted.web.util import Redirect
fooDotComResource.putChild("bar", Redirect("http://foobar.com/bar"))

An import, and one line. Hard to screw that up.

Unfortunately, I didn’t understand how to apply his answer. I mistakenly tried to apply it to the site code at the bottom of the settings.py file where the “application” is generated. It didn’t work, so I went to bed. This morning, it dawned on me that the redirect should go on the root class: everything is just modifying root, not what’s generated from root. Sure enough, that worked. Thanks Glyph, you are l337.

Also, it took 12 hours for me to pull off this joke. Now that’s what I call comedic –

– timing.

2014-05-15 11:40:51

Category: text


Finally. — Liss is More

via

I like to bother Casey on Twitter ("💀 in a 🔥" is an achievement), but there’s no real snark. He’s a good guy, which is why I’ve blogged about how Camel inspired me, and I made him the star of the ATP podcast screenplay. I like it when good things happen to people that seem – from my distant vantage point – to deserve it. This is the best, saddest, most uplifting thing I’ve read in a long while. 🌱

2014-05-14 10:42:26

Category: link


The Unholy Chimera of Pelicamel

Hashtag Gee Tee Dee

As I’ve detailed many times before: I want to move off of Tumblr. I have a chronic fixation upon the act of starting to move. I have half-written, hand-crafted, artisanal blogging scripts that do work —more or less— but I haven’t followed through on a single thing. I’ve dabbled with Jekyll, Octopress, Pelican, WordPress, etc. I just don’t like any of them all that much. I am pretty convinced, at this point, that rolling my own will ultimately be where I wind up just so I can have every little thing exactly as I want it. I’ll just need to overcome the nigh insurmountable hurdle of my lack of education in software.

This is why I usually jump at the chance to write small scripts. Things that can move just enough data —connnnnnnntent— to make me feel like I have accomplished something, but not so large of a project that I pull the zip-cord on my procrastination parachute.

Joe Steel: Poorly Skilled Software Developer

I’ve mentioned, jokingly, that I have a BFA in Computer Animation — that’s really what the degree is in — but that’s not always a “funny-ha-ha” joke. It’s a way of explaining that I am ill-suited to software development of any scale. This is why I like to take on small tasks to actually give myself just enough to do to learn something.

These are the reasons why I’ve been experimenting with Casey Liss’ Node.js app, Camel. It’s a pretty succinct app (as long as you don’t research all the dependencies’ dependencies’ dependencies) so it’s actually going to serve as something useful for me in an educational way, if nothing else.

I can’t write JavaScript though. I tried. I really, really, really tried. I’m going to have to put in more effort though because that is obviously the direction to head in if you’re looking for flexibility and the neatest, most modernest, hipstery stuff.

For now, I was content to make a content converter script. It’s a little thing, and I knocked it out in a few hours. (Shut up, PROS, I know this is like 10 minutes of code for you.)

I had tried Pelican recently, because of its Tumblr import utility. Unfortunately, I found the utility to be half-baked. It parses through your perfectly adequate HTML from Tumblr and tries to turn it into reStructuredText, or Markdown. I modified the script so it would stop doing that, and I captured all of my posts. Unfortunately, Pelican is a little rough. It made a total mess of the index pages when it tried to process stuff, and even the things it got right had the weird appearance of looking like I had posted them, instead of making it more obvious that they are Tumblr reblogs.

With all this data, I could just pipe it in to other blogging platforms, right? No. It seems every blogging platform has their own, slightly unique system for storing files and file metadata. Camel’s metadata is one of the stranger ones. I made a conversion script that copies the files and sets up the expected folder hierarchy, and converts the metadata.

This isn’t the kind of thing I see anyone using, but I wanted to see what putting it up on GitHub would be like. I’ve used gists on GitHub, but I’ve never needed a project. I had a private repository with a friend on Bitbucket, for one of the blogging component experiments, but Bitbucket is kind of weird.

I even did all the command-line git stuff to put it up. That’s quite an accomplishment for me, because I come from a background where version control is just incrementing an integer in a filename.

Anyway, at this rate, I should have 10,000,000 pointless scripts and no blog.

2014-05-10 02:51:00

Category: text


Chicken Little - With Beats Audio™

Another retirement was announced at Apple and people clutched their pearls. What could it mean? They must not like it there! That means things are bad! There’s more than one person that is leaving! That must be really bad!

Then everyone forgot about that crisis by the afternoon because there was a new rumor that Apple was buying Beats. Twitter has been apeshit since then. Otherwise normal, rational human-beings are losing their minds. Everything from assuming Beats branding will be on products, to assuming that Apple is buying them for hardware —which has been near-universally panned by the judgy-judges, despite most acknowledging they have no firsthand experience with said Beats products. It’s like it’s an election year and your party didn’t win, so you’re declaring you’re moving to Canada —again.

Pull out of this nosedive and unclench your bowels.

Do you know what happened yesterday? The sun came out.

Do you know what happened today? The sun came out.

Do you know what happens tomorrow? The sun will come out.

It’s fine to have theories, to guess why something is happening, but the sky isn’t falling. That is not to suggest people should be apathetic. That they can’t have feelings. After all, last week I lambasted Comixology on the basis of their actions. Actions are different from theories cooked up on Twitter, based off of a rumor that simply states a company will be acquired. Who the hell knows what will happen? Tim Cook could just as easily adopt the branding as shutter it all. After all, this acquisition will hurt current Beats partners.

If anything I think this is a symptom of cognitive whiplash. People that have spent many years formulating their deeply-held, personal beliefs about Beats Audio are face-to-face with the prospect of Apple acquiring the company they hate. How much of this deep, personal conviction has to do with audio, and how much of it has to do with Beats branding appearing on PC’s and Android devices, will vary from person to person.

A quick, and easy, way to avoid having to back out of etched-in-stone opinions is to not have etched-in-stone opinions.

My own personal bias is that I just wasn’t ever going to pony-up the dough for their stuff, but I also don’t burn money on Apple earbuds/earpods either. It’s not one vs. the other. It’s not even about the best ones money can buy, or the best value. I’m just not that demanding in this area. I did try their music app on my iPhone when people were lauding the service at its premiere, but I was turned off by the selection mechanism, and by the way the app breathes battery life like air. So no, I don’t love them. I’m just not going to flip-my-shit. I neither look forward to it, nor shun it, until the waveform collapses — when there is action.

What is all this teeth-gnashing getting us, other than a bigger bill from our dentists, and a lot of headaches?

2014-05-09 09:36:00

Category: text


Boss Too, Shall Pass

Someone dispatches a frantic, urgent, flailing message to you over Microsoft Lync. There is so much urgency. Fires must be put out. You must answer for the fires existing. Did I mention this was urgent? Put it in your “urgent” pile. The one sorted by urgency.

There are different styles of management. Some may take the team out for coffee. Some may enforce a no-overtime rule on Fridays. Some may say the world is burning every five fucking minutes.

It can be really difficult to work with people that are constantly bombarding you with emergencies, because it turns you in to a support structure. Instead of having a boss that facilitates good work, you have the burden of managing a grown adult’s mood.

You doubt yourself, of course, because obviously you must have done something to anger the person. You failed. Then you start to realize that’s like trying to appease a volcano. Maybe it has nothing to do with you?

My grandmother used to say, “And this too, shall pass.” Not just about the good times, but about the bad. So you have a supervisor that buys you beer? Enjoy it, because he won’t be your boss forever. You have a boss that’s a flaming hemorrhoid? Prepare to move on from him.

As an employee, it is difficult to separate your feelings from your current situation. To remind yourself that you have done work that was good – that people have even thanked you before. That’s the most important perspective to maintain. This is but one of many bosses.

At least there’s Lync.

2014-05-07 23:58:42

Category: text


Liss is More

via

caseyliss:

This past week I debuted my new website, which I’m currently calling Liss is More. While I’m not formally sunsetting my Tumblr braaaaand, I will likely post here quite a bit less often. Which is saying something, since I post here so rarely as it is. Or, perhaps, I’ll post here more, but it will be pictures of cars and other useLiss stuff.

I spoke about some of the motivations for that site in my introductory post.

Additionally, if you’re a nerd, you can check out the source code for the engine that runs the site.

So, check it out, and subscribe to the RSS feed, if you’d like to see more.

Confession: I kind of want to be a dick and put in some pull requests in CoffeeScript just to mess with him.

Ahem.

Gotta do something about that CSS though, CaSSey. Maybe in LESS, just for the pun opportunities. The links aren't even styled at all; it's just browser default.

And THIS is why I have no blog up yet. I fall down a CSS rabbit hole every time. I could Steel his.

2014-05-06 18:42:49

Category: link