Eternity versus Infinity

I just completed reading, at long last, Isaac Asimov’s The End of Eternity. Like many of his novels, EoE is a morality play, an explanation, a whodunit, and a bit of a prank. The hero, Andrew Harlan, is a repressed buffoon at the mercy of various sinister forces. Eventually Harlan finds his way to a truth he doesn’t want to accept. In EoE Asimov plays with time travel in terms of probabilities. This mathematical treatment of time travel resolves many of the cliché paradoxes that sci-fi usually twists itself into. Go back in time and prevent your mother from meeting your father, and what you have done is not suicide. You have simply reduced the probability of your future existence.

In EoE Asimov considers two competing desires in human culture: The urge to keep things the same forever and the urge to expand and explore. Asimov distills these urges into the Eternals, who fight what they think of as dangerous change by altering time, and the Infinites, who sabotage the Eternals because they believe “Any system… which allows men to choose their own future, will end by choosing safety and mediocrity…”

In one masterful stroke Asimov explains why we haven’t invented time travel. If we did, we’d kill baby Hitler! But then we’d work on eliminating all risks! Eventually we’d trap ourselves on planet Earth and die out, slowly and alone, when our single world gets hit by a comet or our Sun goes nova. In EoE, Asimov has a force of undercover Infinites working tirelessly to keep the probability of time travel near zero. This way humanity continues to take risks, eventually discovers space flight, and avoids extinction by populating the galaxy.

You’re probably not going to read EoE. It’s a bit dry for the 21st century. There are no superheroes, dragons, or explicit sex. While there is a strong female character, she spends most of her time out of sight and playing dumb. EoE is a product of the 1950s. Yet for a book where a computer is called a “computaplex” and the people who use them are confusingly called “computers”, EoE’s underlying message and themes apply very closely to our current age.

In our time, we have the science and technology to move forward by leaps and bounds toward an unimaginable infinite, and we’re rapidly doing so, except when we elect leaders who promise to return us to the past and follow creeds that preach intolerance of science. I’ve read blog posts and op-eds that claim we can’t roll back the future, but we seem to be working mightily to pause progress. Just like the Eternals in EoE, many of us are concerned with protecting the present from the future. Teaching creationism alongside evolution, legislating Uber and Airbnb out of existence, and keeping Americans in low-value manufacturing jobs are just a few examples of acting like Asimov’s Eternals and avoiding the risks of technological progress at all costs.

I get it! I know that technological advancement has many sharp edges and unexpected consequences. Improve agriculture with artificial ingredients and create an obesity epidemic. Improve communication with social media and create a fake news epidemic. People are suffering and will continue to suffer as software eats the world and robots sweep up the crumbs.

But what Asimov teaches us, in a book written more than 60 years ago, is that if we succeed in staying homogeneous-cultured, English-speaking, tradition-bound, God-fearing, binary-gendered, unvaccinated, and non-GMO, we’re just getting ready to die out. When the next dinosaur-killer comet strikes, we will be stuck in our Garden of Eden as it goes up in flames. As Asimov admits, it might take thousands of years for humanity to die out in our self-imposed dark ages, but an expiration date means oblivion regardless of how far out it is.

Asimov shows us in EoE, and in the rest of his works as well, that there is a huge payoff for the pain of innovation and progress. We get to discover. We get to explore. We get to survive.

Let’s face it. We don’t need genetic code editors and virtual reality. We don’t need algorithms and the Internet of Things. Many of us will never be comfortable with these tools and changes. Many of us long for the days when men were men, women stayed out of the way, and jobs lasted for a lifetime. This is not a new phenomenon: The urge to return to an earlier golden age has been around since Socrates complained that writing words down would destroy the art of conversation.

At the moment, it feels like the ideals of the Eternals are trumping the ideals of the Infinites. While a slim minority of entrepreneurs tries to put the infinity of space travel and the technological singularity within our reach, a majority of populist politicians are using every trick in the mass communications book to prevent the future from happening. We have our own versions of Asimov’s Eternals and Infinites today. You know their names.

Like Asimov, I worry about the far future. We’re just a couple of over-reactions to a couple of technological advances away from scheduling the next dark ages. That’s not a good idea. The last dark ages nearly wiped Europe off the face of the earth when the Black Plague hit. Humanity might not survive the next world crisis if our collective hands are too fearful of high tech to use it.

At the end of EoE Harlan figures out that, spoiler alert, taking big risks is a good idea. Harlan chooses the Infinites over the Eternals. I’d like us to consider following in Harlan’s footsteps. We can’t eliminate all technological risks! Heck, we can’t even eliminate most risks in general! But we can embrace technological progress and raise the probability of our survival as a species.

Telling Time as an Engineer

Time is the most precious resource. It’s in limited supply, once spent it can’t be gotten back, and it can’t be traded directly. This might sound a little radical, but most global, national, business, and personal problems seem, to me, to boil down to a problem of time and whose time is more important than yours.

Before we can decide how we should spend the time given us, we have to put some thought into how we analyze the tasks that take time. In software development a considerable amount of thinking has been applied to just this analysis. We usually call it “process management” or “time management”. Many methodologies have been created to solve the problem of time, and yet when the rubber hits the road the management of time, which includes task prioritization and effort estimation, is full of errors and random results.

A great example is the Agile development process, which has become the standard even as many of its original creators have declared it dead. Why is this?

Here is a simple example…

A high-priority story is pulled from a general backlog and estimated, along with other stories, by an experienced engineering team. A product owner then weighs the cost of the stories, based on the effort estimates and the value to the business, and feeds them into a sprint backlog. The engineering team then works on each story, in order of priority, and completes the required stories by the end of the sprint. The work completed is demoed to the stakeholders and everyone is happy, as everyone’s time has been well spent.

Well, except this happy plan almost never happens.

Something like 80% of all features and products are delivered late or not at all. And often when a feature or product is delivered, it’s buggy enough that we regret delivering it at all. I’m sure Samsung engineers are less concerned about deadlines these days and more concerned about taking the time to do their tasks with more quality. Blizzard has made a billion-dollar business of rarely giving dates for games and missing them when they do. Facebook and Spotify just spring new features on their users without any warning and kill the bad ones before they spread beyond a small segment.

In my opinion successful tech companies don’t bother with time management and leave schedules and task estimates to unsuccessful tech companies. I’m not saying successful tech companies don’t do agile or create project plans. I’m saying these are more like historical accounts and data gathered for analysis than pseudo-predictive planning.

Why is task estimation so non-predictive?

The problem is that it’s impossible to know how long a task will take unless you have done exactly that task before. When I worked at Apple Computer (before it was just Apple) we said that in order to understand how long a project would take, you had to build it first and then write the schedule.

This is why an experienced engineering team is so important in effort estimation. If you get a group of engineers who are a bit long in the tooth, they can work together to pool estimates on work they have performed previously.

But much of the work of an experienced engineering team is work they have never done before. Experienced engineers tend to see everything through the lens of previous experience. The result is that effort estimates are inaccurate because a novel task has been mistaken for a nostalgic one. I can’t count the number of times I have said, “I thought the problem was X and it would take Y story points, but the problem is really Z and I’m still doing the research, so your guess is as good as mine.”

The fact that for novel work your guess is as good as mine is why startups of inexperienced engineers succeed at problems that mature companies fail at. The boss says, “This problem will take too long to get to market. Let’s just not do it. It’s a waste of time.” The boss also says, “Hey, brilliant engineers, you didn’t deliver this product on time! You suck! You’re fired.” Both judgments are typical of mature companies, where value has to be delivered every quarter and experimental failures damage previous reputations.

In a typical tech startup, or any kind of new business, if you did the estimates you would never have started down the path. But startups don’t care! They are labors of vision and love, usually staffed by people without experience who don’t know better. A good startup certainly doesn’t worry about effort estimates or punish engineers for not being able to tell time.

My advice to any engineering team that needs to worry about time is as follows:

One

You need a mix of experienced and inexperienced engineers on the team. This doesn’t mean old and young as much as people who have done it before and people who have not. Mix your insiders with your outsiders. For example, if you’re building a web app, bring in a few mobile devs to the sprint planning. And some interns. And listen to the outsiders. Engage in a real discussion.

Two

If someone in charge wants to know how long a novel task will take from just the title of the task, without any true discussion, walk away. You’re going to give them a wrong answer. By the way, good estimates are rarely rewarded (they are expected!) but bad estimates are almost always punished. An honest “I don’t know” is always better than “2-3 weeks” or “2-3 story points”.

Three

Remember there is no value in hitting the deadline without quality, performance, or value to the user. In fact, I’m always a little suspicious of teams that never miss a deadline. Apps that crash and scope cut to the point of no visible progress are hallmarks of teams that always hit their deadlines. I’m not saying don’t work hard or don’t try to hit your deadline; just be tough about the result at the end of the schedule: give it more time if needed!

Four

The problem with my advice is that everyone wants to know the schedule. So many other functions depend on the schedule. Releasing a product on time is critical to the business. So my final piece of advice, and you’re not going to like it, is to let the business set the deadline. Instead of wasting everyone’s time up front with a broken planning process to arrive at a deadline, get the deadline first and work backward with effort estimates. While time is limited, the amount of time we spend on a task is flexible. We work differently if we have 3 months, 6 months, or 12 months to accomplish a task. Ask any college kid how much time they put into their studies at the beginning of the semester, when time seems unlimited, versus the end of the semester, when time is in short supply.

Time is always in short supply.

Trolls Are USA

It’s clear that Americans are more divided than ever. Our self-segregating tendencies have been reinforced by the adoption of Internet technologies and algorithms that personalize our newsfeeds to the point that we walk side-by-side down the same streets in different mental worlds.

Before the web, before iPhone, Netflix, and Facebook, the physical limits of radio, television, and print technology meant that we had to share. We had to share the airwaves and primetime and the headlines because they were limited resources.

In the pre-Internet world print was the cheapest communication medium to scale and thus the most variable in quality. Anyone with a few hundred bucks could print a newsletter, but these self-published efforts were clearly inferior to the major newspapers. You could tell yellow journalism from Pulitzer winners just by the look of the typography and the feel of the paper in your hands. This was true of books and magazines as well. Quality of information was, for the most part, synonymous with quality of production.

To put on a radio or TV show you had to be licensed, and you needed equipment and technical skills from unionized labor. Broadcast was more resource-intensive, and thus more controlled, than print, and thus more trusted. In 1938 the War of the Worlds radio drama fooled otherwise skeptical Americans into believing they were under attack by Martian invaders. The audience was fooled because the show was presented not as a radio play but as a series of news bulletins breaking into otherwise regularly scheduled programming.

The Broadcast technologies of the pre-social media world coerced us into consensus. We had to share them because they were mass media, one-to-many communications where the line between audience and broadcaster was clear and seldom crossed.

Then came the public Internet and the World Wide Web of decentralized distribution. Then came super computers in our pockets with fully equipped media studios in our hands. Then came user generated content, blogging, and tweeting such that there were as many authors as there were audience members. Here the troll was born.

Before the Internet the closest we got to trolling was the prank phone call. I used to get so many prank phone calls as a high schooler in the 1970s that I simply answered the phone with a prank: “FBI HQ, Agent Smith speaking, how may I direct your call?” It makes me crack up to this day!

If you want to blame some modern phenomenon for the results of the 2016 presidential election, and not the people who didn’t vote, or the flawed candidates, or the FBI shenanigans, then blame the trolls. You might think of the typical troll as a pimply-faced kid in his bedroom with the door locked and the window shades taped shut but those guys are angels compared to the real trolls: the general public. You and me.

Every time you share a link to a news article you didn’t read (which is something like 75% of the time), every time you like a post without critically thinking about it (which is almost always), and every time you rant in anger or in anxiety in your social media of choice you are the troll.

I can see that a few of my favorite journalists and Facebook friends want to blame our divided culture, the spread of misinformation, and the outcome of the election on Facebook. But that’s like blaming the laws of thermodynamics for a flood or the laws of motion for a car crash. Facebook, and social media in general, was the avenue of communication, not the cause. In technology terms, human society is a network of nodes (people), and Facebook, Google, and Twitter are applications that provide easy distribution of information from node to node. The agents that cause info to flow between the social network nodes are human beings, not algorithms.

It’s hard not to be an inadvertent troll. I don’t have the time to read and research every article that a friend has shared with me. I don’t have the expertise to fact-check and debunk claims outside of my area of expertise. Even when I do share an article about a topic I deeply understand, it’s usually to get a second opinion.

From a tech perspective, there are a few things Facebook, Google, and Twitter can do to keep us from trolling each other. Actually, Google is already doing most of these things with its PageRank algorithm and quality scores for search results. Google even hires human beings to test and verify the quality of its search results. Thus it’s really hard for us to troll each other with phony web pages claiming to be about cats when dogs are the topic. Kudos to Google!

The following advice is for Facebook and Twitter, from an admiring fan…

First, hire human editors. You’re a private company not a public utility. You can’t be neutral, you are not neutral, so stop pretending to be neutral. I don’t care which side you pick, just pick a side, hire some college educated, highly opinionated journalists, and edit our news feeds.

Second, give us a “dislike” button, and along with it “true” and “false” buttons. “Like” or “retweet” are not the only legitimate responses human beings have to news. I like the angry face and the wow face, but those reactions express feelings and are thus difficult to interpret clearly in argumentation and discourse. Dislike, true, and false would create strong signals that could help drive me and my friends to true consensus through real conversations.

Third, give us a mix of news that you predict we would like and not like. Give us both sides or all sides. And use forensic algorithms to weed out obvious trash like fake news sites, hate groups with nice names, and teenagers pretending to be celebrities.

A/B test these three ideas, and better ones, and see what happens. My bet is social media will be a healthier place, but a smaller one, with less traffic driven by the need to abuse each other.

We’ll still try to troll the hell out of each other but it will be more time consuming. Trolling is part of human nature and so is being lazy. So just make it a little harder to troll.

Before social media our personal trolling was limited to the dinner table or the locker room. Now our trolling knows no bounds because physical limits don’t apply on the Internet. We need limits, like spending limits on credit cards, before we troll ourselves to death.

Notes on NSUserPreferences

You can set and get NSUserPreferences from any view controller and the app delegate, so they are a great way to pass data around the various parts of your iOS app.

Note: NSUserPreferences don’t cross the iOS/watchOS boundary. iOS and watchOS apps each have their own set of NSUserPreferences.

In the example below you have a Bool property on a class that you want to track between user sessions.
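
Here is a minimal sketch of what that code might look like (Swift 2 era syntax; the view controller, switch action, and property names are just illustrative, while the savedShowAll key and NSUserDefaults calls are the ones described below):

    import UIKit

    class SettingsViewController: UIViewController {

        // Data model for the switch object's value
        var showAll = true

        override func viewDidLoad() {
            super.viewDidLoad()
            // The stored value might not exist yet, so use the if let idiom
            if let savedShowAll = NSUserDefaults.standardUserDefaults().objectForKey("savedShowAll") as? Bool {
                showAll = savedShowAll
            }
        }

        @IBAction func showAllSwitchChanged(sender: UISwitch) {
            showAll = sender.on
            // Persist the new value for the next session
            NSUserDefaults.standardUserDefaults().setObject(showAll, forKey: "savedShowAll")
        }
    }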

In the code above…
– The var showAll is the data model for a switch object value
– The string savedShowAll is the key for the stored value
– Use NSUserDefaults.standardUserDefaults().objectForKey() to access a stored value
– Use the if let idiom as the stored value might not exist
– Use NSUserDefaults.standardUserDefaults().setObject() to save the value
– Apparently setObject() never fails! 😀

Faceless Phone

About twelve years ago I attended a management leadership training offsite and received a heavy glass souvenir. When I got home after the event I put that thingamabob, which officially is called a “tombstone”, up on a shelf above my desk. Little did I know that after more than a decade of inert inactivity that souvenir would launch me into the far future of the Internet of Things with an unexpected thud.

Last night before bed I set my iPhone 6 Plus down on my desk and plugged it in for charging. Then I reached up to the shelf above to get something for my son and BANG! The tombstone leapt off the shelf and landed on my desk. It promptly broke in half and smashed the screen of my iPhone. In retrospect I see now that storing heavy objects above one’s desk is baiting fate, and every so often fate takes the bait.

I’ve seen many people running around the streets of Manhattan with cracked screens. My screen was not just cracked. It was, as the kids say, a crime scene. I knew that procrastination was not an option. This phone’s face was in ruins and I needed to get it fixed immediately.

No problem! There are several wonderful Apple Stores near me and I might even have the phone covered under Apple Care. Wait! There was a problem! I had several appointments in the morning and I wasn’t getting to any Apple Stores until late afternoon.

Why was this a big deal? Have you tried to navigate the modern world without your smart phone lately? No music, no maps, no text messages! Off the grid doesn’t begin to cover it! My faceless phone was about to subject me to hours of isolation, boredom, and disorientation!

Yes, I know, a definitive first world problem. Heck! I lived a good 20 years before smart phones became a thing. I could handle a few hours without podcasts, Facebook posts, and Pokemon Go.

In the morning I girded my loins, which is what one does when one’s iPhone is smashed. I strapped on my Apple Watch and sat down at my desk for a few hours of work-related phone calls, emails, and chat messages.

Much to my surprise, even though I could not directly access my phone, almost all of its features and services were available. While the phone sat on my desk with a busted screen, its inner workings were working just fine. I could make calls and send text messages with my watch, with my iMac, and with voice commands. I didn’t have to touch my phone to use it! I could even play music via the watch and listen via Bluetooth headphones. I was not cut off from the world!

(Why do these smart phones have screens anyway?)

Around lunchtime I had to drive to an appointment and I took the faceless phone with me. I don’t have Apple CarPlay, but my iPhone synced up fine with my Toyota’s entertainment system. Since I don’t look at my phone while driving, the cracked screen was not an issue. It just never dawned on me before today that I don’t have to touch the phone to use it.

I imagine that our next paradigm shift will be like faceless phones embedded everywhere. You’ll have CPUs and cloud access in your wrist watch, easy chair, eye glasses, and shoes. You’ll have CPUs and cloud access in your home, car, office, diner, and shopping mall. You’ll get text messages, snap pictures, reserve dinner tables, and check your calendar without looking at a screen.

Now, we’re not quite there yet. I couldn’t use all the apps on my phone without touching them. In fact I could only use a limited set of the built-in apps and operating system features that Apple provides. I had to do without listening to my audiobook on Audible and I couldn’t catch any Pokemon. Siri and Apple Watch can’t handle those third-party app tasks yet.

But we’re close. This means the recent slowdown in smartphone sales isn’t the herald of hard tech times. It’s just the calm before the gathering storm of the next computer revolution. This time the computer in your pocket will move to the clouds. Apple will be a services company! (Google, Facebook, and Amazon too!) Tech giants will become jewelry, clothing, automobile, and housing companies.

Why will companies like Apple have to stop making phones and start making mundane consumer goods like cufflinks and television sets to shift us into the Internet of Things?

Because smooth, flawless integration will be the new UX. Today user experience is all about a well designed screen. In the IoT world, which I briefly and unexpectedly visited today, there won’t be any user interface to see. Instead the UX will be embedded in the objects we touch, use, and walk through.

There will still be some screens. Just as today we still have desktop computers for those jobs that voice control, eye rotations, and gestures can’t easily do. But the majority of consumers will use apps without icons, listen to playlists without apps, and watch videos without websites.

In the end I did get my iPhone fixed. But I’m going to keep visiting the IoT future now that I know how to find it.

On the Naming of Functions

A thoughtful coder once said that “it’s more important to have well organized code than any code at all.” Actually several leading coders have said this. So I’ll append my name to the end of that long linked list.

I’m trying to develop my own system for naming functions such that it’s relatively obvious what those functions do in a general sense. Apple, Google, Microsoft, and others all have conventions and rules for naming functions. Apple’s conventions are the ones I know best. For some reason Apple finds the word “get” unpleasing while “set” is unavoidable. So you’ll never see getTitle() as an Apple function name, but you will see setTitle(). This feels a little odd to me, as the name title() could refer to setting or getting a title, while getTitle() clearly does one job only. I know that title() without an argument can’t set anything, but I’m OK with the explicit “get” all the same.

So far I’m testing out the following function naming conventions:

  • calcNoun(): dynamically calculates a noun based on the current state of internal properties
  • cleanNoun(): returns a junk-free normalized version of a noun
  • clearNoun(): removes any data from a noun and returns it to its original state
  • createNoun(): statically synthesizes a noun from nothing
  • updateNoun(): updates the data that a noun contains based on the current state of internal properties
  • getNoun(): dynamically gets a noun from an external source like a web server

As you can see I like verbs in front of my nouns. In my little world functions are actions while properties are nouns.

calcNoun(), createNoun(), and getNoun() are all means of generating an object, each with a semantic signal about the process of generation.

cleanNoun() returns a scrubbed version of an object as a value. This is really best for Strings and Numbers which tend to accumulate whitespace and other gunk from the Internet and user input.

clearNoun() and updateNoun() are both means of populating the data an object contains, with a signal about the end state of the process. (Maybe I should have one update function and pass in “clear” data, but many times clearing is substantially different from updating.)
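
To make the conventions concrete, here is a sketch of how they might look in practice. The class and its properties are hypothetical, invented just for this example:

    import Foundation

    class UserProfile {

        var rawName = ""
        var displayName = ""
        var posts = [String]()

        // calcNoun(): dynamically calculate a value from internal properties
        func calcPostCount() -> Int {
            return posts.count
        }

        // cleanNoun(): return a junk-free, normalized version of a noun
        func cleanName() -> String {
            return rawName.stringByTrimmingCharactersInSet(
                NSCharacterSet.whitespaceAndNewlineCharacterSet())
        }

        // clearNoun(): remove a noun's data and return it to its original state
        func clearPosts() {
            posts = [String]()
        }

        // updateNoun(): refresh the data a noun contains from internal state
        func updateDisplayName() {
            displayName = cleanName()
        }
    }

createNoun() and getNoun() would follow the same pattern, one synthesizing a new object locally and the other fetching it from an external source like a web server.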

I hope this helps my code stay organized without wasting my time trying to map the purpose of a function to my verb-noun conventions!

Code Markup in Xcode


I’m working on a fairly large Swift project. Actually no, that’s not quite true. I’m working on a Swift project with a ViewController file that is getting disorganized and out of control. If this keeps up I might have a large project on my hands but right now it’s just a single file that is getting larger than I would like.

Apple provides some quick and dirty tools that make it easy to navigate a single file with specially formatted comments in your code. This functionality doesn’t provide automated documentation like HeaderDoc. And that’s fine with me. I like how HeaderDoc has become a mash-up of Markdown and JavaDoc. My code is just not stable enough for documenting yet.

Happily Xcode’s built-in special comment parser is enough in the early stages of development to help me navigate a large file and remember where the bodies are buried.

Xcode supports the following out of the box:

  • MARK: (your text here)
  • MARK: – (section divider)
  • ???: Question
  • !!!: Warning
  • TODO: Task
  • FIXME: Bug

Xcode’s special comments mark up the function navigation pop-up menu so that you can find your questions, warnings, tasks, and bugs in your code without overtaxing the private neural network in your skull. Unfortunately you can’t add new special comments, and they don’t show up in the Symbol Navigator.

(Using the MARK: comment you can simulate adding your own special comments. MARK: doesn’t add the word MARK: in front of navigation items in the way that the other special comments do (TODO, FIXME, etc.). So you can use MARK: NOTE to navigate to notes in your Swift code if that makes you happy.)

I use the following additional special comments to keep my code organized and consistent; a short example follows the list. (Xcode will just ignore them unless I prefix each with MARK:)

  • NOTE: (when the function name is not enough)
  • HINT: (a non-obvious reminder about a bit of code)
  • DBUG: (end of line comment marking code that probably should be removed eventually)
  • DEMO: (example usage)
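
For example, a marked-up Swift view controller might look something like this (the class and comment text are placeholders, not real code from my project):

    import UIKit

    class ItemListViewController: UITableViewController {

        // MARK: - Properties

        var items = [String]() // MARK: DBUG: seeded with test data for now

        // MARK: - Table View Data Source

        // TODO: support more than one section
        // FIXME: handle the empty-list case with a placeholder view
        override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
            return items.count
        }

        // MARK: NOTE: the cell prototype lives in the storyboard
        // ???: should this move into its own extension?
        override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
            let cell = tableView.dequeueReusableCellWithIdentifier("ItemCell", forIndexPath: indexPath)
            cell.textLabel?.text = items[indexPath.row]
            return cell
        }
    }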

It would be nice if Apple allowed us to personalize code markup in Xcode. But only after search and ranking in the App Store are fixed and a thousand other higher priorities are done!

The Quiet Car

I ride a commuter train to and from work every day, and occasionally I accidentally, regrettably, end up sitting in the quiet car.

If you’re not a commuter you might be unacquainted with the idea of a quiet car. It is what it says it is: a train car where you are supposed to be quiet. No talking. No phone ringing. No music leaking out of your headphones. I call it the train car of silent tension.

A few years ago NJ Transit declared the first and last cars of all morning and evening commuter trains to be quiet cars. They had little signs printed up that read “Quiet Commute” with the “mute” in “commute” highlighted.

I don’t think NJ Transit invented the idea of the quiet car. But their conductors and passengers, well, some of them, love to enforce it. Violate the rules in the quiet car and several self-appointed quiet car monitors will put you in your place with a tone of voice so sternly condescending that your Victorian great-grandmother would be right at home.

My problem with the quiet car is that somebody always breaks the rules and gets scolded. And I’m just not the sort of guy who enjoys the sight of one human being being a righteous jerk to another human being. The quiet car is the only place I’ve ever been where it’s ok for adults to act like conceited little kindergarteners.

I can’t concentrate or relax in the quiet car because I’m just waiting for some poor oblivious victim to innocently answer a call, make a comment to a friend, or forget to turn the volume down on their phone.

I think people ride the quiet car not for the quiet but for the chance to rebuke the guilty who transgress the sacred decree of the car of silence. “Thou hast made a peep and thou shalt be most vigorously censured!”

I only ride the quiet car when I have no choice, when the rest of the train is full, when I find myself in not so quiet desperation for a seat.

I’d like to observe that quiet cars were probably a great idea in the 1950s or ’60s. But now we have inexpensive headphones. Instead of making everyone uncomfortable, you can just pop a pair of headphones on your cranky, Victorian-minded, gray-haired noggin and listen to soothing national anthems or the sounds of suburban lawns growing. With the marvelous invention of headphones you can allow the rest of us to catch up with a friend, take an important call, or just take a nap without having to fear a sudden outburst of “Sir! Sir! Miss! Miss! This is the QUIET CAR! You can’t talk here! No talking!”

By the way, I just want to point out that the quiet car is not only elitist but kind of classist and racist as well. Almost always the rule breaker is Italian or from some other non-WASPy culture where talking is what you do when you are sitting next to a friend or family member. But in the quiet car the uptight, my-ancestors-are-better-than-your-ancestors people rule.

If we must have a quiet car, and it seems they are not going away, then I must insist that we have a shouting car. It’s only fair. In the shouting car people can let out all that tension built up from riding in the quiet car and even TYPE IN ALL CAPS while texting.

 

C Plus Minus

While consuming Handmade Hero and coding furiously to keep up with Casey Muratori, I discovered the joy of programming in a language that I deeply understand. This is not one of those new trendy programming languages that try to be type-safe without explicit types or functional without being confusing. And yet all the new hot/cool programming languages are based on this ur-language. Swift, TypeScript, Go, C++14, and Java 8 are all “C-like” languages, and the original “C-like” language is a lingo that we used to call C+- (C Plus Minus).

I probably like C because it was the first non-toy programming language that I used to program a real personal computer. In the late 1980s all the home computers came with BASIC (which is best SHOUTED in CAPS). But once I got a true personal computer, a Macintosh 512Ke, that could run real applications I had to buy a real programming language to write those real applications. For a couple of months that real language was Pascal… but C rapidly took over. By the time I got to Apple in the early 1990s C++ was about to push C out of the way as the hot new programmer’s tool.

We have this same problem today. There is always another more productive, safer, more readable programming language around the corner. If you code on the backend for a living you’re probably thinking about Go or Rust. If you code on the front end you’re ditching CoffeeScript for TypeScript or just sticking with JavaScript until the next version, ECMAScript 6, shows up in your minimum target browser.

But I’ve been traveling back in time and happily coding away with access to pointers and pointer arithmetic, pound defines, and user-designed types. It’s not plain vanilla C because, like Casey, I’m compiling my code with a modern C++ compiler. I’m just not using 90% of C++’s features. Back in the 1980s/90s we called this language C+-. Back then only some of the C++ standard had been implemented in our compilers. We had classes but not multiple inheritance. (Later we learned that multiple inheritance was bad, or at least poor taste, so not having access to it was OK.) We only had public and private members. (Protected members aren’t actually useful unless you’re working on a big team or writing a framework. We were writing small apps in small teams.) We had to allocate memory on the heap and dispose of it. So we allocated most of what we needed up front and sub-allocated it. We didn’t have garbage collection, we didn’t even know about garbage collection, so we couldn’t feel bad. We felt powerful.

Now that I’ve been writing in C+- for a few weeks I feel like Superman. Or maybe Batman. Your pick. I have just a few tools in my tool belt, but I know how to use them. In the modern world, Swift 3.0 is thinking of getting rid of the ++ operator and the C-style for(;;){} loop. I use those language features every day, usually together: for(i = 0; i < count; i++) {}. I am told these things are ugly. They seem like familiar old friends to me!

One thing I really like is that I can access a value and increment a pointer with one pretty little expression: *pointer++. I like thinking in bytes and bits and memory addresses. And I like how fast my little programs run and how small their file sizes are.
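
For instance, here is a tiny, hypothetical routine in that spirit; it compiles as C or C++ and leans on exactly those old friends, a pound define, a plain for loop, and a pointer expression that reads a byte and advances in one go:

    #include <stddef.h>

    #define internal static // a pound define for file-local functions

    // Copy count bytes from src to dest.
    // *dest++ = *src++ reads a byte, stores it, and advances both pointers.
    internal void CopyBytes(unsigned char *dest, const unsigned char *src, size_t count)
    {
        for (size_t i = 0; i < count; i++)
        {
            *dest++ = *src++;
        }
    }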

I know I should not like all these things. Raw access to memory is dangerous. &-ing and |-ing bits is probably dangerous too. My state is not safely closured and side-effects abound. But modern C++ compilers and tools like GCC and Clang do a pretty good job of catching memory access errors these days. It was much more dangerous back in 1986 back when I first started.

Maybe I’m just nostalgic. But while you are learning Swift or TypeScript to write web and mobile apps, the operating system your computer runs (Mac OS X, Windows, Linux) was written in C+-. The web browser (Safari, Firefox, or Chrome) that renders your HTML, CSS, and JS was written in C+-. That awesome AAA game and Node.js were written in C+-. (Some parts C, some parts C++, and some parts Assembly as needed.)

C+- is the Fight Club of computer languages: Nobody talks about it, it doesn’t have official status, and groups of self organizing coders beat each other up with it every day.

Binge Watching Handmade Hero


For the last several weeks I’ve been obsessed with one TV show. It’s changed my viewing habits, my buying habits, and my computing habits. Technically it’s not even a “TV show” (if your definition of that term doesn’t include content created by non-professionals that is only available for free over the Internet).

But for me, a more or less typical Gen-Xer, Handmade Hero by game tool developer Casey Muratori has me totally enthralled as only must see TV can enthrall. I’m hooked and I simply must watch all 256+ episodes of Handmade Hero before I die (in about 1,406 Saturdays according to the How Many Saturdays app).

So first off let me explain a few things. Unless you are an aspiring retro game programmer or aging C/C++ programmer Handmade Hero will seem tedious at best and irrelevant at worst. There are much better and more modern ways to make a video game (like SpriteKit on iOS or Unity on any OS) but Casey promises to demonstrate live on Twitch.TV how to write a complete video game from scratch, without modern frameworks, that will run on almost anything with a CPU. He’s starting with Windows but promises Mac OS X, Linux, and Raspberry Pi.

This is a bold promise! When I first heard of Handmade Hero, almost two years ago, I ignored it. I didn’t know who Casey Muratori was, and the Internet is littered with hundreds of these solo projects that tend to fizzle out like ignobly failed Kickstarter projects.

But a comment on Hacker News caught my eye about a month ago. Casey had delivered hundreds of hours of live coding with explanations of arcane C, Windows, and video programming techniques! It’s all archived on YouTube and he’s still streaming almost every night! Awesomesauce!

So I had to check it out. I started with Casey’s first video, Intro to C on Windows, and ate it up. I had to pound through the rest of that week’s archive. Because I have a family and a very demanding job and kids and cats, I had to purchase a subscription to YouTube Red so that I could watch Casey’s videos on- or offline. Google is getting 10 bucks a month out of me because of Casey!

My keyboarding fingers ached to follow along as Casey coded. I used to be a C/C++ programmer. I used to do pointer arithmetic and #DEFINEs and even Win32 development! Could I too write a video game from scratch with no frameworks? I had to buy a Windows laptop and find out! Thus Dell got me to buy a refurbished XPS 13 because of Casey!

Even Microsoft benefited. I subscribed to Office 365 for OneDrive so I could easily back up my files and use the Office apps, since I’m keeping my MacBook Pro at the office these days. I have discovered that a Windows PC does almost everything a MacBook does, because of Casey!

I usually have less than an hour a day to watch TV, so I’ve had to optimize my entertainment and computing environment around Handmade Hero, because at this rate I will never catch up to the live stream! But I’m having a blast and learning deep insights from a journeyman coder.

What could an old-school game coder teach an old battle-scarred industry vet like me? More than I could have imagined.

First of all, Casey is an opinionated software developer with a narrow focus and an idiosyncratic coding style. He is not wasting his time following the endless trends of modern coding. He is not worried about which new JavaScript dialect he is going to master this month or which new isometric web framework he is going to wrestle with. He codes in C with some C++ extensions, he uses Emacs as his editor, he builds with batch files, and he debugs with Visual Studio. While these tools have changed over the years, Casey has not. He is nothing if not focused.

Thus Casey is a master of extemporaneous coding while explaining–the kind that every software engineer fears during Google and Facebook interviews. This means Casey has his coding skills down cold. He is unflappable.

Casey doesn’t know everything and his technique for searching MSDN while writing code shows how fancy IDEs with auto-completion are actually bad for us developers. He uses the Internet (and Google search) not as a crutch to copy and paste code but as a tool to dig deep into how APIs and compilers actually work. There seems to be nothing Casey can’t code himself.

Casey makes mistakes and corrects himself. He writes // Notes and // TODOs in his code to follow up on, as if he were working with a team. Casey interacts with his audience at the end of every stream and is not shy about either dismissing their questions or embracing them. Casey is becoming a better, more knowledgeable programmer before our eyes, and we’re helping him while he is helping us.

Casey is not cool or suave on camera. He swigs almond milk and walks away off screen to get stuff during the stream. But nothing about Handmade Hero would be substantially improved if Casey hired a professional video production team. In point of fact, any move away from his amateur production values would be met with suspicion by his audience. Any inorganic product placement would fail. Dell, Microsoft, and Google should support him but stay the heck away, lest they burst the bubble of pure peer-to-peer show-and-tell that surrounds Casey.

I have 249 videos to go (and Casey has not stopped making videos)! I still don’t know if he delivers on his promise and creates an actual video game from scratch. (Please! No spoilers!) But I already know far more than I did about real-world game development, where the gritty reality of incompatible file systems and operating platform nuances makes Object Oriented Programming and interpreted bytecode luxuries a working developer can’t afford.