Emoji Tac Toe Open Sourced

Happy Father’s Day!

 

To celebrate my 28th Father’s Day I’ve open sourced Emoji Tac Toe. It’s actually not a big deal to anyone but me. It’s kind of scary open sourcing code that you wrote alone and without first cleaning it up. But what the heck. If someone can learn something from this code, why keep it locked away? It’s already been on GitHub for a year. It’s not getting any prettier under lock and key.

You can find the source code at github.com/jpavley/Emoji-Tac-Toe2. And you can download the iOS app on the App Store at John Pavley > Emoji Tac Toe.

You can play Emoji Tac Toe on your iPhone, your iPad, and your Apple Watch. (As long as you are running iOS 9.3 or later.)

I guess I should chat a little bit about the code just in case you want to take a peek.

First

I plan on refactoring the code quite a bit, so that the core is separate from the iOS implementation and I can port it easily to the web and to Android. Maybe Windows too. Who knows! I’m going to start this process by adding unit tests and then by tearing it apart.

Second

I plan on updating the code for iOS 11, including Swift 4 and ARKit. I’ve been meaning to add multiplayer over Bluetooth and MessageKit capabilities. I also want to complete the tvOS and macOS implementations.

Third

The core code lives in the EmojiTicTacToe.swift file. Since there are more emoji than I can count, I have cherry-picked the 1100 that I wanted to include. This is still too many, and I should cut it down further: with that many, choosing which emoji to play with is difficult. I can’t use Apple’s keyboard user interface because I can’t restrict it to just showing emoji. And I don’t want to waste my time recreating Apple’s design. Also, this game is not about typing anything, so a keyboard doesn’t make sense.

Instead I create an array of emoji and it works very well. iOS is great at dealing with Unicode.

Tic Tac Toe is an ancient and simple game. There are only eight winning vectors, so it’s easy to brute force: just check any board against those eight vectors.
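The shipping code is a bit more involved, but the idea looks something like this (a sketch, with the board as a flat array of nine optional characters):

```swift
// The eight winning vectors: three rows, three columns, two diagonals.
let winningVectors: [[Int]] = [
    [0, 1, 2], [3, 4, 5], [6, 7, 8],   // rows
    [0, 3, 6], [1, 4, 7], [2, 5, 8],   // columns
    [0, 4, 8], [2, 4, 6]               // diagonals
]

/// Returns the vector `mark` has filled, or nil if `mark` hasn't won yet.
func winningVector(for mark: Character, on board: [Character?]) -> [Int]? {
    for vector in winningVectors {
        if !vector.contains(where: { board[$0] != mark }) {
            return vector
        }
    }
    return nil
}
```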

As emoji are text, it’s simple to translate a game board into a string and back. Interoperability with messaging and tweeting is free. This is why I love emoji! Rich graphics without the cost of image file management. One day, when operating systems allow custom emoji, we’ll stop using PNGs and JPEGs altogether. On that day the web will be faster and safer than ever!
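The round trip can be as small as this (a sketch; the placeholder for an untouched cell is my choice here, not necessarily what the app uses):

```swift
// Serialize a board to a tweetable string and back.
let noneMark: Character = "⬜️"   // placeholder for an untouched cell

func boardToString(_ board: [Character?]) -> String {
    return String(board.map { $0 ?? noneMark })
}

func stringToBoard(_ text: String) -> [Character?] {
    return text.map { $0 == noneMark ? nil : Optional($0) }
}
```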

Given the simplicity of the game, my AI is equally simple. When it’s the AI’s turn, I look for an open cell, a blocking move, or a winning move, using the eight winning vectors as my guide. Because tic tac toe is so easy, I add a bit of random error into the AI’s thinking to prevent absolute boredom, so that if the player is paying attention she can beat the machine.
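A sketch of that turn logic (the helper names, the check order, and the 20% mistake rate here are illustrative, not the shipping code):

```swift
import Foundation

// `winningVectors` is the table of index triples from the earlier sketch.

/// Finds a cell that completes a vector already holding two of `mark` and one open cell.
/// Checking the AI's own mark finds a winning move; checking the player's finds a block.
func completingMove(for mark: Character, on board: [Character?]) -> Int? {
    for vector in winningVectors {
        let taken = vector.filter { board[$0] == mark }
        let open = vector.filter { board[$0] == nil }
        if taken.count == 2 && open.count == 1 {
            return open[0]
        }
    }
    return nil
}

/// Picks the AI's next cell: win if possible, block if necessary, otherwise any open cell.
/// A roughly 20% chance of "forgetting" the smart move keeps the game beatable.
func aiMove(on board: [Character?], aiMark: Character, playerMark: Character) -> Int? {
    let openCells = (0..<board.count).filter { board[$0] == nil }
    guard !openCells.isEmpty else { return nil }

    let makesMistake = arc4random_uniform(10) < 2
    if !makesMistake {
        if let winning = completingMove(for: aiMark, on: board) { return winning }
        if let blocking = completingMove(for: playerMark, on: board) { return blocking }
    }
    return openCells[Int(arc4random_uniform(UInt32(openCells.count)))]
}
```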

Fourth

ViewController.swift contains iPhone/iPad specific code.

I found I needed some iPad specific code to avoid a crash when presenting Apple’s standard share UIActivityViewController. I did not open a radar.

I handle several gestures that I’m sure my players never discover, but they are there nonetheless:

  • A long press on an emoji can trigger an attack if battle mode is enabled. A few emoji will do cool tricks in battle mode. There are several battle mode strategy functions that implement these tricks. My favorite is youWin which lets the other player win.
  • Panning up and down turns sounds on and off. That should be a standard gesture for all games!
  • A shake starts a new game with a random pair of emoji. This is the best way to start a new game as choosing particular emoji is a pain. (There’s a sketch of the shake and pan handling right after this list.)
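Roughly how the shake and pan gestures hook in (a sketch; startNewGame(with:), randomEmojiPair(), and setSound(enabled:) are stand-ins for the real game code):

```swift
import UIKit

class GameViewController: UIViewController {

    // The controller must be first responder to receive motion (shake) events.
    override var canBecomeFirstResponder: Bool { return true }

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        view.addGestureRecognizer(pan)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        becomeFirstResponder()
    }

    // Shake starts a new game with a random pair of emoji.
    override func motionEnded(_ motion: UIEventSubtype, with event: UIEvent?) {
        if motion == .motionShake {
            startNewGame(with: randomEmojiPair())
        }
    }

    // Pan up turns sound on, pan down turns it off.
    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard gesture.state == .ended else { return }
        setSound(enabled: gesture.velocity(in: view).y < 0)
    }

    // Stand-ins for the real implementation.
    func startNewGame(with pair: (Character, Character)) { /* reset the board */ }
    func randomEmojiPair() -> (Character, Character) { return ("🐱", "🐭") }
    func setSound(enabled: Bool) { /* toggle audio */ }
}
```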

Fifth

NewGameViewController.swift contains the code for the game settings on the iPhone/iPad.

Originally, I had the iPhone and Watch Extension collaborate so that one could control the other. But the effort was not worth the reward. Now the two versions are completely independent.

I use a UIPickerView with two components to enable the player to choose two emoji. It wouldn’t be bad at all if there were only 20 or 30 emoji. But it’s just too much spinning to find a particular emoji out of 1100!

If the user tries to choose the same emoji for player 1 and player 2 (or the AI) I detect that and have the UIPickerView jump to the next emoji. See ensureRowsAreUnique(component:row:).
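The idea behind ensureRowsAreUnique(component:row:) is roughly this (a sketch; emojiPicker and emojis stand in for the real outlet and data source):

```swift
// If both components land on the same emoji, nudge the other component
// forward one row so the two players never share a mark.
func ensureRowsAreUnique(component: Int, row: Int) {
    let otherComponent = component == 0 ? 1 : 0
    if emojiPicker.selectedRow(inComponent: otherComponent) == row {
        let nextRow = (row + 1) % emojis.count
        emojiPicker.selectRow(nextRow, inComponent: otherComponent, animated: true)
    }
}
```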

To make finding a particular emoji a bit easier I allow the player to jump over groups of emoji in the UIPickerView by tapping on the labels for each player. I’m guessing nobody would ever find this feature, but the labels are colored blue to hint that they are buttons.

Sixth

InterfaceController.swift contains the code for a very simple version of Emoji Tac Toe that runs on watchOS. I actually like this version of the game best. No battle mode, no sound, no popovers, no choice of emoji. Just a single player game you tap out on your watch while waiting for the train.

Programming the UI for watchOS reminded me of my Visual Basic days! Each button view has its own handler function. No way to aggregate the touches and dispatch them with a switch statement!
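The pattern ends up looking something like this (a sketch with hypothetical names, not the real InterfaceController):

```swift
import WatchKit

class GameInterfaceController: WKInterfaceController {

    // One IBAction per button, each forwarding its cell index to one shared handler.
    @IBAction func cell0Tapped() { cellTapped(at: 0) }
    @IBAction func cell1Tapped() { cellTapped(at: 1) }
    @IBAction func cell2Tapped() { cellTapped(at: 2) }
    // ...and so on for the remaining six cells.

    func cellTapped(at index: Int) {
        // shared game logic for a tap on cell `index`
    }
}
```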

Final Notes

All in all, this code is pretty rough and needs a lot of work. But it does work and hardly ever crashes. So that’s something. There is a half-finished tvOS implementation, but I’m going to rethink it so don’t look at it!

I had to delete the sound effect that I didn’t create myself. Your build of Emoji Tac Toe will not sound like mine. But otherwise you are free, within the MIT License constraints, to do what you like with the code.

Eternity versus Infinity

I just completed reading, at long last, Isaac Asimov’s The End of Eternity. Like many of his novels, EoE is a morality play, an explanation, a whodunit, and a bit of a prank. The hero, Andrew Harlan, is a repressed buffoon at the mercy of various sinister forces. Eventually Harlan finds his way to a truth he doesn’t want to accept. In EoE Asimov plays with time travel in terms of probabilities. This mathematical exploration of time travel resolves many of the cliché paradoxes that scifi usually twists itself into. Go back in time and prevent your mother from meeting your father, and what you have done is not suicide. You have simply reduced the probability of your future existence.

In EoE Asimov considers two competing desires in human culture: The urge to keep things the same forever and the urge to expand and explore. Asimov distills these urges into the Eternals, who fight what they think of as dangerous change by altering time, and the Infinites, who sabotage the Eternals because they believe “Any system… which allows men to choose their own future, will end by choosing safety and mediocrity…”

In one masterful stroke Asimov explains why we haven’t invented time travel. If we did, we’d kill baby Hitler! But then we’d work on eliminating all risks! Eventually we’d trap ourselves on planet Earth and die out slowly and alone when our single world gets hit by a comet or our Sun goes nova. In EoE, Asimov has a force of undercover Infinites working tirelessly to keep the probability of time travel at a near zero value. This way humanity continues to take risks, eventually discovers space flight, and avoids extinction by populating the galaxy.

You’re probably not going to read EoE. It’s a bit dry for the 21st century. There are no superheroes, dragons, or explicit sex. While there is a strong female character, she spends most of her time out of sight and playing dumb. EoE is a product of the 1950s. Yet for a book where a computer is called a “computaplex” and the people who use them are confusingly called “computers,” EoE’s underlying message and themes apply very closely to our current age.

In our time, we have the science and technology to move forward by leaps and bounds toward an unimaginable infinite, and we’re rapidly doing so, except when we elect leaders who promise to return us to the past and we follow creeds that preach intolerance of science. I’ve read blog posts and op-eds that claim we can’t roll back the future. But we seem to be working mightily to pause progress. Just like the Eternals in EoE, many of us are concerned about protecting the present from the future. Teaching Creationism alongside Evolution, legislating Uber and AirBnB out of existence, and keeping Americans in low-value manufacturing jobs are just a few examples of acting like Asimov’s Eternals and avoiding the risks of technological progress at all costs.

I get it! I know that technological advancement has many sharp edges and unexpected consequences. Improve agriculture with artificial ingredients and create an obesity epidemic. Improve communication with social media and create a fake news epidemic. People are suffering and will continue to suffer as software eats the world and robots sweep up the crumbs.

But what Asimov teaches us, in a book written more than 60 years ago, is that if we succeed in staying homogeneous-cultured, English-speaking, tradition-bound, God-fearing, binary-gendered, unvaccinated, and non-GMO we’re just getting ready to die out. When the next dinosaur-killer comet strikes, we will be stuck in our Garden of Eden as it goes up in flames. As Asimov admits, it might take thousands of years for humanity to die out in our self-imposed dark ages, but an expiration date means oblivion regardless of how far out it is.

Asimov shows us in EoE, and in the rest of his works as well, that there is a huge payoff for the pain of innovation and progress. We get to discover. We get to explore. We get to survive.

Let’s face it. We don’t need genetic code editors and virtual reality. We don’t need algorithms and the Internet of Things. Many of us will never be comfortable with these tools and changes. Many of us long for the days when men were men, women stayed out of the way, and jobs lasted for a lifetime. This is not a new phenomenon: The urge to return to an earlier golden age has been around since Socrates complained that writing words down would destroy the art of conversation.

At the moment, it feels like the ideals of the Eternals are trumping the ideals of the Infinites. While a slim minority of entrepreneurs tries to put the infinity of space travel and the technological singularity within our reach, a majority of populist politicians are using every trick in the mass communications book to prevent the future from happening. We have our own versions of Asimov’s Eternals and Infinites today. You know their names.

Like Asimov, I worry about the far future. We’re just a couple of over-reactions to a couple of technological advances away from scheduling the next dark ages. That’s not a good idea. The last dark ages nearly wiped Europe off the face of the earth when the Black Plague hit. Humanity might not survive the next world crisis if our collective hands are too fearful of high-tech to use it.

At the end of EoE Harlan figures out that, spoiler alert, taking big risks is a good idea. Harlan chooses the Infinites over the Eternals. I’d like us to consider following in Harlan’s footsteps. We can’t eliminate all technological risks! Heck, we can’t even eliminate most risks in general! But we can embrace technological progress and raise the probability of our survival as a species.

Notes on NSUserDefaults

You can set and get NSUserDefaults values from any view controller and the app delegate, so they are a great way to pass data around the various parts of your iOS app.

Note: NSUserDefaults don’t cross the iOS/watchOS boundary. iOS and watchOS apps each have their own set of user defaults.

In the example below you have a class Bool property that you want to track between user sessions.
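Something like this (Swift 2 era syntax; the view controller and switch action names are just placeholders):

```swift
import UIKit

class SettingsViewController: UIViewController {

    // Data model for the switch; "savedShowAll" is the key for the stored value.
    var showAll = true

    override func viewDidLoad() {
        super.viewDidLoad()
        // The stored value might not exist yet, so use the if let idiom.
        if let savedShowAll = NSUserDefaults.standardUserDefaults().objectForKey("savedShowAll") as? Bool {
            showAll = savedShowAll
        }
    }

    @IBAction func showAllSwitchChanged(sender: UISwitch) {
        showAll = sender.on
        // Apparently setObject() never fails! 😀
        NSUserDefaults.standardUserDefaults().setObject(showAll, forKey: "savedShowAll")
    }
}
```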

In the code above…
– The var showAll is the data model for a switch object value
– The string savedShowAll is the key for the stored value
– Use NSUserDefaults.standardUserDefaults().objectForKey() to access a stored value
– Use the if let idiom as the stored value might not exist
– Use NSUserDefaults.standardUserDefaults().setObject() to save the value
– Apparently setObject() never fails! 😀

Faceless Phone

About twelve years ago I attended a management leadership training offsite and received a heavy glass souvenir. When I got home after the event I put that thingamabob, which officially is called a “tombstone”, up on a shelf above my desk. Little did I know that after more than a decade of inert inactivity that souvenir would launch me into the far future of the Internet of Things with an unexpected thud.

Last night before bed I set my iPhone 6 Plus down on my desk and plugged it in for charging. Then I reached up to the shelf above to get something for my son and BANG! The tombstone leapt off the shelf and landed on my desk. It promptly broke in half and smashed the screen of my iPhone. In retrospect I see now that storing heavy objects above one’s desk is baiting fate, and every so often fate takes the bait.

I’ve seen many people running around the streets of Manhattan with cracked screens. My screen was not just cracked. It was, as the kids say, a crime scene. I knew that procrastination was not an option. This phone’s face was in ruins and I needed to get it fixed immediately.

No problem! There are several wonderful Apple Stores near me and I might even have the phone covered under Apple Care. Wait! There was a problem! I had several appointments in the morning and I wasn’t getting to any Apple Stores until late afternoon.

Why was this a big deal? Have you tried to navigate the modern world without your smart phone lately? No music, no maps, no text messages! Off the grid doesn’t begin to cover it! My faceless phone was about to subject me to hours of isolation, boredom, and disorientation!

Yes, I know, a definitive first world problem. Heck! I lived a good 20 years before smart phones became a thing. I could handle a few hours without podcasts, Facebook posts, and Pokemon Go.

In the morning I girded my loins, which is what one does when one’s iPhone is smashed. I strapped on my Apple Watch and sat down at my desk for a few hours of work-related phone calls, emails, and chat messages.

Much to my surprise, even though I could not directly access my phone, almost all of its features and services were available. While the phone sat on my desk with a busted screen its inner workings were working just fine. I could make calls and send text messages with my watch, with my iMac, and with voice commands. I didn’t have to touch my phone to use it! I could even play music via the watch and listen via Bluetooth headphones. I was not cut off from the world!

(Why do these smart phones have screens anyway?)

Around lunch time I had to drive to an appointment and I took the faceless phone with me. I don’t have Apple CarPlay, but my iPhone synced up fine with my Toyota’s entertainment system. Since I don’t look at my phone while driving, the cracked screen was not an issue. It just never dawned on me before today that I don’t have to touch the phone to use it.

I imagine that our next paradigm shift will be like faceless phones embedded everywhere. You’ll have CPUs and cloud access in your wrist watch, easy chair, eye glasses, and shoes. You’ll have CPUs and cloud access in your home, car, office, diner, and shopping mall. You’ll get text messages, snap pictures, reserve dinner tables, and check your calendar without looking at a screen.

Now, we’re not quite there yet. I couldn’t use all the apps on my phone without touching them. In fact I could only use a limited set of the built-in apps and operating system features that Apple provides. I had to do without listening to my audiobook on Audible and I couldn’t catch any Pokemon. Siri and Apple Watch can’t handle those third-party app tasks yet.

But we’re close. This means the recent slowdown in smart phone sales isn’t the herald of hard tech times. It’s just the calm before the gathering storm of the next computer revolution. This time the computer in your pocket will move to the clouds. Apple will be a services company! (Google, Facebook, and Amazon too!) Tech giants will become jewelry, clothing, automobile, and housing companies.

Why will companies like Apple have to stop making phones and start making mundane consumer goods like cufflinks and television sets to shift us into the Internet of Things?

Because smooth, flawless integration will be the new UX. Today user experience is all about a well designed screen. In the IoT world, which I briefly and unexpectedly visited today, there won’t be any user interface to see. Instead the UX will be embedded in the objects we touch, use, and walk through.

There will still be some screens. Just as today we still have desktop computers for those jobs that voice control, eye rotations, and gestures can’t easily do. But the majority of consumers will use apps without icons, listen to playlists without apps, and watch videos without websites.

In the end I did get my iPhone fixed. But I’m going to keep visiting the IoT future now that I know how to find it.

On the Naming of Functions

A thoughtful coder once said that “it’s more important to have well organized code than any code at all.” Actually several leading coders have said this. So I’ll append my name to the end of that long linked list.

I’m trying to develop my own system for naming functions such that it’s relatively obvious what those functions do in a general sense. Apple, Google, Microsoft, and more all have conventions and rules for naming functions. Apple’s conventions are the ones I know best. For some reason Apple finds the word “get” unpleasing while “set” is unavoidable. So you’ll never see getTitle() as an Apple function name but you will see setTitle(). This feels a little odd to me, as title() could be used to set or get a title while getTitle() clearly does one job only. I know that title() without an argument can’t set anything, but I’m ok with the “get” all the same.

So far I’m testing out the following function naming conventions:

  • calcNoun(): dynamically calculates a noun based on the current state of internal properties
  • cleanNoun(): returns a junk-free normalized version of a noun
  • clearNoun(): removes any data from a noun and returns it to its original state
  • createNoun(): statically synthesizes a noun from nothing
  • updateNoun(): updates the data that a noun contains based on the current state of internal properties
  • getNoun(): dynamically gets a noun from an external source like a web server

As you can see I like verbs in front of my nouns. In my little world functions are actions while properties are nouns.

calcNoun(), createNoun(), and getNoun() are all means of generating an object, each with a semantic signal about the process of generation.

cleanNoun() returns a scrubbed version of an object as a value. This is really best for Strings and Numbers which tend to accumulate whitespace and other gunk from the Internet and user input.

clearNoun() and updateNoun() are both means of populating the data that an object contains, and each signals the end state of that process. (Maybe I should have one update function and pass in “clear” data, but many times clearing is substantially different from updating.)
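Here’s a quick sketch of these conventions applied to a toy score keeper (a made-up example, just to show the shape of the names):

```swift
import Foundation

class ScoreKeeper {
    var wins = 0
    var losses = 0
    var title = "  Emoji Tac Toe \n"

    // createNoun(): statically synthesizes an object from nothing.
    static func createScoreKeeper() -> ScoreKeeper {
        return ScoreKeeper()
    }

    // calcNoun(): dynamically calculates a value from internal properties.
    func calcScore() -> Int {
        return wins * 10 - losses
    }

    // cleanNoun(): returns a junk-free, normalized version of a value.
    func cleanTitle() -> String {
        return title.trimmingCharacters(in: .whitespacesAndNewlines)
    }

    // clearNoun(): returns the data to its original state.
    func clearScore() {
        wins = 0
        losses = 0
    }

    // updateNoun(): refreshes contained data from internal properties.
    func updateTitle() {
        title = "Emoji Tac Toe (\(calcScore()))"
    }

    // getNoun(): would fetch from an external source like a web server.
}
```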

I hope this helps my code stay organized without wasting my time trying to map the purpose of a function to my verb-noun conventions!

Code Markup in Xcode


I’m working on a fairly large Swift project. Actually no, that’s not quite true. I’m working on a Swift project with a ViewController file that is getting disorganized and out of control. If this keeps up I might have a large project on my hands but right now it’s just a single file that is getting larger than I would like.

Apple provides some quick and dirty tools that make it easy to navigate a single file with specially formatted comments in your code. This functionality doesn’t provide automated documentation like Headerdoc. And that’s fine with me. I like how Headerdoc has become a mash up of Markdown and JavaDoc. My code is just not stable enough for documenting yet.

Happily Xcode’s built-in special comment parser is enough in the early stages of development to help me navigate a large file and remember where the bodies are buried.

Xcode supports the following out of the box:

  • MARK: (your text here)
  • MARK: - (section divider)
  • ???: Question
  • !!!: Warning
  • TODO: Task
  • FIXME: Bug

Xcode’s special comments mark up the function navigation pop-up menu so that you can find your questions, warnings, tasks, and bugs in your code without overtaxing the private neural network in your skull. Unfortunately you can’t add new special comments and they don’t show up in the Symbol Navigator.

(Using the MARK: comment you can simulate adding your own special comments. MARK: doesn’t add the word MARK: in front of navigation items in the way that the other special comments do (TODO, FIXME, etc.). So you can use MARK: NOTE to navigate to notes in your Swift code if that makes you happy.)

I use the following additional special comments to keep my code organized and consistent. (Xcode will just ignore them unless I prefix each with MARK:)

  • NOTE: (when the function name is not enough)
  • HINT: (a non-obvious reminder about a bit of code)
  • DBUG: (end of line comment marking code that probably should be removed eventually)
  • DEMO: (example usage)
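A marked-up stretch of code ends up looking something like this (the comments are illustrative, not from the real project):

```swift
// MARK: - New Game

// MARK: NOTE: the shake gesture also starts a new game
// MARK: HINT: the board is a flat array of nine cells
// TODO: add unit tests for the winning vectors
// FIXME: the picker can briefly show the same emoji for both players
// ???: should battle mode be on by default?

func startNewGame() {
    let pair: (Character, Character) = ("🐱", "🐭") // DBUG: hard-coded pair while testing
    print("new game:", pair)
}
```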

It would be nice if Apple allowed us to personalize code markup in Xcode. But only after search and ranking in the App Store are fixed and a thousand other higher priorities are done!

The Quiet Car

I ride a commuter train to and from work every day and occasionally I accidentally, regrettably, end up sitting in the quiet car.

If you’re not a commuter you might be unacquainted with the idea of a quiet car. It is what it says it is: a train car where you are supposed to be quiet. No talking. No phone ringing. No music leaking out of your headphones. I call it the train car of silent tension.

A few years ago NJ Transit declared the first and last cars of all morning and evening commuter trains to be quiet cars. They had little signs printed up that read “Quiet Commute” with the “mute” in “commute” highlighted.

I don’t think NJ Transit invented the idea of the quiet car. But their conductors and passengers, well some of them, love to enforce it. Violate the rules in the quiet car and several self-appointed quiet car monitors will put you in your place with a tone of voice so sternly condescending that your Victorian great-grandmother would be right at home.

My problem with the quiet car is that somebody always breaks the rules and gets scolded. And I’m just not the sort of guy who enjoys the sight of one human being being a righteous jerk to another human being. The quiet car is the only place I’ve ever been where it’s ok for adults to act like conceited little kindergarteners.

I can’t concentrate or relax in the quiet car because I’m just waiting for some poor oblivious victim to innocently answer a call, make a comment to a friend, or forget to turn the volume down on their phone.

I think people ride the quiet car not for the quiet but for the chance to rebuke the guilty who transgress the sacred decree of the car of silence. “Thou hast made a peep and thou shalt be most vigorously censured!”

I only ride the quiet car when I have no choice, when the rest of the train is full, when I find myself in not so quiet desperation for a seat.

I’d like to observe that quiet cars were probably a great idea in the 1950s or 60s. But now we have inexpensive headphones. Instead of making everyone uncomfortable you can just pop a pair of headphones on your cranky victorian-minded gray haired noggin and listen to soothing national anthems or the sounds of suburban lawns growing. With the marvelous invention of headphones you can allow the rest of us to catch up with a friend, take an important call, or just take a nap without having to fear a sudden outburst of “Sir! Sir! Miss! Miss! This is the QUIET CAR! You can’t talk here! No Talking!”

By the way, I just want to point out that the quiet car is not only elitist but kind of classist and racist as well. Almost always the rule breaker is Italian or from a non-Waspy culture where talking is what you do when you are sitting next to a friend or family member. But in the quiet car the uptight, my-ancestors-are-better-than-your-ancestors people rule.

If we must have a quiet car, and it seems they are not going away, then I must insist that we have a shouting car. It’s only fair. In the shouting car people can let out all that tension built up from riding in the quiet car and even TYPE IN ALL CAPS while texting.

 

C Plus Minus

While consuming Handmade Hero and coding furiously to keep up with Casey Muratori I discovered the joy of programming in a language that I deeply understand. This is not one of those new trendy programming languages that tries to be type-safe without explicit types or functional without being confusing. And yet all the new hot/cool programming languages are based on this ur-language. Swift, TypeScript, Go, C++14, and Java 8 are all “C-like” languages and the original “C-like” language is a lingo that we used to call C+- (C Plus Minus).

I probably like C because it was the first non-toy programming language that I used to program a real personal computer. In the late 1980s all the home computers came with BASIC (which is best SHOUTED in CAPS). But once I got a true personal computer, a Macintosh 512Ke, that could run real applications I had to buy a real programming language to write those real applications. For a couple of months that real language was Pascal… but C rapidly took over. By the time I got to Apple in the early 1990s C++ was about to push C out of the way as the hot new programmer’s tool.

We have this same problem today. There is always another more productive, safer, more readable programming language around the corner. If you code on the backend for a living you’re probably thinking about Go or Rust. If you code on the front side you’re ditching CoffeeScript for TypeScript or just sticking with JavaScript until the next version, ECMAScript 6, shows up in your minimum target browser.

But I’ve been traveling back in time and happily coding away with access to pointers and pointer arithmetic, pound defines, and user designed types. It’s not plain vanilla C because, like Casey, I’m compiling my code with a modern C++ compiler. I’m just not using 90% of C++’s features. Back in the 1980s/90s we called this language C+-. Back then only some of the C++ standard had been implemented in our compilers. We had classes but not multiple inheritance. (Later we learned that multiple inheritance was bad or at least poor taste so not having access to it was ok.) We only had public and private members. (Protected members aren’t actually useful unless you’re working on a big team or writing a framework. We were writing small apps in small teams.) We had to allocate memory on the heap and dispose of it. So we allocated most of what we needed up front and sub-allocated it. We didn’t have garbage collection, we didn’t even know about garbage collection, so we couldn’t feel bad. We felt powerful.

Now that I’ve been writing in C+- for a few weeks I feel like Superman. Or maybe Batman. Your pick. I have just a few tools in my tool belt but I know how to use them. In the modern world Swift 3.0 is thinking of getting rid of the ++ operator and the for(;;){} loop. I use those language features every day, usually together: for(i = 0; i < count; i++) {}. I am told these things are ugly. They seem like familiar old friends to me!

One thing I really like is that I can access a value and increment a pointer with one pretty little expression: *pointer++. I like thinking in bytes and bits and memory addresses. And I like how fast my little programs run and how small their file sizes are.

I know I should not like all these things. Raw access to memory is dangerous. &-ing and |-ing bits is probably dangerous too. My state is not safely closured and side-effects abound. But modern C++ compilers and tools like GCC and Clang do a pretty good job of catching memory access errors these days. It was much more dangerous back in 1986 when I first started.

Maybe I’m just nostalgic. But while you are learning Swift or TypeScript to write web and mobile apps the operating system your computer runs (Mac OS X, Windows, Linux) was written in C+-. The web browser (Safari, Firefox, or Chrome) that renders your HTML, CSS, and JS was written in C+-. That awesome AAA game and Node.JS were written in C+-. (Some parts C, some parts C++ and some parts Assembly as needed.)

C+- is the Fight Club of computer languages: Nobody talks about it, it doesn’t have official status, and groups of self organizing coders beat each other up with it every day.

Binge Watching Handmade Hero


For the last several weeks I’ve been obsessed with one TV show. It’s changed my viewing habits, my buying habits, and my computing habits. Technically it’s not even a “TV show” (if your definition of that term doesn’t include content created by non-professionals that is only available for free over the Internet).

But for me, a more or less typical Gen-Xer, Handmade Hero by game tool developer Casey Muratori has me totally enthralled as only must see TV can enthrall. I’m hooked and I simply must watch all 256+ episodes of Handmade Hero before I die (in about 1,406 Saturdays according to the How Many Saturdays app).

So first off let me explain a few things. Unless you are an aspiring retro game programmer or aging C/C++ programmer Handmade Hero will seem tedious at best and irrelevant at worst. There are much better and more modern ways to make a video game (like SpriteKit on iOS or Unity on any OS) but Casey promises to demonstrate live on Twitch.TV how to write a complete video game from scratch, without modern frameworks, that will run on almost anything with a CPU. He’s starting with Windows but promises Mac OS X, Linux, and Raspberry Pi.

This is a bold promise! When I first heard of Handmade Hero, almost two years ago, I ignored it. I didn’t know who Casey Muratori was and the Internet is littered with hundreds of these solo projects that tend to fizzle out like ignobly failed Kickstarter projects.

But a comment on Hacker News caught my eye about a month ago. Casey had delivered hundreds of hours of live coding with explanations of arcane C, Windows, and video programming techniques! It’s all archived on YouTube and he’s still streaming almost every night! Awesomesauce!

So I had to check it out. I started with Casey’s first video, Intro to C on Windows, and ate it up. I had to pound through the rest of that week’s archive. Because I have a family and a very demanding job and kids and cats I had to purchase a subscription to YouTube Red so that I could watch Casey’s videos on or offline. Google is getting 10 bucks a month off me because of Casey!

My keyboarding fingers ached to follow along coding as Casey coded. I used to be a C/C++ programmer. I used to do pointer arithmetic and #DEFINEs and even Win32 development! Could I too write a video game from scratch with no frameworks? I had to buy a Windows laptop and find out! Thus Dell got me to buy a refurbished XPS 13 because of Casey!

Even Microsoft benefited. I subscribed to Office 365 for OneDrive so I could easily backup my files and use the Office apps since I’m keeping my MacBook Pro at the office these days. I have discovered that a Windows PC does almost everything a MacBook does because of Casey!

I usually have less than an hour a day to watch TV so I’ve had to optimize my entertainment and computing environment around Handmade Hero because at this rate I will never catch up to the live stream! But I’m having a blast and learning deep insights from a journeyman coder.

What could an old school game coder teach an old battle-scarred industry vet like me? More than I could have imagined.

First of all, Casey is an opinionated software developer with a narrow focus and an idiosyncratic coding style. He is not wasting his time following the endless trends of modern coding. He is not worried about which new JavaScript dialect he is going to master this month or which new isomorphic web framework he is going to wrestle with. He codes in C with some C++ extensions, he uses Emacs as his editor, he builds with batch files, and he debugs with Visual Studio. While these tools have changed over the years Casey has not. He is nothing if not focused.

Thus Casey is a master of extemporaneous coding while explaining, the kind of performance every software engineer fears during Google and Facebook interviews. This means Casey has his coding skills down cold. He is unflappable.

Casey doesn’t know everything and his technique for searching MSDN while writing code shows how fancy IDEs with auto-completion are actually bad for us developers. He uses the Internet (and Google search) not as a crutch to copy and paste code but as a tool to dig deep into how APIs and compilers actually work. There seems to be nothing Casey can’t code himself.

Casey makes mistakes and corrects himself. He writes // Notes and // TODOs in his code to follow up on, as if he were working with a team. Casey interacts with his audience at the end of every stream and is not shy about either dismissing their questions or embracing them. Casey is becoming a better, more knowledgeable programmer before our eyes and we’re helping him while he is helping us.

Casey is not cool or suave on camera. He swigs almond milk and walks away off screen to get stuff during the stream. But nothing about Handmade Hero would be substantially improved if Casey hired a professional video production team. In point of fact, any move away from his amateur production values would be met with suspicion from his audience. Any inorganic product placement would fail. Dell, Microsoft, and Google should support him but stay the heck away lest they burst the bubble of pure peer-to-peer show-and-tell that surrounds Casey.

I have 249 videos to go (and Casey has not stopped making videos)! I still don’t know if he delivers on his promise and creates an actual video game from scratch. (Please! No spoilers!) But I already know far more than I did about real-world game development, where the gritty reality of incompatible file systems and operating platform nuances make Object Oriented Programming and interpreted bytecode luxuries a working developer can’t afford.