JavaScript, Swift, and Kotlin Oh My!

 

Apple II User's Guide book cover

In 1981 I cracked open my first real book on computer programming: Apple II User’s Guide by Lon Poole with Martin McNiff and Steven Cook. I still have it sitting on my bookshelf 36 years later. Before the Apple II User’s Guide I was playing around with typing in game and program code from hobbyist magazines like Compute!

But now I felt ready to write an original program. I had no education in Computer Science, and I didn’t own an Apple II computer. But armed with the information in the Apple II User’s Guide I knew I was going to create an original program. What that program would do was not as important to me as the process of actually doing it. After several decades in software engineering, I still feel the same way. I don’t care much what my programs do (now we call them apps or services). I care very much how they are built and the processes and tools by which they are built.

Two of the chapters in the Apple II User’s Guide are about the computer itself and how to use it. The other six chapters and appendices A through L are all about programming. Which, at the time, made a lot of sense. Unless you had a business of some type, the main purpose of using a general purpose personal computer in the 80’s was programming it. Otherwise it was an overpriced home gaming system.

In 1981, the main, and as far as I knew only, programming language was BASIC. The idea of BASIC is summed up by expanding its acronym: Beginner’s All-purpose Symbolic Instruction Code. A simple, high-level programming language designed for teaching newbies how to code. Unfortunately for me, BASIC didn’t do what it was designed to do very well.

I read the Apple II User’s Guide from cover to cover. I highlighted passages on almost every page. But I never did write that original program for the Apple II.

BASIC in the 1980s was a much simpler and more unforgiving language than the programming languages of today. Some versions of BASIC only supported integers and almost all limited variable names to only two significant characters. Lower-case letters were not supported. Features programmers take for granted today (objects, classes, protocols, constants, and named functions) didn’t exist. The core features BASIC did have were strings, arrays, conditionals, loops, simple IO, and the ability to jump to any line in code by its number.

None of our modern tooling was available on an Apple II back then. Instead of an integrated development environment you had modes: immediate, editing, and running. BASIC programs were written with numbered lines, and you had to plan out the construction of your code so that you left enough room to add lines between groups of statements, or you were constantly renumbering your code. As the Apple II guide notes on page 51, “The simplest way to change a program line is to retype it.”

BASIC programming code

Debugging was particularly hairy. Programmers had only a handful of primitive tools. The TRACE command printed the code as it executed. The DSP command printed a particular variable every time its value changed. Whatever the MON command did, I never could figure out how to work it properly. So like most hobbyist programmers of the day I used print statements littered through my code to check on the state of variables and the order of execution of the subroutines. A simple and reliable technique that works to this day.

Like I said, I got so caught up in the complexity of programming an Apple II in BASIC that I never wrote a significant original program for that machine. (Later I would figure it all out, but for the cheaper home computers of the 80’s with more advanced BASICs like the TI-99/4A and the Commodore 64.)

Looking back on it, without modern programming languages and modern tools and most importantly without the web, YouTube, and Stack Overflow, I honestly don’t know how I learned to program anything. (But I did and where it took me is a story for another time.)

Today we have the opposite problem: Hundreds of programming languages and tools to choose from and hundreds of platforms upon which to program. Apple alone has four operating systems (macOS, iOS, tvOS, and watchOS) and supports three programming languages (C/C++, Objective-C, and Swift). Google has Android, Chrome, and Google Cloud on the OS side and Java, JavaScript, Python, Go, Dart, and now Kotlin on the coding side. Microsoft, Facebook, and Amazon all have clouds, platforms, and a rich set of programming languages.

And then there are hundreds of communities centered around boutique programming languages. My favorites include Elm, Lua, and LISP. (By the way, it was LISP that taught me truly how to program. Learning LISP is the best thing you can do if you don’t have a computer science degree and you want to punch above your weight.)

In 1981 my problem was learning one language on one machine. In 2017 there are so many combinations of programming languages and platforms that it can seem like an O(n!) problem to sort through them all! Most engineers today need to learn JavaScript, HTML, CSS, PHP, and SQL to program on the web, C++, Java, C#, or Go for hardcore backend services, and either Java or Objective-C to create native mobile applications. Plus it’s really important to understand several UNIX commands, a few scripting languages like Python or Perl, and tools like Git, Xcode, Unity, Visual Studio, and Android Studio. At least I seem to need to understand the essentials of all of these in order to tackle the general sorts of programming challenges and opportunities thrown my way.

Yikes! 😣

In the last few years, the major players in the world of technology seem to be converging towards a programming language mean. While BASIC, LISP, and C++ were once very popular and are very different, the newer programming languages seem to be very similar.

JavaScript started this trend by adopting the features of strongly-typed, object-oriented, and functional languages, while keeping its boilerplate-free syntax. JavaScript has become the BASIC of the modern era: easy to learn and widely available. JavaScript used to be a terrible programming language. One of my favorite books, JavaScript: The Good Parts, spends most of its pages on the bad parts. But JavaScript is evolving rapidly with the ECMAScript 2016 standard, dialects like TypeScript, and platforms like Node.js.

Apple and Google seem to be noticing how powerful and yet accessible JavaScript is becoming. Instead of adopting JavaScript for their mobile platforms they are doing something almost as good: Creating and supporting JavaScript-like languages.

Apple started this trend a few years back with its surprise introduction of Swift. At first the Apple programmer community was a bit miffed. After decades of working with Objective-C and its highly idiosyncratic syntax, Apple seemed to be abandoning billions of lines of code for a pretty but slow and immature language that had just sprung into existence unasked for and unneeded.

Except that something better than Objective-C was needed. The bar for programming in Objective-C is very high. And it’s only used in the Apple universe. So it was hard to learn to code iOS apps and hard to find programmers who are experts in iOS development.

Most importantly Apple rapidly evolved Swift, much to the horror of many engineering managers, so that the Swift 3.0 of today is an expressive general purpose programming language and a model for where JavaScript could go.

At Google I/O, just a couple of weeks ago, Google, perhaps out of Apple-envy, surprised its programmer community by announcing “first-class” support of Kotlin. Until that announcement the Android world revolved around Java and C++. While Java and C++ are more mainstream than Objective-C, they still represent a cognitive hurdle for mobile programmers and have created a shortage of Android developers.

Kotlin, one of the many interesting JVM languages, compiles to bytecode and feels much more JavaScripty in its expression. Android programmers were already using Kotlin, but with Google’s official blessing, support in Android Studio and other tools is going to make using Kotlin easier for beginners and experts alike.

So the web, Apple, and Google are converging on programming languages that are similar but not exactly the same. Where are they going?

Here are three bodies of code. Can you spot the TypeScript, Swift, and Kotlin?

A

B

C

While these are not very sophisticated lines of code, they do show how these languages are converging. (A is TypeScript. B is Swift. C is Kotlin.)

In the above example the first line declares and defines a variable and the second line prints it to the console (standard output).

The keyword let means something different in TypeScript and Swift but has the same general sense as the keyword val in Kotlin. Personally, I prefer the way Swift uses let: it declares a variable as a constant and enforces the best practice of immutability. Needless mutability is the source of so many bugs that I really appreciate Xcode yelling at me when I create a mutable variable that never changes. Unfortunately Kotlin uses val instead of let for the same concept. Let in TypeScript is used to express block-scoping (the variable is local to the block in which it is declared). Block-scoping is mostly built into Swift and Kotlin: They don’t need a special keyword for it.
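
For example, here is what Swift’s let gives you, as a two-line sketch (the variable name is just an illustration):

    let greeting = "Hello, World!"  // a constant: assigning to greeting again is a compile-time error
    print(greeting)                 // prints the value to the console (standard output)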

Just from the example above you can already see that jumping between TypeScript (JavaScript), Swift, and Kotlin should be pretty easy for the average programmer and more importantly, code should be pretty sharable between these three languages.

So then why are JavaScript/TypeScript, Swift, and Kotlin so similar?

Because programming as a human activity has matured. We programmers now know what we want:

  • Brevity: Don’t make me type!
  • No boilerplate: Don’t make me repeat myself!
  • Immutability by default and static typing: Help me not make stupid mistakes!
  • Declarative syntax: Let me make objects and data structures in a single line of code!
  • Multiple programming styles including object-oriented and functional: One paradigm doesn’t fit all programming problems.
  • Fast compile and execution: Time is the one resource we can’t renew so let’s not waste it!
  • The ability to share code between the frontend and the backend: Because we live in a semi-connected world!

There you have it. Accidentally, in a random and evolutionary way, the world of programming is getting better and more interoperable, without anyone in charge. I love when that happens.

Eternity versus Infinity

I just completed reading, at long last, Isaac Asimov’s The End of Eternity. Like many of his novels, EoE is a morality play, an explanation, a whodunit, and a bit of a prank. The hero, Andrew Harlan, is a repressed buffoon at the mercy of various sinister forces. Eventually Harlan finds his way to a truth he doesn’t want to accept. In EoE Asimov plays with time travel in terms of probabilities. This mathematical exploration of time travel resolves many of the cliché paradoxes that scifi usually twists itself into. Go back in time and prevent your mother from meeting your father and what you have done is not suicide. You have simply reduced the probability of your future existence.

In EoE Asimov considers two competing desires in human culture: The urge to keep things the same forever and the urge to expand and explore. Asimov distills these urges into the Eternals, who fight what they think of as dangerous change by altering time, and the Infinites, who sabotage the Eternals because they believe “Any system… which allows men to choose their own future, will end by choosing safety and mediocrity…”

In one masterful stroke Asimov explains why we haven’t invented time travel. If we did, we’d kill baby Hitler! But then we’d work on elimination of all risks! Eventually we’d trap ourselves on planet Earth and die out slowly and lonely when our single world gets hit by a comet or our Sun goes nova. In EoE, Asimov has a force of undercover Infinites working tirelessly to keep the probability of time travel to a near zero value. This way humanity continues to take risks, eventually discovers space flight, and avoids extinction by populating the galaxy.

You’re probably not going to read EoE. It’s a bit dry for the 21st century. There are no superheroes, dragons, or explicit sex. While there is a strong female character, she spends most of her time out of sight and playing dumb. EoE is a product of the 1950s. Yet for a book where a computer is called a “computaplex” and the people who use them are confusingly called “computers”, EoE’s underlying message and themes apply very closely to our current age.

In our time, we have the science and technology to move forward by leaps and bounds to an unimaginable infinite–and we’re rapidly doing so except when we elect leaders who promise to return us to the past and we follow creeds that preach intolerance to science. I’ve read blog posts and op-eds that claim we can’t roll back the future. But we seem to be working mightily to pause progress. Just like the Eternals in EoE many of us are concerned about protecting the present from the future. Teaching Creationism alongside Evolution, legislating Uber and AirBnB out of existence, and keeping Americans in low value manufacturing jobs are just a few examples of acting like Asimov’s Eternals and avoiding the risks of technological progress at all costs.

I get it! I know that technological advancement has many sharp edges and unexpected consequences. Improve agriculture with artificial ingredients and create an obesity epidemic. Improve communication with social media and create a fake news epidemic. People are suffering and will continue to suffer as software eats the world and robots sweep up the crumbs.

But what Asimov teaches us, in a book written more than 60 years ago, is that if we succeed in staying homogeneous-cultured, English-speaking, tradition-bound, God-fearing, binary-gendered, unvaccinated, and non-GMO we’re just getting ready to die out. When the next dinosaur-killer comet strikes, we will be stuck in our Garden of Eden as it goes up in flames. As Asimov admits, it might take thousands of years for humanity to die out in our self-imposed dark ages, but an expiration date means oblivion regardless of how far out it is.

Asimov shows us in EoE, and in the rest of his works as well, that there is a huge payoff for the pain of innovation and progress. We get to discover. We get to explore. We get to survive.

Let’s face it. We don’t need genetic code editors and virtual reality. We don’t need algorithms and the Internet of Things. Many of us will never be comfortable with these tools and changes. Many of us long for the days when men were men, women stayed out of the way, and jobs lasted for a lifetime. This is not a new phenomenon: The urge to return to an earlier golden age has been around since Socrates complained that writing words down would destroy the art of conversation.

At the moment, it feels like the ideals of the Eternals are trumping the ideals of the Infinites. While a slim minority of entrepreneurs tries to put the infinity of space travel and the technological singularity within our reach, a majority of populist politicians are using every trick in the mass communications book to prevent the future from happening. We have our own versions of Asimov’s Eternals and Infinites today. You know their names.

Like Asimov, I worry about the far future. We’re just a couple of over-reactions to a couple of technological advances away from scheduling the next dark ages. That’s not a good idea. The last dark ages nearly wiped Europe off the face of the earth when the Black Plague hit. Humanity might not survive the next world crisis if our collective hands are too fearful of high-tech to use it.

At the end of EoE Harlan figures out that, spoiler alert, taking big risks is a good idea. Harlan chooses the Infinites over the Eternals. I’d like us to consider following in Harlan’s footsteps. We can’t eliminate all technological risks! Heck, we can’t even eliminate most risks in general! But we can embrace technological progress and raise the probability of our survival as a species.

Notes on NSUserPreferences

You can set and get NSUserPreferences from any view controller and the app delegate, so they are a great way to pass data around the various parts of your iOS app.

Note: NSUserPreferences don’t cross the iOS/watchOS boundary. iOS and watchOS apps each have their own set of NSUserPreferences.

In the example below you have a class Bool property that you want to track between user sessions.
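
Here is a minimal sketch of that idea in Swift 2-era syntax, using the showAll property, the savedShowAll key, and the NSUserDefaults calls described in the notes below (the view controller class and method names are placeholders):

    import UIKit

    class SettingsViewController: UIViewController {

        // The data model for a switch value, tracked between user sessions
        var showAll = false

        override func viewDidLoad() {
            super.viewDidLoad()
            // The stored value might not exist yet, so use the if let idiom
            if let savedShowAll = NSUserDefaults.standardUserDefaults().objectForKey("savedShowAll") as? Bool {
                showAll = savedShowAll
            }
        }

        // Call this whenever the switch changes so the value survives the session
        func saveShowAll() {
            NSUserDefaults.standardUserDefaults().setObject(showAll, forKey: "savedShowAll")
        }
    }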

In the code above…
– The var showAll is the data model for a switch object value
– The string savedShowAll is the key for the stored value
– Use NSUserDefaults.standardUserDefaults().objectForKey() to access a stored value
– Use the if let idiom as the stored value might not exist
– Use NSUserDefaults.standardUserDefaults().setObject() to save the value
– Apparently setObject() never fails! 😀

Faceless Phone

About twelve years ago I attended a management leadership training offsite and received a heavy glass souvenir. When I got home after the event I put that thingamabob, which officially is called a “tombstone”, up on a shelf above my desk. Little did I know that after more than a decade of inert inactivity that souvenir would launch me into the far future of the Internet of Things with an unexpected thud.

Last night before bed I set my iPhone 6 Plus down on my desk and plugged it in for charging. Then I reached up to the shelf above to get something for my son and BANG! The tombstone leapt off the shelf and landed on my desk. It promptly broke in half and smashed the screen of my iPhone. In retrospect I see now that storing heavy objects above one’s desk is baiting fate, and every so often fate takes the bait.

I’ve seen many people running around the streets of Manhattan with cracked screens. My screen was not just cracked. It was, as the kids say, a crime scene. I knew that procrastination was not an option. This phone’s face was in ruins and I needed to get it fixed immediately.

No problem! There are several wonderful Apple Stores near me and I might even have the phone covered under Apple Care. Wait! There was a problem! I had several appointments in the morning and I wasn’t getting to any Apple Stores until late afternoon.

Why was this a big deal? Have you tried to navigate the modern world without your smart phone lately? No music, no maps, no text messages! Off the grid doesn’t begin to cover it! My faceless phone was about to subject me to hours of isolation, boredom, and disorientation!

Yes, I know, a definitive first world problem. Heck! I lived a good 20 years before smart phones became a thing. I could handle a few hours without podcasts, Facebook posts, and Pokemon Go.

In the morning I girded my loins, which is what one does when one’s iPhone is smashed. I strapped on my Apple Watch and sat down at my desk for a few hours of work-related phone calls, emails, and chat messages.

Much to my surprise, even though I could not directly access my phone, almost all of its features and services were available. While the phone sat on my desk with a busted screen its inner workings were working just fine. I could make calls and send text messages with my watch, with my iMac, and with voice commands. I didn’t have to touch my phone to use it! I could even play music via the watch and listen via Bluetooth headphones. I was not cut off from the world!

(Why do these smart phones have screens anyway?)

Around lunch time I had to drive to an appointment and I took the faceless phone with me. I don’t have Apple CarPlay but my iPhone synched up fine with my Toyota’s entertainment system. Since I don’t look at my phone while driving the cracked screen was not an issue. It just never dawned on me before today that I don’t have to touch the phone to use it.

I imagine that our next paradigm shift will be like faceless phones embedded everywhere. You’ll have CPUs and cloud access in your wrist watch, easy chair, eye glasses, and shoes. You’ll have CPUs and cloud access in your home, car, office, diner, and shopping mall. You’ll get text messages, snap pictures, reserve dinner tables, and check your calendar without looking at a screen.

Now, we’re not quite there yet. I couldn’t use all the apps on my phone without touching them. In fact I could only use a limited set of the built-in apps and operating system features that Apple provides. I had to do without listening to my audiobook on Audible and I couldn’t catch any Pokemon. Siri and Apple Watch can’t handle those third party app tasks yet.

But we’re close. This means the recent slowdown in smartphone sales isn’t the herald of hard tech times. It’s just the calm before the gathering storm of the next computer revolution. This time the computer in your pocket will move to the clouds. Apple will be a services company! (Google, Facebook, and Amazon too!) Tech giants will become jewelry, clothing, automobile, and housing companies.

Why will companies like Apple have to stop making phones and start making mundane consumer goods like cufflinks and television sets to shift us into the Internet of Things?

Because smooth, flawless integration will be the new UX. Today user experience is all about a well designed screen. In the IoT world, which I briefly and unexpectedly visited today, there won’t be any user interface to see. Instead the UX will be embedded in the objects we touch, use, and walk through.

There will still be some screens. Just as today we still have desktop computers for those jobs that voice control, eye rotations, and gestures can’t easily do. But the majority of consumers will use apps without icons, listen to playlists without apps, and watch videos without websites.

In the end I did get my iPhone fixed. But I’m going to keep visiting the IoT future now that I know how to find it.

On the Naming of Functions

A thoughtful coder once said that “it’s more important to have well organized code than any code at all.” Actually several leading coders have said this. So I’ll append my name to the end of that long linked list.

I’m trying to develop my own system for naming functions such that it’s relatively obvious what those functions do in a general sense. Apple, Google, Microsoft and more all have conventions and rules for naming functions. Apple’s conventions are the ones I know the best. For some reason Apple finds the word “get” unpleasing while “set” is unavoidable. So you’ll never see getTitle() as an Apple function name but you will see setTitle(). This feels a little odd to me as title() could be used to set or get a title but getTitle clearly does one job only. I know that title() without an argument can’t set anything but I’m ok with the “set” all the same.

So far I’m testing out the following function naming conventions:

  • calcNoun(): dynamically calculates a noun based on the current state of internal properties
  • cleanNoun(): returns a junk-free normalized version of a noun
  • clearNoun(): removes any data from a noun and returns it to its original state
  • createNoun(): statically synthesizes a noun from nothing
  • updateNoun(): updates the data that a noun contains based on the current state of internal properties
  • getNoun(): dynamically gets a noun from an external source like a web server

As you can see I like verbs in front of my nouns. In my little world functions are actions while properties are nouns.

calcNoun(), createNoun(), and getNoun() are all means of generating an object, each with a semantic signal about the process of generation.

cleanNoun() returns a scrubbed version of an object as a value. This is really best for Strings and Numbers which tend to accumulate whitespace and other gunk from the Internet and user input.

clearNoun() and updateNoun() are both means for populating the data that an object contains, and their names signal the end state of the process. (Maybe I should have one update function and pass in “clear” data, but many times clearing is substantially different from updating.)
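
Here is a quick sketch of how a few of these conventions might play out in Swift (the Profile type and its properties are just illustrations, not code from a real project):

    import Foundation

    struct Profile {
        var name = ""
        var scores: [Int] = []

        // calcNoun(): dynamically calculates a value from the current state of internal properties
        func calcAverageScore() -> Int {
            return scores.isEmpty ? 0 : scores.reduce(0, +) / scores.count
        }

        // cleanNoun(): returns a junk-free, normalized version of a noun
        func cleanName() -> String {
            return name.trimmingCharacters(in: .whitespacesAndNewlines)
        }

        // clearNoun(): removes any data from a noun and returns it to its original state
        mutating func clearScores() {
            scores = []
        }

        // updateNoun(): updates the data a noun contains based on the current state of internal properties
        mutating func updateName() {
            name = cleanName()
        }
    }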

I hope this helps my code stay organized without wasting my time trying to map the purpose of a function to my verb-noun conventions!

Code Markup in Xcode


I’m working on a fairly large Swift project. Actually no, that’s not quite true. I’m working on a Swift project with a ViewController file that is getting disorganized and out of control. If this keeps up I might have a large project on my hands but right now it’s just a single file that is getting larger than I would like.

Apple provides some quick and dirty tools that make it easy to navigate a single file with specially formatted comments in your code. This functionality doesn’t provide automated documentation like Headerdoc. And that’s fine with me. I like how Headerdoc has become a mash up of Markdown and JavaDoc. My code is just not stable enough for documenting yet.

Happily Xcode’s built-in special comment parser is enough in the early stages of development to help me navigate a large file and remember where the bodies are buried.

Xcode supports the following out of the box:

  • MARK: (your text here)
  • MARK: - (section divider)
  • ???: Question
  • !!!: Warning
  • TODO: Task
  • FIXME: Bug

Xcode’s special comments mark up the function navigation pop-up menu so that you can find your questions, warnings, tasks, and bugs in your code without overtaxing the private neural network in your skull. Unfortunately you can’t add new special comments and they don’t show up in the Symbol Navigator.

(Using the MARK: comment you can simulate adding your own special comments. MARK: doesn’t add the word MARK: in front of navigation items in the way that the other special comments do (TODO, FIXME, etc.). So you can use MARK: NOTE to navigate to notes in your Swift code if that makes you happy.)

I use the following additional special comments to keep my code organized and consistent. (Xcode will just ignore them unless I prefix each with MARK:)

  • NOTE: (when the function name is not enough)
  • HINT: (a non-obvious reminder about a bit of code)
  • DBUG: (end of line comment marking code that probably should be removed eventually)
  • DEMO: (example usage)
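
Put together, a marked-up Swift file might look something like this (the class, functions, and comment text are placeholders):

    import UIKit

    class ListViewController: UIViewController {

        // MARK: - View Lifecycle

        // TODO: Move the one-time setup into its own helper
        override func viewDidLoad() {
            super.viewDidLoad()
            refreshList() // DBUG: remove this eager refresh eventually
        }

        // MARK: - Data

        // FIXME: Handles an empty list poorly
        // ???: Should this work happen on a background queue?
        // MARK: HINT The server caps responses at 100 items
        func refreshList() {
            // MARK: NOTE Falls back to cached data when the network is down
        }
    }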

It would be nice if Apple allowed us to personalize code markup in Xcode. But only after search and ranking in the App Store are fixed and a thousand other higher priorities are done!

The Quiet Car

I ride a commuter train to and from work every day and occasionally I accidentally, regrettably, end up sitting in the quiet car.

If you’re not a commuter you might be unacquainted with the idea of a quiet car. It is what it says it is: a train car where you are supposed to be quiet. No talking. No phone ringing. No music leaking out of your headphones. I call it the train car of silent tension.

A few years ago NJ Transit declared the first and last cars of all morning and evening commuter trains to be quiet cars. They had little signs printed up that read “Quiet Commute” with the “mute” in “commute” highlighted.

I don’t think NJ Transit invented the idea of the quiet car. But their conductors and passengers, well some of them, love to enforce it. Violate the rules in the quiet car and several self-appointed quiet car monitors will put you in your place with a tone of voice so sternly condescending that your Victorian great-grandmother would be right at home.

My problem with the quiet car is that somebody always breaks the rules and gets scolded. And I’m just not the sort of guy who enjoys the sight of one human being being a righteous jerk to another human being. The quiet car is the only place I’ve ever been where it’s ok for adults to act like conceited little kindergarteners.

I can’t concentrate or relax in the quiet car because I’m just waiting for some poor oblivious victim to innocently answer a call, make a comment to a friend, or forget to turn the volume down on their phone.

I think people ride the quiet car not for the quiet but for the chance to rebuke the guilty who transgress the sacred decree of the car of silence. “Thou hast made a peep and thou shalt be most vigorously censured!”

I only ride the quiet car when I have no choice, when the rest of the train is full, when I find myself in not so quiet desperation for a seat.

I’d like to observe that quiet cars were probably a great idea in the 1950s or 60s. But now we have inexpensive headphones. Instead of making everyone uncomfortable you can just pop a pair of headphones on your cranky, Victorian-minded, gray-haired noggin and listen to soothing national anthems or the sounds of suburban lawns growing. With the marvelous invention of headphones you can allow the rest of us to catch up with a friend, take an important call, or just take a nap without having to fear a sudden outburst of “Sir! Sir! Miss! Miss! This is the QUIET CAR! You can’t talk here! No talking!”

By the way, I just want to point out that the quiet car is not only elitist but kind of classist and racist as well. Almost always the rule breaker is Italian or from a non-Waspy culture where talking is what you do when you are sitting next to a friend or family member. But in the quiet car the uptight, my-ancestors-are-better-than-your-ancestors people rule.

If we must have a quiet car, and it seems they are not going away, then I must insist that we have a shouting car. It’s only fair. In the shouting car people can let out all that tension built up from riding in the quiet car and even TYPE IN ALL CAPS while texting.

 

C Plus Minus

While consuming Handmade Hero and coding furiously to keep up with Casey Muratori I discovered the joy of programming in a language that I deeply understand. This is not one of those new trendy programming languages that tries to be type-safe without explicit types or functional without being confusing. And yet all the new hot/cool programming languages are based on this ur-language. Swift, TypeScript, Go, C++14, and Java 8 are all “C-like” languages, and the original “C-like” language is a lingo that we used to call C+- (C Plus Minus).

I probably like C because it was the first non-toy programming language that I used to program a real personal computer. In the late 1980s all the home computers came with BASIC (which is best SHOUTED in CAPS). But once I got a true personal computer, a Macintosh 512Ke, that could run real applications I had to buy a real programming language to write those real applications. For a couple of months that real language was Pascal… but C rapidly took over. By the time I got to Apple in the early 1990s C++ was about to push C out of the way as the hot new programmer’s tool.

We have this same problem today. There is always another more productive, safer, more readable programming language around the corner. If you code on the backend for a living you’re probably thinking about Go or Rust. If you code on the frontend you’re ditching CoffeeScript for TypeScript or just sticking with JavaScript until the next version, ECMAScript 6, shows up in your minimum target browser.

But I’ve been traveling back in time and happily coding away with access to pointers and pointer arithmetic, pound defines, and user designed types. It’s not plain vanilla C because, like Casey, I’m compiling my code with a modern C++ compiler. I’m just not using 90% of C++’s features. Back in the 1980s/90s we called this language C+-. Back then only some of the C++ standard had been implemented in our compilers. We had classes but not multiple inheritance. (Later we learned that multiple inheritance was bad, or at least poor taste, so not having access to it was ok.) We only had public and private members. (Protected members aren’t actually useful unless you’re working on a big team or writing a framework. We were writing small apps in small teams.) We had to allocate memory on the heap and dispose of it. So we allocated most of what we needed up front and sub-allocated it. We didn’t have garbage collection, we didn’t even know about garbage collection, so we couldn’t feel bad. We felt powerful.

Now that I’ve been writing in C+- for a few weeks I feel like Superman. Or maybe Batman. Your pick. I have just a few tools in my tool belt but I know how to use them. In the modern world Swift 3.0 is thinking of getting rid of the ++ operator and the for(;;){} loop. I use those language features every day, usually together: for(i = 0; i < count; i++) {}. I am told these things are ugly. They seem like familiar old friends to me!

One thing I really like is that I can access a value and increment a pointer with one pretty little expression: *pointer++. I like thinking in bytes and bits and memory addresses. And I like how fast my little programs run and how small their file sizes are.

I know I should not like all these things. Raw access to memory is dangerous. &-ing and |-ing bits is probably dangerous too. My state is not safely closured and side-effects abound. But modern C++ compilers and tools like GCC and Clang do a pretty good job of catching memory access errors these days. It was much more dangerous back in 1986 when I first started.

Maybe I’m just nostalgic. But while you are learning Swift or TypeScript to write web and mobile apps, the operating system your computer runs (Mac OS X, Windows, Linux) was written in C+-. The web browser (Safari, Firefox, or Chrome) that renders your HTML, CSS, and JS was written in C+-. That awesome AAA game and Node.js were written in C+-. (Some parts C, some parts C++, and some parts Assembly as needed.)

C+- is the Fight Club of computer languages: Nobody talks about it, it doesn’t have official status, and groups of self organizing coders beat each other up with it every day.

Binge Watching Handmade Hero


For the last several weeks I’ve been obsessed with one TV show. It’s changed my viewing habits, my buying habits, and my computing habits. Technically it’s not even a “TV show” (if your definition of that term doesn’t include content created by non-professionals that is only available for free over the Internet).

But Handmade Hero, by game tool developer Casey Muratori, has me, a more or less typical Gen-Xer, totally enthralled as only must-see TV can enthrall. I’m hooked and I simply must watch all 256+ episodes of Handmade Hero before I die (in about 1,406 Saturdays according to the How Many Saturdays app).

So first off let me explain a few things. Unless you are an aspiring retro game programmer or aging C/C++ programmer Handmade Hero will seem tedious at best and irrelevant at worst. There are much better and more modern ways to make a video game (like SpriteKit on iOS or Unity on any OS) but Casey promises to demonstrate live on Twitch.TV how to write a complete video game from scratch, without modern frameworks, that will run on almost anything with a CPU. He’s starting with Windows but promises Mac OS X, Linux, and Raspberry Pi.

This is a bold promise! When I first heard of Handmade Hero, almost two years ago, I ignored it. I didn’t know who Casey Muratori was and the Internet is littered with hundreds of these solo projects that tend to fizzle out like ignobly failed Kickstarter projects.

But a comment on Hacker News caught my eye about a month ago. Casey had delivered hundreds of hours of live coding with explanations of arcane C, Windows, and video programming techniques! It’s all archived on YouTube and he’s still streaming almost every night! Awesomesauce!

So I had to check it out. I started with Casey’s first video, Intro to C on Windows, and ate it up. I had to pound through the rest of that week’s archive. Because I have a family and a very demanding job and kids and cats I had to purchase a subscription to YouTube Red so that I could watch Casey’s videos on or offline. Google is getting 10 bucks a month off me because of Casey!

My keyboarding fingers ached to follow along coding as Casey coded. I used to be a C/C++ programmer. I used to do pointer arithmetic and #DEFINEs and even Win32 development! Could I too write a video game from scratch with no frameworks? I had to buy a Windows laptop and find out! Thus Dell got me to buy a refurbished XPS 13 because of Casey!

Even Microsoft benefited. I subscribed to Office 365 for OneDrive so I could easily backup my files and use the Office apps since I’m keeping my MacBook Pro at the office these days. I have discovered that a Windows PC does almost everything a MacBook does because of Casey!

I usually have less than an hour a day to watch TV so I’ve had to optimize my entertainment and computing environment around Handmade Hero because at this rate I will never catch up to the live stream! But I’m having a blast and learning deep insights from a journeyman coder.

What could an old school game coder teach an old battle-scarred industry vet like me? More than I could have imagined.

First of all, Casey is an opinionated software developer with a narrow focus and an idiosyncratic coding style. He is not wasting his time following the endless trends of modern coding. He is not worried about which new JavaScript dialect he is going to master this month or which new isomorphic web framework he is going to wrestle with. He codes in C with some C++ extensions, he uses Emacs as his editor, he builds with batch files, and debugs with Visual Studio. While these tools have changed over the years Casey has not. He is nothing if not focused.

Thus Casey is a master of extemporaneous coding while explaining–the kind that every software engineer fears during Google and Facebook interviews. This means Casey has his coding skills down cold. He is unflappable.

Casey doesn’t know everything and his technique for searching MSDN while writing code shows how fancy IDEs with auto-completion are actually bad for us developers. He uses the Internet (and Google search) not as a crutch to copy and paste code but as a tool to dig deep into how APIs and compilers actually work. There seems to be nothing Casey can’t code himself.

Casey makes mistakes and corrects himself. He writes // Notes and // TODOs in his code to follow up with, as if he is working with a team. Casey interacts with his audience at the end of every stream and is not shy about either dismissing their questions or embracing them. Casey is becoming a better, more knowledgeable programmer before our eyes and we’re helping him while he is helping us.

Casey is not cool or suave on camera. He swigs almond milk and walks off screen to get stuff during the stream. But nothing about Handmade Hero would be substantially improved if Casey hired a professional video production team. In point of fact, any move away from his amateur production values would be met with suspicion from his audience. Any inorganic product placement would fail. Dell, Microsoft, and Google should support him but stay the heck away lest they burst the bubble of pure peer-to-peer show-and-tell that surrounds Casey.

I have 249 videos to go (and Casey has not stopped making videos)! I still don’t know if he delivers on his promise and creates an actual video game from scratch. (Please! No spoilers!) But I already know far more than I did about real-world game development, where the gritty reality of incompatible file systems and operating platform nuances make Object Oriented Programming and interpreted bytecode luxuries a working developer can’t afford.

 

Most Improved Award for Windows 10

If there was an award for most improved in the world of tech I would award it to Windows 10. While I am a daily Mac user, I am no stranger to Windows. Actually, let me correct myself. I live inside iOS, work in Mac OS, play around on Windows, and occasionally find need of an Android device. I think that makes me a good judge of where Windows 10 sits in comparison to all the major operating systems offered today. (Linux, yes I used to be into you, but Mac OS is more than enough UNIX for me.)

I’m old enough to remember when Macs were relegated to the less serious passions, graphics and science labs, while Windows machines were the sturdy beasts that bore our burdens during work. Ironically the situation seems to be reversed. If I have a job to do, that can’t be done on a phone, I need a Mac. If I want to fool around in virtual reality or inside an MMO at 60 FPS, I need a Windows PC. Windows 10 is Microsoft’s near miss at reclaiming the dull and boring world of the workhorse personal computer.

I had reason to buy a non-gaming PC laptop last week. I’m following along with Handmade Hero and since Casey Muratori is using a Windows machine to demo how to write a game from scratch I wanted to do the same. Via Amazon I bought a decent Dell XPS 13, refurbished, at a 50% discount. It’s a lot like a MacBook Air: light, beautiful no-touch screen, and a well-constructed feel. The keyboard is a little loose compared to a MacBook. And like a MacBook Air the graphics card and CPU are underpowered, but it’s totally usable for software development and the processing of words, numbers, emails, and webpages. This blog post is being written on it.

Windows 10 is Microsoft’s response to Mac OS and iOS. And it’s pretty easy to see that Apple is watching closely what Microsoft is doing with Windows 10 and discovering new ways to improve Mac OS and iOS. However, Redmond has to do a better job of learning from Cupertino.

Windows 10 is innovative and interesting but has many odd holes, rough patches, and weird leftover bits from Windows of the past. It feels rushed and as if there is only a small band of engineers behind it. It’s a tad ugly as if the UX designers called out sick a few days before polishing the new look and feel. If I wasn’t a 30 year veteran of Windows and PCs I’d be lost and confused when it comes to navigating around and installing software. As it is, I’m “Binging” basic operations where on the Mac I’d just be able to wing it.

Let me give you a concrete example…

Windows 10 has a system-wide spell checking feature. While I was typing this blog post, in the sleek Edge web browser using the web-based WordPress text editor, I had to turn off Windows 10 spell checking. It was underlining entire paragraphs with red wavy lines! And yet I still have spell checking. So who is doing the spell checking if I turned it off? A mystery!

Another mystery is that at first I could not find the place to turn off spell checking in the Windows 10 Settings panel. I had to ask Cortana. She’s a nice lady and all, but I pride myself on being able to find things in a computer operating system. I now know that spell checking is found under Settings->Devices->Typing. What threw me was “Devices” (that makes me think of something like a printer, a separate device) and the lack of the term “keyboard” anywhere in the UX.

It’s as if the person who designed the Windows 10 Settings panel is a young AI just figuring out object from subject and parts from wholes. I keep running into little stumbles like this along the way as I use Windows. I’m sure there is a punch list at Microsoft with a thousand tiny little fixes that are not mission critical but would make a big difference in how the end-user’s experience of Windows 10 flows.

So, good job Microsoft. Better than I expected. Keep it up. I suggest hiring a really mean, obsessive, and uncompromising UX designer and putting her or him in charge of Windows 11.