JavaScript, Swift, and Kotlin Oh My!

 

Apple II User's Guide book cover

In 1981 I cracked open my first real book on computer programming: the Apple II User’s Guide by Lon Poole with Martin McNiff and Steven Cook. I still have it sitting on my bookshelf 36 years later. Before the Apple II User’s Guide I was playing around with typing in game and program code from hobbyist magazines like Compute!

But now I felt ready to write an original program. I had no education in Computer Science, and I didn’t own an Apple II computer. But armed with the information in the Apple II User’s Guide I knew I was going to create an original program. What that program would do was not as important to me as the process of actually doing it. After several decades in software engineering, I still feel the same way. I don’t care much what my programs do (now we call them apps or services). I care very much how they are built and the processes and tools by which they are built.

Two of the chapters in the Apple II User’s Guide are about the computer itself and how to use it. The other six chapters and appendices A through L are all about programming. Which, at the time, made a lot of sense. Unless you had a business of some type, the main purpose of using a general purpose personal computer in the 80s was programming it. Otherwise it was an overpriced home gaming system.

In 1981, the main, and as far as I knew only, programming language was BASIC. The idea of BASIC is summed up by expanding its acronym: Beginner’s All-purpose Symbolic Instruction Code. A simple, high-level programming language designed for teaching newbies how to code. Unfortunately for me, BASIC didn’t do well what it was designed to do.

I read the Apple II User’s Guide from cover to cover. I highlighted passages on almost every page. But I never did write that original program for the Apple II.

BASIC in the 1980s was a much simpler and less forgiving language than the programming languages of today. Some versions of BASIC only supported integers, and almost all limited variable names to only two significant characters. Lower-case letters were not supported. Features programmers take for granted today (objects, classes, protocols, constants, and named functions) didn’t exist. The core features BASIC did have were strings, arrays, conditionals, loops, simple IO, and the ability to jump to any line in the code by its number.

None of our modern tooling was available on an Apple II back then. Instead of an integrated development environment you had modes: immediate, editing, and running. BASIC programs were written with numbered lines, and you had to plan out the construction of your code so that you left enough room to add lines between groups of statements, or you were constantly renumbering your code. As the Apple II guide notes on page 51, “The simplest way to change a program line is to retype it.”

BASIC programming code

Debugging was particularly hairy. Programmers had only a handful of primitive tools. The TRACE command printed the code as it executed. The DSP command printed a particular variable every time its value changed. Whatever the MON command did, I never could figure out how to work it properly. So like most hobbyist programmers of the day I used print statements littered through my code to check on the state of variables and the order of execution of the subroutines. A simple and reliable technique that works to this day.

Like I said, I got so caught up in the complexity of programming an Apple II in BASIC that I never wrote a significant original program for that machine. (Later I would figure it all out, but on the cheaper home computers of the 80s with more advanced BASICs, like the TI-99/4A and the Commodore 64.)

Looking back on it, without modern programming languages and modern tools and most importantly without the web, YouTube, and Stack Overflow, I honestly don’t know how I learned to program anything. (But I did and where it took me is a story for another time.)

Today we have the opposite problem: hundreds of programming languages and tools to choose from and hundreds of platforms upon which to program. Apple alone has four operating systems (macOS, iOS, tvOS, and watchOS) and supports three programming languages (C/C++, Objective-C, and Swift). Google has Android, Chrome, and Google Cloud on the OS side and Java, JavaScript, Python, Go, Dart, and now Kotlin on the coding side. Microsoft, Facebook, and Amazon all have clouds, platforms, and a rich set of programming languages.

And then there are hundreds of communities centered around boutique programming languages. My favorites include Elm, Lua, and LISP. (By the way, it was LISP that truly taught me how to program. Learning LISP is the best thing you can do if you don’t have a computer science degree and you want to punch above your weight.)

In 1981 my problem was learning one language on one machine. In 2017 there are so many combinations of programming languages and platforms that it can seem like an O(n!) problem to sort through them all! Most engineers today need to learn JavaScript, HTML, CSS, PHP, and SQL to program on the web; C++, Java, C#, or Go for hardcore backend services; and either Java or Objective-C to create native mobile applications. Plus it’s really important to understand several UNIX commands, a few scripting languages like Python or Perl, and tools like Git, Xcode, Unity, Visual Studio, and Android Studio. At least I seem to need to understand the essentials of all of these in order to tackle the general sorts of programming challenges and opportunities thrown my way.

Yikes! 😣

In the last few years, the major players in the world of technology seem to be converging towards a programming language mean. While BASIC, LISP, and C++ were each very popular in their day and are very different from one another, the newer programming languages seem to be very similar.

JavaScript started this trend by adopting the features of strongly-typed, object-oriented, and functional languages, while keeping its boilerplate-free syntax. JavaScript has become the BASIC of the modern era: easy to learn and widely available. JavaScript used to be a terrible programming language. One of my favorite books, JavaScript: The Good Parts, spends most of its pages on the bad parts. But JavaScript is evolving rapidly with the ECMAScript 2016 standard, dialects like TypeScript, and platforms like Node.js.

Apple and Google seem to be noticing how powerful and yet accessible JavaScript is becoming. Instead of adopting JavaScript for their mobile platforms they are doing something almost as good: Creating and supporting JavaScript-like languages.

Apple started this trend a few years back with its surprise introduction of Swift. At first the Apple programmer community was a bit miffed. After decades of working with Objective-C and its highly idiosyncratic syntax, Apple seemed to be abandoning billions of lines of code for a pretty but slow and immature language that had just sprung into existence, unasked for and unneeded.

Except that something better than Objective-C was needed. The bar for programming in Objective-C is very high. And it’s only used in the Apple universe. So it was hard to learn to code iOS apps and hard to find programmers who are experts in iOS apps.

Most importantly Apple rapidly evolved Swift, much to the horror of many engineering managers, so that the Swift 3.0 of today is an expressive general purpose programming language and a model for where JavaScript could go.

At Google IO, just a couple of weeks ago, Google, perhaps out of Apple-envy, surprised its programmer community by announcing “first-class” support for Kotlin. Until that announcement the Android world revolved around Java and C++. While Java and C++ are more mainstream than Objective-C, they still represent a cognitive hurdle for mobile programmers and have created a shortage of Android developers.

Kotlin, one of the many interesting JVM languages, compiles to bytecode and feels much more JavaScripty in its expression. Android programmers were already using Kotlin, but with Google’s official blessing, support in Android Studio and other tools is going to make using Kotlin easier for beginners and experts alike.

So the web, Apple, and Google are converging on programming languages that are similar but not exactly the same. Where are they going?

Here are three bodies of code. Can you spot the TypeScript, Swift, and Kotlin?

A

B

C

While these are not very sophisticated lines of code, they do show how these languages are converging. (A is TypeScript. B is Swift. C is Kotlin.)

In the above example the first line declares and defines a variable and the second line prints it to the console (standard output).

The keyword let means something different in TypeScript and Swift but has the same general sense as the keyword val in Kotlin. Personally, I prefer the way Swift uses let: it declares a variable as a constant and enforces the best practice of immutability. Needless mutability is the source of so many bugs that I really appreciate Xcode yelling at me when I create a mutable variable that never changes. Unfortunately Kotlin uses val instead of let for the same concept. let in TypeScript is used to express block scoping (the variable is local to the block in which it is declared). Block scoping is mostly built into Swift and Kotlin: they don’t need a special keyword for it.
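Here’s a quick Swift sketch of that behavior (the variable names are just for illustration):

    let maxRetries = 3      // a constant: the compiler refuses any reassignment
    // maxRetries = 4       // error: cannot assign to value: 'maxRetries' is a 'let' constant

    var attempts = 0        // mutable, but if it never changes Xcode suggests using let
    attempts += 1
    print("\(attempts) of \(maxRetries) attempts used")   // prints "1 of 3 attempts used"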

Just from the example above you can already see that jumping between TypeScript (JavaScript), Swift, and Kotlin should be pretty easy for the average programmer and, more importantly, code should be pretty shareable between these three languages.

So then why are JavaScript/TypeScript, Swift, and Kotlin so similar?

Because programming as a human activity has matured. We programmers now know what we want (a short Swift sketch after the list shows a few of these in action):

  • Brevity: Don’t make me type!
  • No boilerplate: Don’t make me repeat myself!
  • Immutability by default and static typing: Help me not make stupid mistakes!
  • Declarative syntax: Let me make objects and data structures in a single line of code!
  • Multiple programming styles including object-oriented and functional: One paradigm doesn’t fit all programming problems.
  • Fast compile and execution: Time is the one resource we can’t renew so let’s not waste it!
  • The ability to share code between the frontend and the backend: Because we live in a semi-connected world!
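For what it’s worth, here is a small Swift sketch that touches a few of these wants (the data is made up for illustration):

    // Immutability and type inference: no type boilerplate, no accidental mutation
    let languages = ["TypeScript": 2012, "Swift": 2014, "Kotlin": 2011]

    // Declarative, functional style: build a new collection in a single expression
    let modern = languages.filter { $0.value >= 2012 }.map { $0.key }.sorted()

    print(modern)   // ["Swift", "TypeScript"]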

There you have it. Accidentally, in a random and evolutionary way, the world of programming is getting better and more interoperable, without anyone in charge. I love when that happens.

A Short Note on Tolkien’s On Fairy-Stories

I’ve been a fan of J.R.R. Tolkien’s Hobbits and Middle Earth since I first ran into Bilbo Baggins at a children’s library, appropriately, as a child. I remember falling in love with the tale and being somewhat disappointed that the other books of its type were not as deep or as rich as Professor Tolkien’s work. As a little kid, I could not articulate that disappointment. I only felt a general sense of something lost when the book was over, and no other book on the same bookshelf returned it to me: not The Wind in the Willows, not Charlie and the Chocolate Factory, and not even The Wizard of Oz. All great books, but even as a child I connected with some inner meaning inside The Hobbit that stood on its own and seemed as real and primary as the mundane world in which I lived.

I’ve been on a quest, as it were, to find that something, which continues to this day. It’s present in all of Tolkien’s works, from The Lord of the Rings to The Silmarillion. I’ve found it in more recent works aimed at kids, including J.K. Rowling’s Harry Potter and Pullman’s His Dark Materials. But there is much more of that something in classic science fiction. Asimov’s Foundation has it. Gene Wolfe’s The Book of the New Sun has it. Iain M. Banks’ Culture has an especially strong dose of it. The best works of Stephen King and Agatha Christie have it. So this something is not constrained to fantasy or sci-fi. Even works of non-fiction, such as Yuval Noah Harari’s Sapiens, are strongly infused with whatever this something is. Heck, even philosophers like Plato, Wittgenstein, Heidegger, and Bernard Suits talk about it.

I’m really name-checking here, but whatever this magic is, it’s really quite rare, difficult to express, and almost impossible to capture. It’s a uniquely human product and shows up unannounced and uncalled for in works of art, musical compositions, theatrical productions, motion pictures, essays, and books of all flavors and genres.

This magic is a specific something, not a general feeling or vague concept. But you feel it in your bones while comprehending it with your head. It’s not the cheap tug of pity on your heartstrings or the sleight-of-hand of a flashy conjuring trick. Artists, writers, doctors, lawyers, scientists, directors, actors, podcasters, and even computer programmers deliver it unreliably, and it somehow speaks to us universally.

Personally, I believe the modern master of this magic something is Neil Gaiman; it is present in everything he has ever written, no less and no more so than in his masterpiece American Gods. (I’m not sure yet if it’s present in the TV series.)

For me it has been impossible to define and yet instantly recognizable.

What is this mysterious, universal something that I first glimpsed as a child and have been chasing ever since as a pastime? Is it mere escapism? Is it a misguided religious impulse? Is it a sense of meaning and inner coherence that the so-called real world of random punctuated equilibrium is sorely lacking?

Imagine how utterly stoked I was to find that Tolkien, the master himself, defined this magic something simply and plainly, in an essay entitled On Fairy-Stories written in 1939. An essay I hadn’t read until just last week!

Before I go into a few key points that Tolkien makes in his obscure essay let me explain how I found it (or it found me).

I love a good podcast. And by good, I mean folksy, intimate, and long-winded. Since podcasting became a thing, thanks in large part to Sarah Koenig’s Serial, many podcasts have become indistinguishable from mainstream radio productions. But these are not good podcasts. The quality of a podcast is inversely proportional to its production values.

The great podcasts are exemplified by Philosophize This!, The History of English Podcast, and The History of Rome. Each of these is the effort of a lone amateur, a self-made wonk whom a professional media outlet would never hire, using a home-made studio, for whom money is at best a secondary concern, and whose primary concern can only be expressed by rambling on about her chosen topic out of passion and an imperative to share.

One such podcast is There And Back Again by Point North Media, which I’m pretty sure is the DBA of one guy: Alastair Stephens. There And Back Again is a deeply detailed look at Tolkien’s works, starting in chronological order, and planned to run for several years. Each podcast is a recording of a live online lecture where Stephens interacts with his fans on Twitter and YouTube while trying to get through his analysis of Tolkien’s works. Most of these podcasts are over an hour long, and Alastair often spends 30 minutes discussing a single sentence! This untidiness is the hallmark of a great podcast: it’s here, in the earnest solo confabulation of the caster, that you get insights and gems that you would otherwise gloss over or never hear in a more edited, polished production.

The brightest gem for me was Stephens’ exploration of Tolkien’s essay On Fairy-Stories. During four decades of loving Tolkien I didn’t know this essay existed. Remember, my search for the magical something that Tolkien brought to life inside me is a pastime, not my mission. I’m not actively searching. Mostly, I’m filtering through the bits of information that pass under my nose in the normal course of living life and flagging anything that might smell of the something for later analysis.

You should stop now and read On Fairy-Stories. It won’t take long… (I linked it again so you don’t have to scroll up!)

So, what did Tolkien say about fairy-stories that helped me, and hopefully you, understand what this magic something is?

First, Tolkien reveals that the dictionary is no help in defining a fairy-story. This observation, I believe, is an admission that Tolkien is inventing the idea of the fairy-story, right here, in his essay. Before Tolkien, the fairy-story was a fairy tale and didn’t consistently contain the elements that Tolkien feels are essential. This is a pretty brilliant rhetorical technique of bending outer reality to inner concept by claiming the authorities and the collective culture of humanity didn’t get it right!

Second, Tolkien argues against the idea that fairy-stories are for children and that fairies in fairy-stories are little people. Tolkien assures us that children can gain something from fairy-stories in the same way adults can: by reading them and becoming enchanted. This idea of enchantment is different from the willing suspension of disbelief. Tolkien points out that you have to believe in the primary world for a secondary world to take hold. Note: there is a subtext in the essay that the primary world is the original, and that we, the thinking creatures of the primary world, are sub-creators, creating secondary worlds. And further, that this is humanity’s role in the cosmos because we have the power to create names. Tolkien was famously a Catholic in an Anglican country, and it shows in his idea of primary and secondary worlds.

Tolkien wants his fairies, which he calls elves, to be as large as humans, because they are peers of humans. While he doesn’t point this out in the essay, humans, alone of all animals on planet Earth, have no peers. We have only ourselves to talk to. Until the day we make contact with intelligent alien life, elves are as close as we can get to another species with a second opinion on the universe, even if we have to invent them in our imaginations!

Third, Tolkien goes on at some length to discuss the origin of fairy-stories and touches on the ideas of invention, inheritance, and diffusion. His main point is that a story always has a maker. He doesn’t mean owner, or even author. He means that at some point in time, some person first told that story, which was then retold by other people, and changed in the telling, until we get to the fairy-stories we have today. The important point, for me, is the recognition that fairy-stories are created, artificial objects. The critical something that is at the heart of all true fairy-stories was originally placed there by a person. Fairy-stories are the products of people.

Fourth, Tolkien defends the fairy-story from the criticism that it is mere escapism. In a brilliant bit of reasoning Tolkien explains that those disapproving of fairy-stories are confusing them with entertainment that allows us to “desert reality”. Tolkien notes that it’s not fair to condemn the prisoner for “wanting to escape a prison”. Amen, Brother! Plato, Wittgenstein, Heidegger, and Bernard Suits all agree with Tolkien on this point!

The human condition, which we all share, is the struggle with the confinement of our culture, our class, our race, our gender, and our environment. The one problem poor and rich, liberal and conservative, all the races, and all the genders share is the fact that we are imprisoned in temporary shells of flesh and in the opinions we hold against each other (Sartre: “Hell is other people”). The magic, mysterious, hidden, indefinable something, buried deep in the heart of a fairy-story, gives us a temporary escape into enchantment and the opportunity to bring a bit of that something back from the secondary world into the primary world. We get a chance to improve the human condition!

Fifth, Tolkien observes that the world of the fairy-story is just a reflection of our primary world. A reflection where mundane meaninglessness becomes strange and meaningful. Tolkien talks about the beauty of candlelight and the ugliness of industrial-age electric street lamps. I don’t quite agree with him, as industrial-age electric street lamps are now a whole aesthetic (Steampunk) and not considered ugly. Tolkien and I generally part ways on this point, which is a persistent theme in all his books: that any technology more complex than 18th-century farming equipment is unpleasant at best and an agent of evil at worst.

But I think there is a more profound insight here: that there is a platonic beauty that can only be discovered by looking at our primary world through the lens of a secondary world. Not a distortion but a focusing on the details that we’ve been taking for granted.

Tolkien expresses this idea of platonic beauty, which to mortal eyes is both lovely and dangerous, with the concept of Mooreeffoc. This term is new to me, but the concept is not. Mooreeffoc is the word “coffee-room” written on glass as seen from the wrong side. It’s the idea that the fairy world is right in front of our noses, if only we could see it.

Sixth and finally, Tolkien invents a new term for the happy ending and explains that happy endings are a crucial element of all fairy-stories. Tolkien’s word is Eucatastrophe, which means a joyous turn of events and is easily mistaken for Deus ex machina. In the Great Fairy-Stories, those written by Tolkien, Rowling, Gaiman, Freud, and Thomas Jefferson, the happy endings are not specifically happy, and cheap plot devices are not at work to bring about a sentimental conclusion.

When Frodo, about to toss The One Ring into the fires of Mount Doom, loses his battle with the Ring and cannot bear to complete his mission, Gollum (spoiler alert) bites off Frodo’s finger and leaps, or falls, into the lava, finally destroying the Ring.

The power of that eucatastrophe is that the War of the Ring has been lost at the final moment. All the struggle has been in vain. There is truly no turning back and no way for the cavalry to arrive. Evil has clearly and completely triumphed.

But then, after defeat has been realized, an unexpected and unasked-for agent makes a supreme sacrifice and saves the day. Best of all, Gollum is acting in character; he is not an obvious agent of good or the divine, and he has no intention of saving the day! He doesn’t have a change of heart or discover the good inside him. Gollum just wants his damn ring back!

In The Hobbit, Bilbo refrains from killing Gollum. Bilbo should kill Gollum. Gollum is scary, dangerous, sick, and twisted by evil. In a 21st-century video game Gollum is exactly who it’s OK to kill. By all measures of human justice Gollum is the type of person you should kill, or at least lock up forever.

But Bilbo, this silly, almost powerless hobbit, doesn’t kill Gollum. Bilbo refrains, not because he isn’t in fear of Gollum, but because Bilbo is a hobbit, and hobbits don’t kill people. Hobbits are not heroes.

In the finale of the Lord of the Rings, this humble act of not killing, which seems foolish as Gollum is a danger to Frodo, is the act that saves the world.

And that is the something I first glimpsed as a child and now understand as an adult.

The ending of the Lord of the Rings is far from sweet. The hobbits and the world around them are saved but scarred and eventually destined to fade. However, the core values of Middle Earth remain intact.

We’re all destined to fade. We can’t protect ourselves or our children completely from evil. We live in a real, primary world, not a fairy-story or a utopia. But we can venture into secondary worlds, which in my book include political theories and scientific models as well as works of fantasy, and bring something valuable back.

In this world, every day, a Bilbo is not killing a Gollum, and years later, at the last minute, a ring of power is destroyed and the world is saved. These moments are not usually televised and don’t make for viral news headlines:

“Father refrains from slapping son, and 30 years later the son remembers this and refrains from assassinating president.”

“Memory of mother’s love prevents terrorist from blowing up building in city center 20 years later.”

“Friend randomly calls friend on a Saturday night and prevents suicide planned for that evening.”

“Guy buys homeless man a cup of coffee at Penn Station and the probability of nuclear winter replacing global warming is reduced by 1%.”

There are many names for these moments of empathy, but I didn’t have a great one until eucatastrophe: a good disaster.

Swift Programming: Filtering vs For Loops

Swift’s current version, 3.1, has come a long way from the Yet-Another-C-Based-Syntax of the 1.0 release.

One of the best features of Swift is how functional programming idioms are integrated into the core of the language. Like JavaScript, you can code in Swift in several methodologies, including procedural, declarative, object-oriented, and functional. I find it’s best to use them all simultaneously! It’s easy to become a victim of the law of diminishing returns if you try to stick to one programming idiom. Swift is a very expressive coding language, and it’s economical to use different styles for different tasks in your program.

This might be hard for non-coders to understand, but coding style is critical for creating software that functions well, because a good coding style makes the source easy to read and easy to work with. Sometimes you have to write obscure code for optimization purposes, but most of the time you should err on the side of clarity.

Apple has made a few changes to Swift that help with readability in the long term but remove traditional C-based programming language syntax that old-time developers like me have become very attached to.

The most famous example was the increment operator:
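In early Swift that looked something like this (a sketch of the old style):

    var x = 0
    x++        // post-increment: returns x, then adds 1 (removed in Swift 3)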

In modern Swift you have to write:
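Something along these lines:

    x += 1     // add-and-assign: the modern replacement for x++ (x declared as above)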

As much as I loved to type ++ to increment the value of a variable there was a big problem with x++! Most coders, including me, were using it the wrong way! The correct way for most use cases is:
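That is, the prefix form (again a sketch):

    ++x        // pre-increment: adds 1 first, then returns the new value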

Most of the time the difference in side effects between ++x and x++ was immaterial, except when it wasn’t, and then it created hard-to-track-down bugs in code that looked perfectly OK.

So now I’m used to typing += to increment values even in programming languages where ++ is legal. (Also, C++ should rebrand itself as C+=1.)

Another big change for me was giving up for-loops for functional expressions like map, reduce, and filter. As a young man when I wanted to find a particular object in an array of objects I would loop through the array and test for a key I was interested in:
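Something like this sketch, assuming a simple Object type with an id property (the names and the magic number 12345 are illustrative):

    struct Object {
        let id: Int
        let name: String
    }

    let objects = [
        Object(id: 10, name: "first"),
        Object(id: 12345, name: "the one we want"),
        Object(id: 99, name: "another")
    ]

    // Walk the whole array and remember the match
    var found: Object?
    for o in objects {
        if o.id == 12345 {
            found = o
            break
        }
    }
    print(found?.name ?? "not found")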

Nothing is wrong with this code—it works. Well, actually there is a lot wrong with it:

  • It’s not very concise
  • I should probably have used a dictionary and not an array
  • What if I accidentally try to change o or objects inside this loop?
  • If objects is a lengthy array it might take some time to get to 12345
  • What if there is more than one o with the id of 12345?
  • This for-loop works, but like x++ it can be the source of subtle, hard-to-kill bugs while looking so innocent.

But I’ve learned a new trick! In Swift I let the filter expression do all this work for me!
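Something like this one-liner, using the same hypothetical objects array as above:

    let o = objects.filter { $0.id == 12345 }.first!   // keep the matches, grab the first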

In that single line of code o will be the first object that satisfies the test id == 12345. Pretty short and sweet!

At first, I found the functional idiom of Swift to be a little weird looking. By weird I mean it looks a lot like the Perl programming language to me! But I learned to stop being too idiomatic and to allow myself to express functional syntax as needed.

For you JavaScript or C programmers out there here is a cheat sheet to understanding how this functional filtering works:

  • let means o is a constant, not a mutable variable. Functional programming prefers constants because you can’t change them accidentally!
  • The { } represents a closure that contains a function, and Swift has special syntactic sugar that allows you to omit a whole bunch of typing if the closure is the last or only parameter of the calling function. (Remember, in functional programming functions are first-class citizens and can be passed around like variables!)
  • $0 is a shortcut for the first parameter passed to your closure. So you don’t have to bother with throwaway names like temp or i, j, k, x, or y.
  • .first! is a neat way to get [0], the first element of an array. The ! means you know it can’t fail to find at least one element. (Don’t use the ! after .first unless you are 100% sure your array contains what you are looking for; a safer variant is sketched just below.)
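For example, instead of force-unwrapping with ! you can bind the optional result, again using the hypothetical objects array from above:

    if let o = objects.filter({ $0.id == 12345 }).first {
        print("Found \(o.name)")
    } else {
        print("No object with id 12345")
    }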

I’m working on a new project, a game that I hope to share with you soon. The game itself won’t be very interesting. I find that I enjoy creating games more than I enjoy playing them, so I’m not going to put too much effort into creating the next Candy Crush or Minecraft. But I will blog about it as I work through the problems I’ve set for myself.

The Rise and Fall of Autocorrect

I’ve turned off “auto correction” on my iPhone and it’s a godsend. I still get predictive suggestions and spelling correction. But I no longer have to fight with autocorrect and end up with wrong but similar words in my emails and texts.

When the iPhone first arrived eight years ago we needed autocorrect because we lost the keyboard. We were nervous about the loss of physical targets for our thumbs to hit. Many early iPhone reviewers complained about the perils of “typing on glass” and I still see email signatures asking my forgiveness for the author’s use of a phone without buttons.

After nearly a decade of glass typing my thumbs are well trained. I type almost as fast with two thumbs as I do with 10 fingers. Every once in a while I try one of these smart mini keyboard attachments and I’ve discovered I can’t type on a phone with real keys. Physical buttons sized to fit a modern smart phone form factor are just too cramped for my thumbs to fly like a virtuoso pianist.

Autocorrect has been slowing me down and embarrassing me for ages. It transforms “can’t” into “can” and non-western European names into insults. Autocorrect wants me to spell the name of my company, Viacom, in ALL-CAPS. I don’t understand that one. Maybe in the world where Apple’s autocorrect text engine gets its data, VIACOM is the correct way to type it. But not in my world.

And that is the big issue with autocorrect. We each have our own style of spelling and grammar. These stylistic variations enrage the grammar police but give our text personality and nuance. Autocorrect enforces uniformity and hurts our ability to express our ideas in an idiomatic fashion that allows us to create personal and community languages.

Humans are born as language creation machines. We develop new words that express our POV on new ideas. We repurpose old words to express new concepts while referencing tradition. Autocorrect messes with our ability to say what we mean and mean what we say.

I’ve been communicating without the mediation of autocorrect for about a week now. I’m typing at about the same speed. I’m making fewer casual mistakes and “speaking” in my true voice. I’m not fighting with an annoyingly helpful AI trying to guess at what I mean. The only downside so far is that my “I”s are no longer capitalized by magic. I had to relearn to tap the shift key.

— Typed without regrets on an iPhone with autocorrect disabled. All mistakes are my own. 

The Young President

 

I’m watching the Young Pope on HBO and I’ve been struck by the similarities to another man unexpectedly thrust into a position of power and world leadership. Yes, you got me right: our 70-year-old President Trump is acting like a young and inexperienced pope with a chip on his shoulder a mile long.

In the Young Pope, Jude Law plays Lenny, a delightfully terrifying iconoclast of the cloth. Early in the series the Cardinal Secretary of State discovers that Lenny, Pope Pius XIII, is a man of few carnal appetites. It won’t be easy to manipulate this young American with women or wine. This pope wants to exercise his power, clean up the priesthood, and bring God back into the center of the Church. Lenny isn’t warm and fuzzy. He’s not politically correct. He’s not here to entertain or comfort. This pope is on a mission.

Remind you of someone?

While not a perfect metaphor for the Trump Presidency, the Young Pope exposes the tragedy and cruelty of the reformer. Both Washington and the Vatican are riddled with special interest groups, corruption, and actors acting in their own interests and not in the interests of the people they are accountable to. Neither President Trump nor Pope Pius XIII expected to be elected, and it shows. They aren’t ready, while at the same time they aren’t willing to wait until they are ready to make important decisions. It’s as if both men fear they are imposters and need to prove their worth before they are found out.

The actions of President Trump and Pope Pius XIII have created grievous emotional suffering for the most vulnerable members of their “flocks.” President Trump seems to have missed the whole idea that America is a nation of immigrants. Lenny seems to have missed the whole idea that the Church is a refuge from the secular world and the absence of God. Both men want only the purest of the “faithful” to reside in their house. Both want outsiders to work much harder to get in.

Of course, the Young Pope is a television show. Nobody has actually suffered under Pope Pius XIII. While the Trump Presidency might seem like a reality show I don’t have to remind you that it’s all too real.

A Trump supporter, perhaps a member of the Alt-Right or just an average Conservative who wants America to be great again, might be justified in asking, “Well then, what is a reformer supposed to do? Sugar coat his speech? Drag out the process? Compromise?”

In a word, yes.

There is already enough instability and suffering in the world that both President Trump and the Young Pope would be more effective by wrapping that iron fist in a velvet glove. Change is tough. Abrupt change is a recipe for anarchy. Both President Trump and the Young Pope are headed for a spiritual and earthly crisis. For Jude Law this might mean an Emmy Award. Unfortunately for President Trump, in the real world, this kind of drama is rewarded in a way that isn’t good for any of us.

I know it’s boring and undramatic, but slow, steady, and compassionate reform is the only kind of reform that ever works. By the way, I’m not in personal agreement with any of President Trump’s executive orders or his world view. I just don’t want the world to come crashing down around our ears while President Trump figures out he’s not on a television set.

Our trade agreements, health care, and tax laws probably all need a bit of tweaking. Or a lot of tweaking. Unfortunately, it’s impossible to predict the impact of any particular change. I’m sure the Young Pope would agree that the road to hell is paved with good intentions.

I know the people who voted for Trump. In the small town in NJ where I grew up they were religious, down to earth, and only wanted a good job and a good life. They didn’t ask for change; they are tired of waiting while they watch their world fall apart. Factories are closing, jobs are being distributed around the world, and a new wave of immigrants is setting up shop in strip malls.

But America is always changing. Two hundred years ago, slavery was in full force and the industrial revolution wiped out the craftsman. One hundred years ago, World War I was starting up and electricity and the radio were uniting the world into one global community. Nothing of what we today call “globalization” and “immigration” is new. It’s just all part of a trend in how humanity is organizing itself around technological progress.

I’m sure by the end of season one Pope Pius XIII will realize that he’s only made things worse. That instead of restoring the Catholic Church to glory, his hasty and not well-thought-out executive orders will have pushed it to the edge of ruination. This is not a spoiler. I’m still in the middle of the series. But I can see where the plot is going.

I hope President Trump has HBO.

 

North Star

Successful companies usually have a secret sauce. It could be an algorithm or an insight. But whatever that secret sauce is, it is used to create or disrupt a market.

Apple created the PC market when Steve and Steve figured out that affordable pre-built personal computers would be really useful for consumers. IBM disrupted the PC market that Apple built with the insight that a standard, expandable, business-oriented PC would be especially valuable to businesses. After a while Microsoft disrupted the disrupter with the key insight that PC resources (CPU speed, RAM size, and disk space) were essentially infinite according to Moore’s Law.

Yet secret sauce alone is not enough to create or disrupt a market for very long. You might have a brilliant algorithm or insight, but if you can’t focus on it and deliver it to your audience then you’ve got nothing.

Secret sauces are common and cheap. The ability to focus and deliver is rare and expensive!

Let’s take the case of Google. Larry and Sergey started Google with the idea of Page Rank. They turned that idea into a set of algorithms and code and turned it loose on the web. Suddenly Larry and Sergey had the best search engine on the market.

But Page Rank on its own didn’t create Google. This might be hard to believe today, but when Google started it was an underdog. Google was the epitome of a scrappy startup that hardly anyone paid any attention to.

Luckily Larry and Sergey had something else: A north star.

I don’t know if they called it a “north star”. That’s what we call it now. They probably didn’t call it anything. Looking back, I think Larry and Sergey, like Steve and Steve, and all successful market creators/disrupters, had an intuitive sense of focus and delivery that was superhuman. They got everyone around them, investors, employees, and partners, to focus on search and to think hard about the best way to deliver search to the consumer. They followed their north star to the detriment of everything else, including sleep, civility, and revenue.

Obviously it paid off. Once the nascent search market was disrupted Google attained all the things they had sacrificed. They made money. They decided to be really nice. They got a good night’s sleep.

I see this pattern repeating throughout the boom and bust cycle of business. When a company is following its north star it eventually becomes successful. When a company is distracted or tries to follow too many stars it eventually fails.

When I worked at Apple in the 90s our north star was summed up in the question, “Will it sell more Macintoshes?” If you could answer “yes” then you had tacit approval to do it. Don’t ask. Just do it. HyperCard, QuickTime, TrueType, Unicode: these are all examples of technologies that “sold more Macintoshes.”

At the time I was working on ClarisWorks for Kids. It was a bit like Microsoft Office for the K-12 market. Our theory was that productivity software tools for kids would sell more Macintoshes (to parents and schools) and so I was asked to go and do it. I didn’t fill out a product plan or forecast revenue. I just convinced a group of engineers that ClarisWorks for Kids was cool and off we went. I hired as many people as I needed. I figured out features and even helped design the box art. Since I had a north star, I didn’t have to be managed. My boss was more like my personal coach. I went to him for advice and not orders.

Since I had never shipped a product before I made a few mistakes. I didn’t get fired. As long as I was following Apple’s north star everyone had trust and confidence in what I was doing. And I wasn’t special. I was one of hundreds of Apple engineering managers leading projects in partnership with hundreds of engineers all following a single north star.

ClarisWorks for Kids turned out to be a big hit. We won some awards. More importantly, we sold a lot of Macintoshes. ClarisWorks for Kids was part of an educational bundle that filled up computer classrooms across the world with PowerPC-based Power Macs.

But then we turned away from our north star.

In the late 1990s Apple’s market share continued to slip. In spite of all our focus and smart insights we were not selling enough Macintoshes. RISC chips, CD-ROMs, and built-in digital signal processors were not cutting it with the consumer. Most people bought IBM-compatible PCs that ran Windows.

Instead of doubling down on our north star or discovering a new north star, we at Apple decided to pursue many different strategies. Sometimes we would follow multiple strategies at the same time, but usually it was a new strategy every month. Some of these new “stars” included “Mac is the best PC” and “Let’s find more ways to make money from our existing users” and “Apple is really a software company!” Ouch. None of these stars became north stars. They were more like fly-by-night comets that burn out by dawn.

Without a strong north star, I could no longer manage myself. I had to be told what to do. One day I was told to “port ClarisWorks for Kids to Windows.” I asked how this project would “sell more Macintoshes.” Apparently Apple wasn’t concerned about that old idea any more, and frankly I had not been asked for an opinion.

So we gritted our teeth, cracked open the Windows 3.1 disks, and started porting. It was kind of fun and a huge technical challenge, as the Mac programming model was very different from Windows. So we dug into it. As an engineering manager there wasn’t as much for me to do, so I got into project plans and status reports. I don’t think anyone read them. At some point we were done. ClarisWorks for Kids could now run under Windows on IBM PCs.

This is the point where we were all laid off. Nothing personal. Business was bad, new management was in town (Steve was back), and Windows software was not needed. It didn’t “sell more Macintoshes” because it didn’t run on a Macintosh.

After we were gone Apple got back in the business of following its original and true north star. Mac computers became exciting again with bold design and a new UNIX-based operating system. (OK, an old UNIX-based OS, but it brought the goodness of UNIX to a mass market.)

ClarisWorks and ClarisWorks for Kids were gone but Apple replaced them with a suite of productivity tools. Pages, Keynote, and Numbers are the great-grandchildren of ClarisWorks. I don’t know if they “sell more Macintoshes” but they have some cool features. Besides, Apple’s north star now is probably “Does it sell more iPhones?” or something like that.

These days I work really hard to provide a north star to my teams and to advocate for a north star in my organization. A good north star is easy to understand and easy to remember. A great north star enables employees to manage themselves and renders budgets and project plans obsolete. An awesome north star fuels growth and turns businesses around.

 

Eternity versus Infinity

I just completed reading, at long last, Isaac Asimov’s The End of Eternity. Like many of his novels, EoE is a morality play, an explanation, a whodunit, and a bit of a prank. The hero, Andrew Harlan, is a repressed buffoon at the mercy of various sinister forces. Eventually Harlan finds his way to a truth he doesn’t want to accept. In EoE Asimov plays with time travel in terms of probabilities. This mathematical exploration of time travel resolves many of the cliché paradoxes that sci-fi usually twists itself into. Go back in time and prevent your mother from meeting your father and what you have done is not suicide. You have simply reduced the probability of your future existence.

In EoE Asimov considers two competing desires in human culture: The urge to keep things the same forever and the urge to expand and explore. Asimov distills these urges into the Eternals, who fight what they think of as dangerous change by altering time, and the Infinites, who sabotage the Eternals because they believe “Any system… which allows men to choose their own future, will end by choosing safety and mediocrity…”

In one masterful stroke Asimov explains why we haven’t invented time travel. If we did, we’d kill baby Hitler! But then we’d work on elimination of all risks! Eventually we’d trap ourselves on planet Earth and die out slowly and lonely when our single world gets hit by a comet or our Sun goes nova. In EoE, Asimov has a force of undercover Infinites working tirelessly to keep the probability of time travel to a near zero value. This way humanity continues to take risks, eventually discovers space flight, and avoids extinction by populating the galaxy.

You’re probably not going to read EoE. It’s a bit dry for the 21st century. There are no superheroes, dragons, or explicit sex. While there is a strong female character, she spends most of her time out of sight and playing dumb. EoE is a product of the 1950s. Yet for a book where a computer is called a “computaplex” and the people who use them are confusingly called “computers”, EoE’s underlying message and themes apply very closely to our current age.

In our time, we have the science and technology to move forward by leaps and bounds to an unimaginable infinite, and we’re rapidly doing so, except when we elect leaders who promise to return us to the past and follow creeds that preach intolerance of science. I’ve read blog posts and op-eds that claim we can’t roll back the future. But we seem to be working mightily to pause progress. Just like the Eternals in EoE, many of us are concerned about protecting the present from the future. Teaching Creationism alongside Evolution, legislating Uber and AirBnB out of existence, and keeping Americans in low-value manufacturing jobs are just a few examples of acting like Asimov’s Eternals and avoiding the risks of technological progress at all costs.

I get it! I know that technological advancement has many sharp edges and unexpected consequences. Improve agriculture with artificial ingredients and create an obesity epidemic. Improve communication with social media and create a fake news epidemic. People are suffering and will continue to suffer as software eats the world and robots sweep up the crumbs.

But what Asimov teaches us, in a book written more than 60 years ago, is that if we succeed in staying homogeneous-cultured, English-speaking, tradition-bound, God-fearing, binary-gendered, unvaccinated, and non-GMO, we’re just getting ready to die out. When the next dinosaur-killer comet strikes, we will be stuck in our Garden of Eden as it goes up in flames. As Asimov admits, it might take thousands of years for humanity to die out in our self-imposed dark ages, but an expiration date means oblivion regardless of how far out it is.

Asimov shows us in EoE, and in the rest of his works as well, that there is a huge payoff for the pain of innovation and progress. We get to discover. We get to explore. We get to survive.

Let’s face it. We don’t need genetic code editors and virtual reality. We don’t need algorithms and the Internet of Things. Many of us will never be comfortable with these tools and changes. Many of us long for the days when men were men, women stayed out of the way, and jobs lasted for a lifetime. This is not a new phenomenon: The urge to return to an earlier golden age has been around since Socrates complained that writing words down would destroy the art of conversation.

At the moment, it feels like the ideals of the Eternals are trumping the ideals of the Infinites. While a slim minority of entrepreneurs tries to put the infinity of space travel and the technological singularity within our reach, a majority of populist politicians are using every trick in the mass communications book to prevent the future from happening. We have our own versions of Asimov’s Eternals and Infinites today. You know their names.

Like Asimov, I worry about the far future. We’re just a couple of over-reactions to a couple of technological advances away from scheduling the next dark ages. That’s not a good idea. The last dark ages nearly wiped Europe off the face of the earth when the Black Plague hit. Humanity might not survive the next world crisis if our collective hands are too fearful of high tech to use it.

At the end of EoE Harlan figures out that, spoiler alert, taking big risks is a good idea. Harlan chooses the Infinites over the Eternals. I’d like us to consider following in Harlan’s footsteps. We can’t eliminate all technological risks! Heck, we can’t even eliminate most risks in general! But we can embrace technological progress and raise the probability of our survival as a species.

Telling Time as an Engineer

Time is the most precious resource. It’s in limited supply, once spent we can’t get it back, and you can’t trade it directly. This might sound a little radical, but most global, national, business, and personal problems seem to me to boil down to a problem of time and whose time is more important than yours.

Before we can decide how we should spend the time given us, we have to put some thought into analyzing the tasks that take time. In software development a considerable amount of thinking has been applied to just this analysis. We usually call it “process management” and “time management”. Many methodologies have been created to solve the problem of time, and yet when the rubber hits the road the management of time, which includes task prioritization and effort estimation, is full of errors and random results.

A great example is the Agile development process, which has become the standard even as it has been declared dead by many of its original creators. Why is this?

Here is a simple example…

A high-priority story is pulled from a general backlog and estimated, along with other stories, by an experienced engineering team. A product owner then weighs the cost of the stories based on the effort estimations and value to the business and feeds them into a sprint backlog. The engineering team then works on each story, in order of priority, and completes the required stories by the end of the sprint. The work completed is demoed to the stakeholders and everyone is happy, as everyone’s time has been well spent.

Well, except this happy plan almost never happens.

Something like 80% of all features and products are delivered late or not at all. And often when a feature or product is delivered, it’s buggy enough that we regret delivering it on time, if at all. I’m sure Samsung engineers are less concerned about deadlines these days and more concerned about taking the time to do their tasks with more quality. Blizzard has made a billion-dollar business of never giving dates for games and missing them when they do. Facebook and Spotify just spring new features on their users without any warning and kill bad ones before they spread beyond a small segment.

In my opinion successful tech companies don’t bother with time management and leave schedules and task estimates to unsuccessful tech companies. I’m not saying successful tech companies don’t do agile or create project plans. I’m saying these are more like historical accounts and data gathered for analysis than pseudo-predictive planning.

Why is task estimation so non-predictive?

The problem is that it’s impossible to know how long a task will take unless you have done exactly that task before. When I worked at Apple Computer (before it was just Apple) we said that in order to understand how long a project would take you had to build it and then write the schedule.

This is why an experienced engineering team is so important in effort estimation. If you get a group of engineers who are a bit long in the tooth they can work together to pool estimates on work they have performed previously.

But much of the work of an experienced engineering team is work they have never done before. Experienced engineers tend to see everything through the lens of previous experience. The result is that effort estimates are inaccurate, as they have mistaken a novel task for a nostalgic task. I can’t count the number of times I have said, “I thought the problem was X and it would take Y story points, but the problem is really Z and I’m still doing the research so your guess is as good as mine.”

The fact that for novel work your guess is as good as mine is why startups of inexperienced engineers succeed with problems that mature companies fail at. The boss says, “This problem will take too long to get to market. Let’s just not do it. It’s a waste of time.” The boss also says, “Hey brilliant engineers, you didn’t deliver this product on time! You suck! You’re fired.” Both judgements are typical of mature companies, where value has to be delivered every quarter and experimental failures damage established reputations.

In a typical tech startup, or any kind of new business, if you did the estimates you would have never started down the path. But startups don’t care! They are labors of vision and love usually staffed by people without experience who don’t know better. A good startup certainly doesn’t worry about effort estimates or punish engineers for not being able to tell time.

My advice to any engineering team that needs to worry about time is as follows:

One

You need a mix of experienced and inexperienced engineers on the team. This doesn’t mean old and young as much as people who have done it before and people who have not. Mix your insiders with your outsiders. For example, if you’re building a web app, bring in a few mobile devs to the sprint planning. And some interns. And listen to the outsiders. Engage in a real discussion.

Two

If someone in charge wants to know how long a novel task will take from just the title of the task, without any true discussion, walk away. You’re going to give them a wrong answer. By the way good estimates are rarely rewarded–they are expected! But bad estimates are almost always punished. An honest “I don’t know” is always better than “2-3 weeks” or “2-3 story points”.

Three

Remember there is no value in hitting the deadline without quality, performance, or value to the user. In fact I’m always a little suspicious of teams that never miss a deadline. Apps that crash and scope cut to the point of no visible progress are the hallmarks of teams that always hit their deadlines. I’m not saying don’t work hard or don’t try to hit your deadline; just be tough about the result at the end of the schedule: give it more time if needed!

Four

The problem with my advice is that everyone wants to know the schedule. So many other functions depend on the schedule. Releasing a product on time is critical to the business. So my final piece of advice, and you’re not going to like it, is to let the business set the deadline. Instead of wasting everyone’s time upfront with a broken planning process to arrive at a deadline, get the deadline first and work backward with effort estimates. While time is limited, the amount of time we spend on a task is flexible. We work differently if we have 3 months, 6 months, or 12 months to accomplish a task. Ask any college kid how much time they put into their studies at the beginning of the semester, when time seems unlimited, vs. the end of the semester, when time is in short supply.

Time is always in short supply.

Trolls Are USA

It’s clear that Americans are more divided than ever. Our self-segregating tendencies have been reinforced by the adoption of Internet technologies and algorithms that personalize our newsfeeds to the point that we walk side-by-side down the same streets in different mental worlds.

Before the web, before iPhone, Netflix, and Facebook, the physical limits of radio, television, and print technology meant that we had to share. We had to share the airwaves and primetime and the headlines because they were limited resources.

In the pre-Internet world print was the cheapest communication to scale and thus the most variable in quality. Anyone with a few hundred bucks could print a newsletter, but these self-published efforts were clearly inferior to the major newspapers. You could tell yellow journalism from Pulitzer winners just by the look of the typography and the feel of the paper in your hands. This was true with books and magazines as well. Quality of information was for the most part synonymous with quality of production.

To put on a radio or TV show you had to be licensed, and you needed equipment and technical skills from unionized labor. Broadcast was more resource-intensive and thus more controlled than print and thus more trusted. In 1938 the War of the Worlds radio drama fooled otherwise skeptical Americans into believing they were under attack by Martian invaders. The audience was fooled because the show was presented not as a radio play but as a series of news bulletins breaking into otherwise regularly scheduled programming.

The broadcast technologies of the pre-social-media world coerced us into consensus. We had to share them because they were mass media: one-to-many communications where the line between audience and broadcaster was clear and seldom crossed.

Then came the public Internet and the World Wide Web of decentralized distribution. Then came super computers in our pockets with fully equipped media studios in our hands. Then came user generated content, blogging, and tweeting such that there were as many authors as there were audience members. Here the troll was born.

Before the Internet the closest we got to trolling was the prank phone call. I used to get so many prank phone calls as a high schooler in the 1970s that I simply answered the phone with a prank: “FBI HQ, Agent Smith speaking, how may I direct your call?” Makes me crack up to this day!

If you want to blame some modern phenomenon for the results of the 2016 presidential election, and not the people who didn’t vote, or the flawed candidates, or the FBI shenanigans, then blame the trolls. You might think of the typical troll as a pimply-faced kid in his bedroom with the door locked and the window shades taped shut but those guys are angels compared to the real trolls: the general public. You and me.

Every time you share a link to a news article you didn’t read (which is something like 75% of the time), every time you like a post without critically thinking about it (which is almost always), and every time you rant in anger or in anxiety in your social media of choice you are the troll.

I can see that a few of my favorite journalists and Facebook friends want to blame our divided culture, the spread of misinformation, and the outcome of the election on Facebook. But that’s like blaming the laws of thermodynamics for a flood or the laws of motion for a car crash. Facebook, and social media in general, was the avenue of communication, not the cause. In technology terms, human society is a network of nodes (people), and Facebook, Google, and Twitter are applications that provide easy distribution of information from node to node. The agents that cause info to flow between the social network nodes are human beings, not algorithms.

It’s hard not to be an inadvertent troll. I don’t have the time to read and research every article that a friend has shared with me. I don’t have the expertise to fact-check and debunk claims outside of my area of expertise. Even when I do share an article about a topic I deeply understand, it’s usually to get a second opinion.

From a tech perspective, there are a few things Facebook, Google, and Twitter can do to keep us from trolling each other. Actually, Google is already doing most of these things with their Page Rank algorithms and quality scores for search results. Google even hires human beings to test and verify the results of their search results. Thus, it’s really hard for us to troll each other with phony web pages claiming to be about cats when dogs are the topic. Kudos to Google!

The following advice is for Facebook and Twitter from an admiring fan…

First, hire human editors. You’re a private company not a public utility. You can’t be neutral, you are not neutral, so stop pretending to be neutral. I don’t care which side you pick, just pick a side, hire some college educated, highly opinionated journalists, and edit our news feeds.

Second, give us a “dislike” button and along with it “true” and “false” buttons. “Like” or “retweet” are not the only legitimate responses that human beings have to news. I like the angry face and the wow face but those actions are feelings and thus difficult to interpret clearly in argumentation and discourse. Dislike, true, and false would create strong signals that could help drive me and my friends to true consensus through real conversations.

Third, give us a mix of news that you predict we would like and not like. Give us both sides or all sides. And use forensic algorithms to weed out obvious trash like fake news sites, hate groups with nice names, and teenagers pretending to be celebrities.

A/B test these three ideas, and better ones, and see what happens. My bet is social media will be a healthier place, but a smaller place, with less traffic driven by the need to abuse each other.

We’ll still try to troll the hell out of each other but it will be more time consuming. Trolling is part of human nature and so is being lazy. So just make it a little harder to troll.

Before social media our personal trolling was limited to the dinner table or the locker room. Now our trolling knows no bounds because physical limits don’t apply on the Internet. We need limits, like spending limits on credit cards, before we troll ourselves to death.

Notes on NSUserPreferences

You can set and get NSUserPreferences from any view controller and the app delegate, so they are a great way to pass data around the various parts of your iOS app.

Note: NSUserPreferences don’t cross the iOS/watchOS boundary. iOS and watchOS apps each have their own set of NSUserPreferences.

In the example below you have a class Bool property that you want to track between user sessions.
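Here is a minimal sketch of what such a class might look like. It uses Swift 2-era syntax to match the API names in the notes below; the view controller itself is illustrative:

    import UIKit

    class SettingsViewController: UIViewController {

        // Data model for a switch value, persisted between user sessions
        var showAll: Bool = true {
            didSet {
                // Save the new value under the key "savedShowAll" whenever it changes
                NSUserDefaults.standardUserDefaults().setObject(showAll, forKey: "savedShowAll")
            }
        }

        override func viewDidLoad() {
            super.viewDidLoad()
            // The stored value might not exist yet, so use the if let idiom
            if let saved = NSUserDefaults.standardUserDefaults().objectForKey("savedShowAll") as? Bool {
                showAll = saved
            }
        }
    }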

In the code above…
– The var showAll is the data model for a switch object value
– The string savedShowAll is the key for the stored value
– Use NSUserDefaults.standardUserDefaults().objectForKey() to access a stored value
– Use the if let idiom as the stored value might not exist
– Use NSUserDefaults.standardUserDefaults().setObject() to save the value
– Apparently setObject() never fails! 😀