In 1981 I cracked open my first real book on computer programming: the Apple II User’s Guide by Lon Poole with Martin McNiff and Steven Cook. I still have it sitting on my bookshelf 36 years later. Before the Apple II User’s Guide I was playing around with typing in game and program code from hobbyist magazines like Compute!
But now I felt ready to write an original program. I had no education in computer science, and I didn’t own an Apple II computer. But armed with the information in the Apple II User’s Guide, I knew I was going to create an original program. What that program would do was not as important to me as the process of actually doing it. After several decades in software engineering, I still feel the same way. I don’t care much what my programs do (now we call them apps or services). I care very much how they are built and the processes and tools by which they are built.
Two of the chapters in the Apple II User’s Guide are about the computer itself and how to use it. The other six chapters and appendices A through L are all about programming. Which, at the time, made a lot of sense. Unless you had a business of some type, the main purpose of using a general-purpose personal computer in the 80’s was programming it. Otherwise it was an overpriced home gaming system.
In 1981, the main, and as far as I knew only, programming language was BASIC. The idea of BASIC is summed up by expanding its acronym: Beginner’s All-purpose Symbolic Instruction Code. A simple, high-level programming language designed for teaching newbies how to code. Unfortunately for me, BASIC didn’t do well what it was designed to do.
I read the Apple II User’s Guide from cover to cover. I highlighted passages on almost every page. But I never did write that original program for the Apple II.
BASIC in the 1980s was a much simpler and more unforgiving language than the programming languages of today. Some versions of BASIC only supported integers, and almost all limited variable names to only two significant characters. Lower-case letters were not supported. Features programmers take for granted today (objects, classes, protocols, constants, and named functions) didn’t exist. The core features BASIC did have were strings, arrays, conditionals, loops, simple I/O, and the ability to jump to any line in the code by its number.
None of our modern tooling was available on an Apple II back then. Instead of an integrated development environment you had modes: immediate, editing, and running. BASIC programs were written with numbered lines, and you had to plan out the construction of your code so that you left enough room to add lines between groups of statements, or you were constantly renumbering your code. As the Apple II guide notes on page 51, “The simplest way to change a program line is to retype it.”
Debugging was particularly hairy. Programmers had only a handful of primitive tools. The TRACE command printed each line of code as it executed. The DSP command printed a particular variable every time its value changed. Whatever the MON command did, I never could figure out how to work it properly. So like most hobbyist programmers of the day, I used print statements littered through my code to check on the state of variables and the order of execution of the subroutines. A simple and reliable technique that works to this day.
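That print-statement technique carries over directly to a modern language. Here’s a minimal TypeScript sketch (the function and its logging are my own invention, just to show the idea): the logs mark the state of a variable and the order of execution, exactly as sprinkled PRINT statements did in a BASIC program.

```typescript
// Hypothetical subroutine instrumented with print-statement debugging.
// Each console.log marks where we are and what the state is, the same
// way PRINT statements did in 1980s BASIC.
function computeTotal(prices: number[]): number {
  console.log("computeTotal: entered with", prices);
  let total = 0;
  for (const price of prices) {
    total += price;
    console.log("computeTotal: running total =", total);
  }
  console.log("computeTotal: returning", total);
  return total;
}

computeTotal([1, 2, 3]);
```

Crude, but forty years on it is still often the fastest way to see what a program is actually doing.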
Like I said, I got so caught up in the complexity of programming an Apple II in BASIC that I never wrote a significant original program for that machine. (Later I would figure it all out, but on the cheaper home computers of the 80’s with more advanced BASICs, like the TI-99/4A and the Commodore 64.)
Looking back on it, without modern programming languages and modern tools and most importantly without the web, YouTube, and Stack Overflow, I honestly don’t know how I learned to program anything. (But I did and where it took me is a story for another time.)
And then there are hundreds of communities centered around boutique programming languages. My favorites include Elm, Lua, and LISP. (By the way, it was LISP that truly taught me how to program. Learning LISP is the best thing you can do if you don’t have a computer science degree and you want to punch above your weight.)
In the last few years, the major players in the world of technology seem to be converging towards a programming language mean. While BASIC, LISP, and C++ were once very popular and are very different, the newer programming languages seem to be very similar.
Apple started this trend a few years back with its surprise introduction of Swift. At first the Apple programmer community was a bit miffed. After decades of working with Objective-C and its highly idiosyncratic syntax, Apple seemed to be abandoning billions of lines of code for a pretty but slow and immature language that had just sprung into existence, unasked for and unneeded.
Except that something better than Objective-C was needed. The bar for programming in Objective-C is very high. And it’s only used in the Apple universe. So it was hard to learn to code iOS apps and hard to find programmers who were experts in iOS apps.
At Google I/O, just a couple of weeks ago, Google, perhaps out of Apple envy, surprised its programmer community by announcing “first-class” support for Kotlin. Until that announcement the Android world revolved around Java and C++. While Java and C++ are more mainstream than Objective-C, they still represent a cognitive hurdle for mobile programmers and have created a shortage of Android developers.
So the web, Apple, and Google are converging on programming languages that are similar but not exactly the same. Where are they going?
Here are three bodies of code. Can you spot the TypeScript, Swift, and Kotlin?
A)
let greeting = "Hello World";
console.log(greeting);

B)
let greeting = "Hello World"
print(greeting)

C)
val greeting = "Hello World"
println(greeting)
While these are not very sophisticated lines of code, they do show how these languages are converging. (A is TypeScript. B is Swift. C is Kotlin.)
In each example above, the first line declares and defines a variable and the second line prints it to the console (standard output).
The keyword let means something different in TypeScript than it does in Swift, and Swift’s let has the same general sense as the keyword val in Kotlin. Personally, I prefer the way Swift uses let: it declares a variable as a constant and enforces the best practice of immutability. Needless mutability is the source of so many bugs that I really appreciate Xcode yelling at me when I create a mutable variable that never changes. Unfortunately, Kotlin uses val instead of let for the same concept. Let in TypeScript is used to express block scoping (meaning the variable is local to the block in which it is declared). Block scoping is mostly built into Swift and Kotlin: they don’t need a special keyword for it.
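A few lines of TypeScript make the distinction concrete: const gives you the immutability that Swift’s let and Kotlin’s val provide, while let gives you block scoping.

```typescript
// `const` here plays the role of Swift's `let` / Kotlin's `val`:
// reassigning it is a compile-time error.
const greeting = "Hello World";
// greeting = "Goodbye";  // error: cannot assign to a constant

if (greeting.length > 0) {
  let scoped = "only visible inside this block"; // `let` expresses block scope
  console.log(scoped);
}
// Referencing `scoped` out here would be a compile-time error.
console.log(greeting);
```

So the three languages spell it differently, but they all give you a one-keyword way to say “this value is local” or “this value never changes.”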
Why are these languages converging? Because programming as a human activity has matured. We programmers now know what we want:
- Brevity: Don’t make me type!
- No boilerplate: Don’t make me repeat myself!
- Immutability by default and static typing: Help me not make stupid mistakes!
- Declarative syntax: Let me make objects and data structures in a single line of code!
- Multiple programming styles including object-oriented and functional: One paradigm doesn’t fit all programming problems.
- Fast compile and execution: Time is the one resource we can’t renew so let’s not waste it!
- The ability to share code between the frontend and the backend: Because we live in a semi-connected world!
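Measure a few lines of TypeScript against that wish list (the names here are invented purely for illustration) and most of the boxes get ticked: inferred static types, immutability by default, declarative object and array literals, and a functional style sitting alongside the object-oriented one.

```typescript
// Brevity + static typing: the types are inferred, mistakes caught at compile time.
const basePrice = 10; // immutability by default: reach for `const` first

// Declarative syntax: an object and a data structure, one line each.
const product = { name: "widget", price: basePrice };
const quantities = [1, 2, 3];

// Functional style alongside the object-oriented one.
const totals = quantities.map(q => q * product.price);

console.log(totals); // [10, 20, 30]
```

Swift and Kotlin versions of the same few lines would look strikingly similar, which is rather the point.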
There you have it. Accidentally, in a random and evolutionary way, the world of programming is getting better and more interoperable, without anyone in charge. I love when that happens.