JavaScript, Swift, and Kotlin Oh My!

 

Apple II User's Guide book cover

In 1981 I cracked open my first real book on computer programming: the Apple II User’s Guide by Lon Poole with Martin McNiff and Steven Cook. I still have it sitting on my bookshelf 36 years later. Before the Apple II User’s Guide I was playing around with typing in game and program code from hobbyist magazines like Compute!

But now I felt ready to write an original program. I had no education in Computer Science, and I didn’t own an Apple II computer. But armed with the information in the Apple II User’s Guide I knew I was going to create an original program. What that program would do was not as important to me as the process of actually doing it. After several decades in software engineering, I still feel the same way. I don’t care much what my programs do (now we call them apps or services). I care very much how they are built and about the processes and tools by which they are built.

Two of the chapters in the Apple II User’s Guide are about the computer itself and how to use it. The other six chapters and appendices A through L are all about programming. Which, at the time, made a lot of sense. Unless you had a business of some type, the main purpose of using a general purpose personal computer in the 80’s was programming it. Otherwise it was an overpriced home gaming system.

In 1981, the main, and as far as I knew only, programming language was BASIC. The idea of BASIC is summed up by expanding its acronym: Beginner’s All-purpose Symbolic Instruction Code. A simple, high-level programming language designed for teaching newbies how to code. Unfortunately for me, BASIC didn’t do what it was designed to do very well.

I read the Apple II User’s Guide from cover to cover. I highlighted passages on almost every page. But I never did write that original program for the Apple II.

BASIC in the 1980s was a much simpler and more unforgiving language than the programming languages of today. Some versions of BASIC only supported integers and almost all limited variable names to only two significant characters. Lower-case letters were not supported. Features programmers take for granted today (objects, classes, protocols, constants, and named functions) didn’t exist. The core features BASIC did have were strings, arrays, conditionals, loops, simple IO, and the ability to jump to any line in the code by its number.

None of our modern tooling was available on an Apple II back then. Instead of an integrated development environment you had modes: immediate, editing, and running. BASIC programs were written with numbered lines, and you had to plan out the construction of your code so that you left enough room to add lines between groups of statements, or you were constantly renumbering your code. As the Apple II guide notes on page 51, “The simplest way to change a program line is to retype it.”

BASIC programming code

Debugging was particularly hairy. Programmers had only a handful of primitive tools. The TRACE command printed the code as it executed. The DSP command printed a particular variable every time its value changed. Whatever the MON command did, I never could figure out how to work it properly. So like most hobbyist programmers of the day I used print statements littered through my code to check on the state of variables and the order of execution of the subroutines. A simple and reliable technique that works to this day.

Like I said, I got so caught up in the complexity of programming an Apple II in BASIC that I never wrote a significant original program for that machine. (Later I would figure it all out, but for the cheaper home computers of the 80’s with more advanced BASICs, like the TI-99/4A and the Commodore 64.)

Looking back on it, without modern programming languages and modern tools and most importantly without the web, YouTube, and Stack Overflow, I honestly don’t know how I learned to program anything. (But I did and where it took me is a story for another time.)

Today we have the opposite problem: hundreds of programming languages and tools to choose from and hundreds of platforms upon which to program. Apple alone has four operating systems (macOS, iOS, tvOS, and watchOS) and supports three programming languages (C/C++, Objective-C, and Swift). Google has Android, Chrome, and Google Cloud on the OS side and Java, JavaScript, Python, Go, Dart, and now Kotlin on the coding side. Microsoft, Facebook, and Amazon all have clouds, platforms, and a rich set of programming languages.

And then there are hundreds of communities centered around boutique programming languages. My favorites include Elm, Lua, and LISP. (By the way, it was LISP that truly taught me how to program. Learning LISP is the best thing you can do if you don’t have a computer science degree and you want to punch above your weight.)

In 1981 my problem was learning one language on one machine. In 2017 there are so many combinations of programming languages and platforms that it can seem like an O(n!) problem to sort through them all! Most engineers today need to learn JavaScript, HTML, CSS, PHP, and SQL to program on the web; C++, Java, C#, or Go for hardcore backend services; and either Java or Objective-C to create native mobile applications. Plus it’s really important to understand several UNIX commands, a few scripting languages like Python or Perl, and tools like Git, Xcode, Unity, Visual Studio, and Android Studio. At least I seem to need to understand the essentials of all of these in order to tackle the general sorts of programming challenges and opportunities thrown my way.

Yikes! 😣

In the last few years, the major players in the world of technology seem to be converging towards a programming language mean. While BASIC, LISP, and C++ were once very popular and are very different, the newer programming languages seem to be very similar.

JavaScript started this trend by adopting the features of strongly-typed, object-oriented, and functional languages, while keeping its boilerplate-free syntax. JavaScript has become the BASIC of the modern era: easy to learn and widely available. JavaScript used to be a terrible programming language. One of my favorite books, JavaScript: The Good Parts, spends most of its pages on the bad parts. But JavaScript is evolving rapidly with the ECMAScript 2016 standard, dialects like TypeScript, and platforms like Node.js.

Apple and Google seem to be noticing how powerful and yet accessible JavaScript is becoming. Instead of adopting JavaScript for their mobile platforms they are doing something almost as good: creating and supporting JavaScript-like languages.

Apple started this trend a few years back with its surprise introduction of Swift. At first the Apple programmer community was a bit miffed. After decades of working with Objective-C and its highly idiosyncratic syntax, Apple seemed to be abandoning billions of lines of code for a pretty but slow and immature language that had just sprung into existence, unasked for and unneeded.

Except that something better than Objective-C was needed. The bar for programming in Objective-C is very high. And it’s only used in the Apple universe. So it was hard to learn to code iOS apps and hard to find programmers who were experts in iOS apps.

Most importantly, Apple rapidly evolved Swift, much to the horror of many engineering managers, so that the Swift 3.0 of today is an expressive general-purpose programming language and a model for where JavaScript could go.

At Google I/O, just a couple of weeks ago, Google, perhaps out of Apple-envy, surprised its programmer community by announcing “first-class” support for Kotlin. Until that announcement the Android world revolved around Java and C++. While Java and C++ are more mainstream than Objective-C, they still represent a cognitive hurdle for mobile programmers and created a shortage of Android developers.

Kotlin, one of the many interesting JVM languages, compiles to bytecode and feels much more JavaScripty in its expression. Android programmers were already using Kotlin, but with Google’s official blessing, support in Android Studio and other tools is going to make using Kotlin easier for beginners and experts alike.

So the web, Apple, and Google are converging on programming languages that are similar but not exactly the same. Where are they going?

Here are three bodies of code. Can you spot the TypeScript, Swift, and Kotlin?
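(In each snippet the variable name and message are arbitrary; what counts is the shape: declare a value, then print it.)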

A
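    let greeting = "Hello, world!";
    console.log(greeting);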

B
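    let greeting = "Hello, world!"
    print(greeting)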

C
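    val greeting = "Hello, world!"
    println(greeting)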

While these are not very sophisticated lines of code, they do show how these languages are converging. (A is TypeScript. B is Swift. C is Kotlin.)

In the above example the first line declares and defines a variable and the second line prints it to the console (standard output).

The keyword let means something different in TypeScript and Swift but has the same general sense as the keyword val in Kotlin. Personally, I prefer the way Swift uses let: it declares a variable as a constant and enforces the best practice of immutability. Needless mutability is the source of so many bugs that I really appreciate Xcode yelling at me when I create a mutable variable that never changes. Unfortunately Kotlin uses val instead of let for the same concept. Let in TypeScript is used to express block-scoping (meaning the variable is local to the block where it is declared). Block-scoping is mostly built into Swift and Kotlin: they don’t need a special keyword for it.
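A quick Swift illustration (the names are made up):

    let limit = 10 // a constant: reassigning limit is a compile-time error
    var count = 0  // a variable: Xcode warns if it never changes
    count += 1     // fine, because count was declared with var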

Just from the example above you can already see that jumping between TypeScript (JavaScript), Swift, and Kotlin should be pretty easy for the average programmer and, more importantly, code should be pretty sharable between these three languages.

So then why are JavaScript/TypeScript, Swift, and Kotlin so similar?

Because programming as a human activity has matured. We programmers now know what we want:

  • Brevity: Don’t make me type!
  • No boilerplate: Don’t make me repeat myself!
  • Immutability by default and static typing: Help me not make stupid mistakes!
  • Declarative syntax: Let me make objects and data structures in a single line of code!
  • Multiple programming styles including object-oriented and functional: One paradigm doesn’t fit all programming problems.
  • Fast compile and execution: Time is the one resource we can’t renew so let’s not waste it!
  • The ability to share code between the frontend and the backend: Because we live in a semi-connected world!
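A few lines of Swift (with made-up data) tick off several of these at once:

    let scores = [90, 85, 70, 100]                    // brief, immutable, declarative
    let passing = scores.filter { $0 >= 80 }.sorted() // functional style, no boilerplate
    print(passing)                                    // [85, 90, 100]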

There you have it. Accidentally, in a random and evolutionary way, the world of programming is getting better and more interoperable, without anyone in charge. I love it when that happens.

Swift Programming: Filtering vs For Loops

The current version of Swift, 3.1, has come a long way from the Yet-Another-C-Based-Syntax of the 1.0 version.

One of the best features of Swift is how functional programming idioms are integrated into the core of the language. Like JavaScript, you can code in Swift in several methodologies, including procedural, declarative, object-oriented, and functional. I find it’s best to use them all simultaneously! It’s easy to become a victim of the law of diminishing returns if you try to stick to one programming idiom. Swift is a very expressive coding language and it’s economical to use different styles for different tasks in your program.

This might be hard for non-coders to understand, but coding style is critical for creating software that functions well, because a good coding style makes the source easy to read and easy to work with. Sometimes you have to write obscure code for optimization purposes, but most of the time you should err on the side of clarity.

Apple has made a few changes to Swift that help with readability in the long term but remove traditional C-based programming language syntax that old-time developers like me have become very attached to.

The most famous example was the increment operator:
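    x++ // post-increment: the classic C idiom, removed in Swift 3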

In modern Swift you have to write:
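    x += 1 // add one to x: no pre/post ambiguity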

As much as I loved to type ++ to increment the value of a variable, there was a big problem with x++! Most coders, including me, were using it the wrong way! The correct way for most use cases is:
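    ++x // pre-increment: bump x first, then use the new value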

Most of the time the difference in side effects between ++x and x++ was immaterial, except when it wasn’t, and then it created hard-to-track-down bugs in code that looked perfectly OK.

So now I’m used to typing += to increment values even in programming languages where ++ is legal. (Also, C++ should rebrand itself as C+=1.)

Another big change for me was giving up for-loops for functional expressions like map, reduce, and filter. As a young man, when I wanted to find a particular object in an array of objects I would loop through the array and test for a key I was interested in:
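    // assuming objects is an array of some type with an integer id property
    var found: GameObject?
    for o in objects {
        if o.id == 12345 {
            found = o
        }
    }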

Nothing is wrong with this code: it works. Well, actually, there is a lot wrong with it:

  • It’s not very concise
  • I should probably have used a dictionary and not an array
  • What if I accidentally try to change o or objects inside this loop?
  • If objects is a lengthy array it might take some time to get to 12345
  • What if there is more than one o with the id of 12345?
  • This for-loop works, but like x++ it can be the source of subtle, hard-to-kill bugs while looking perfectly innocent.

But I’ve learned a new trick! In Swift I let the filter expression do all this work for me!
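    let o = objects.filter { $0.id == 12345 }.first!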

In that single line of code o will be the first object that satisfies the test id == 12345. Pretty short and sweet!

At first, I found the functional idiom of Swift to be a little weird looking. By weird I mean it looks a lot like the Perl programming language to me! But I learned to stop being too idiomatic and to allow myself to express functional syntax as needed.

For you JavaScript or C programmers out there here is a cheat sheet to understanding how this functional filtering works:

  • let means o is a constant, not a mutable variable. Functional programming prefers constants because you can’t change them accidentally!
  • The { } represents a closure that contains a function, and Swift has special syntactic sugar that allows you to omit a whole bunch of typing if the closure is the last or only parameter of the calling function. (Remember, in functional programming functions are first-class citizens and can be passed around like variables!)
  • $0 is a shortcut for the first parameter passed to your closure. So you don’t have to bother with throwaway names like temp or i, j, k, x, or y.
  • .first! is a neat way to get [0], the first element of an array. The ! means you know it can’t fail to find at least one element. (Don’t use the ! after .first unless you are 100% sure your array contains what you are looking for!)
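And if you’re not 100% sure, a safer spelling trades the ! for an if let:

    // no crash if nothing matches: the body simply doesn’t run
    if let o = objects.filter({ $0.id == 12345 }).first {
        print(o)
    }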

I’m working on a new project, a game that I hope to share with you soon. The game itself won’t be very interesting. I find that I enjoy creating games more than I enjoy playing them, so I’m not going to put too much effort into creating the next Candy Crush or Minecraft. But I will blog about it as I work through the problems I’ve set for myself.

Notes on NSUserPreferences

You can set and get NSUserDefaults values from any view controller and the app delegate, so they are a great way to pass data around the various parts of your iOS app.

Note: NSUserDefaults don’t cross the iOS/watchOS boundary. iOS and watchOS apps each have their own set of user defaults.

In the example below you have a Bool property on a class that you want to track between user sessions.
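Something like this (the view controller wrapper and method name are mine; the property and key come from the notes below):

    import UIKit

    class SettingsViewController: UIViewController {
        // the data model for a switch object value
        var showAll = false

        override func viewDidLoad() {
            super.viewDidLoad()
            // if let: the stored value might not exist yet
            if let saved = NSUserDefaults.standardUserDefaults().objectForKey("savedShowAll") as? Bool {
                showAll = saved
            }
        }

        // call whenever the switch flips
        func saveShowAll() {
            // apparently this never fails!
            NSUserDefaults.standardUserDefaults().setObject(showAll, forKey: "savedShowAll")
        }
    }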

In the code above…
– The var showAll is the data model for a switch object’s value
– The string savedShowAll is the key for the stored value
– Use NSUserDefaults.standardUserDefaults().objectForKey() to access a stored value
– Use the if let idiom because the stored value might not exist
– Use NSUserDefaults.standardUserDefaults().setObject() to save the value
– Apparently setObject() never fails! 😀

On the Naming of Functions

A thoughtful coder once said that “it’s more important to have well organized code than any code at all.” Actually several leading coders have said this. So I’ll append my name to the end of that long linked list.

I’m trying to develop my own system for naming functions such that it’s relatively obvious what those functions do in a general sense. Apple, Google, Microsoft, and more all have conventions and rules for naming functions. Apple’s conventions are the ones I know best. For some reason Apple finds the word “get” unpleasing while “set” is unavoidable. So you’ll never see getTitle() as an Apple function name, but you will see setTitle(). This feels a little odd to me, as title() could be used to set or get a title, but getTitle() clearly does one job only. I know that title() without an argument can’t set anything, but I’m OK with the “get” all the same.

So far I’m testing out the following function naming conventions:

  • calcNoun(): dynamically calculates a noun based on the current state of internal properties
  • cleanNoun(): returns a junk-free normalized version of a noun
  • clearNoun(): removes any data from a noun and returns it to its original state
  • createNoun(): statically synthesizes a noun from nothing
  • updateNoun(): updates the data that a noun contains based on the current state of internal properties
  • getNoun(): dynamically gets a noun from an external source like a web server
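Sketched in Swift (the Scoreboard type and its data are made up), the conventions look like this:

    struct Scoreboard {
        var scores = [12, 7, 19]

        // calcNoun(): dynamically calculates a noun from internal state
        func calcTotal() -> Int {
            return scores.reduce(0) { $0 + $1 }
        }

        // cleanNoun(): returns a junk-free, normalized version of a noun
        func cleanScores() -> [Int] {
            return scores.filter { $0 >= 0 }
        }

        // clearNoun(): returns a noun to its original, empty state
        mutating func clearScores() {
            scores = []
        }

        // updateNoun(): refreshes the data a noun contains
        mutating func updateScores(with newScore: Int) {
            scores.append(newScore)
        }
    }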

As you can see I like verbs in front of my nouns. In my little world functions are actions while properties are nouns.

calcNoun(), createNoun(), and getNoun() are all means of generating an object, each with a semantic signal about the process of generation.

cleanNoun() returns a scrubbed version of an object as a value. This is really best for Strings and Numbers, which tend to accumulate whitespace and other gunk from the Internet and user input.

clearNoun() and updateNoun() are both means for populating the data that an object contains, and they signal the end state of the updating process. (Maybe I should have one update function and pass in “clear” data, but many times clearing is substantially different from updating.)

I hope this helps my code stay organized without wasting my time trying to map the purpose of a function to my verb-noun conventions!

Code Markup in Xcode


I’m working on a fairly large Swift project. Actually no, that’s not quite true. I’m working on a Swift project with a ViewController file that is getting disorganized and out of control. If this keeps up I might have a large project on my hands but right now it’s just a single file that is getting larger than I would like.

Apple provides some quick and dirty tools that make it easy to navigate a single file with specially formatted comments in your code. This functionality doesn’t provide automated documentation like HeaderDoc. And that’s fine with me. I like how HeaderDoc has become a mash-up of Markdown and JavaDoc. My code is just not stable enough for documenting yet.

Happily Xcode’s built-in special comment parser is enough in the early stages of development to help me navigate a large file and remember where the bodies are buried.

Xcode supports the following out of the box:

  • MARK: (your text here)
  • MARK: - (section divider)
  • ???: Question
  • !!!: Warning
  • TODO: Task
  • FIXME: Bug
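Here’s how they look in a Swift file (the surrounding code is just filler):

    import UIKit

    class GameViewController: UIViewController {

        // MARK: - View Lifecycle

        // TODO: restore saved game state here
        // FIXME: score label clips in landscape
        override func viewDidLoad() {
            super.viewDidLoad()
            // ???: is this workaround still needed
            view.layoutIfNeeded()
        }
    }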

Xcode’s special comments mark up the function navigation pop-up menu so that you can find your questions, warnings, tasks, and bugs in your code without overtaxing the private neural network in your skull. Unfortunately you can’t add new special comments, and they don’t show up in the Symbol Navigator.

(Using the MARK: comment you can simulate adding your own special comments. MARK: doesn’t add the word MARK: in front of navigation items in the way that the other special comments do (TODO, FIXME, etc.). So you can use MARK: NOTE to navigate to notes in your Swift code if that makes you happy.)

I use the following additional special comments to keep my code organized and consistent. (Xcode will just ignore them unless I prefix each with MARK:)

  • NOTE: (when the function name is not enough)
  • HINT: (a non-obvious reminder about a bit of code)
  • DBUG: (end of line comment marking code that probably should be removed eventually)
  • DEMO: (example usage)
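In practice my source ends up with lines like these (the notes themselves are just examples):

    // MARK: NOTE: the game model owns the timer, not the view controller
    // MARK: HINT: scores stay sorted because updates insert in order
    let frameRate = 60 // MARK: DBUG: hardcoded, remove eventually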

It would be nice if Apple allowed us to personalize code markup in Xcode. But only after search and ranking in the App Store are fixed and a thousand other higher priorities are done!

C Plus Minus

While consuming Handmade Hero and coding furiously to keep up with Casey Muratori, I discovered the joy of programming in a language that I deeply understand. This is not one of those new trendy programming languages that tries to be type-safe without explicit types or functional without being confusing. And yet all the new hot/cool programming languages are based on this ur-language. Swift, TypeScript, Go, C++14, and Java 8 are all “C-like” languages, and the original “C-like” language is a lingo that we used to call C+- (C Plus Minus).

I probably like C because it was the first non-toy programming language that I used to program a real personal computer. In the late 1980s all the home computers came with BASIC (which is best SHOUTED in CAPS). But once I got a true personal computer, a Macintosh 512Ke, that could run real applications, I had to buy a real programming language to write those real applications. For a couple of months that real language was Pascal… but C rapidly took over. By the time I got to Apple in the early 1990s, C++ was about to push C out of the way as the hot new programmer’s tool.

We have this same problem today. There is always another more productive, safer, more readable programming language around the corner. If you code on the backend for a living you’re probably thinking about Go or Rust. If you code on the frontend you’re ditching CoffeeScript for TypeScript or just sticking with JavaScript until the next version, ECMAScript 6, shows up in your minimum target browser.

But I’ve been traveling back in time and happily coding away with access to pointers and pointer arithmetic, pound defines, and user-designed types. It’s not plain vanilla C, because like Casey, I’m compiling my code with a modern C++ compiler. I’m just not using 90% of C++’s features. Back in the 1980s/90s we called this language C+-. Back then only some of the C++ standard had been implemented in our compilers. We had classes but not multiple inheritance. (Later we learned that multiple inheritance was bad, or at least poor taste, so not having access to it was OK.) We only had public and private members. (Protected members aren’t actually useful unless you’re working on a big team or writing a framework. We were writing small apps in small teams.) We had to allocate memory on the heap and dispose of it. So we allocated most of what we needed up front and sub-allocated it. We didn’t have garbage collection, we didn’t even know about garbage collection, so we couldn’t feel bad. We felt powerful.

Now that I’ve been writing in C+- for a few weeks I feel like Superman. Or maybe Batman. Your pick. I have just a few tools in my tool belt but I know how to use them. In the modern world Swift 3.0 is thinking of getting rid of the ++ operator and the for(;;){} loop. I use those language features every day, usually together: for(i = 0; i < count; i++) {}. I am told these things are ugly. They seem like familiar old friends to me!

One thing I really like is that I can access a value and increment a pointer with one pretty little expression: *pointer++. I like thinking in bytes and bits and memory addresses. And I like how fast my little programs run and how small their file sizes are.
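Here’s a little sketch (mine, not from any real program) of the kind of C+- I mean:

    #include <stdio.h>

    #define BUFFER_SIZE 8

    int main(void) {
        unsigned char buffer[BUFFER_SIZE] = {1, 2, 3, 4, 5, 6, 7, 8};
        unsigned char *p = buffer;
        int sum = 0;

        // access a value and increment the pointer in one expression
        while (p < buffer + BUFFER_SIZE) {
            sum += *p++;
        }

        // &-ing and |-ing bits: pack two nibbles into one byte
        unsigned char packed = (unsigned char)((3 & 0x0F) | ((5 & 0x0F) << 4));

        printf("sum = %d, packed = 0x%02X\n", sum, packed);
        return 0;
    }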

I know I should not like all these things. Raw access to memory is dangerous. &-ing and |-ing bits is probably dangerous too. My state is not safely closured and side effects abound. But modern C++ compilers and tools like GCC and Clang do a pretty good job of catching memory access errors these days. It was much more dangerous back in 1986 when I first started.

Maybe I’m just nostalgic. But while you are learning Swift or TypeScript to write web and mobile apps, the operating system your computer runs (Mac OS X, Windows, Linux) was written in C+-. The web browser (Safari, Firefox, or Chrome) that renders your HTML, CSS, and JS was written in C+-. That awesome AAA game and Node.js were written in C+-. (Some parts C, some parts C++, and some parts Assembly as needed.)

C+- is the Fight Club of computer languages: nobody talks about it, it doesn’t have official status, and groups of self-organizing coders beat each other up with it every day.

Binge Watching Handmade Hero


For the last several weeks I’ve been obsessed with one TV show. It’s changed my viewing habits, my buying habits, and my computing habits. Technically it’s not even a “TV show” (if your definition of that term doesn’t include content created by non-professionals that is only available for free over the Internet).

But for me, a more or less typical Gen-Xer, Handmade Hero by game tool developer Casey Muratori has me totally enthralled as only must-see TV can enthrall. I’m hooked, and I simply must watch all 256+ episodes of Handmade Hero before I die (in about 1,406 Saturdays according to the How Many Saturdays app).

So first off let me explain a few things. Unless you are an aspiring retro game programmer or aging C/C++ programmer Handmade Hero will seem tedious at best and irrelevant at worst. There are much better and more modern ways to make a video game (like SpriteKit on iOS or Unity on any OS) but Casey promises to demonstrate live on Twitch.TV how to write a complete video game from scratch, without modern frameworks, that will run on almost anything with a CPU. He’s starting with Windows but promises Mac OS X, Linux, and Raspberry Pi.

This is a bold promise! When I first heard of Handmade Hero, almost 2 years ago, I ignored it. I didn’t know who Casey Muratori was, and the Internet is littered with hundreds of these solo projects that tend to fizzle out like ignobly failed Kickstarter projects.

But a comment on Hacker News caught my eye about a month ago. Casey had delivered hundreds of hours of live coding with explanations of arcane C, Windows, and video programming techniques! It’s all archived on YouTube and he’s still streaming almost every night! Awesomesauce!

So I had to check it out. I started with Casey’s first video, Intro to C on Windows, and ate it up. I had to pound through the rest of that week’s archive. Because I have a family and a very demanding job and kids and cats, I had to purchase a subscription to YouTube Red so that I could watch Casey’s videos on or offline. Google is getting 10 bucks a month out of me because of Casey!

My keyboarding fingers ached to follow along coding as Casey coded. I used to be a C/C++ programmer. I used to do pointer arithmetic and #DEFINEs and even Win32 development! Could I too write a video game from scratch with no frameworks? I had to buy a Windows laptop and find out! Thus Dell got me to buy a refurbished XPS 13 because of Casey!

Even Microsoft benefited. I subscribed to Office 365 for OneDrive so I could easily back up my files and use the Office apps since I’m keeping my MacBook Pro at the office these days. I have discovered that a Windows PC does almost everything a MacBook does because of Casey!

I usually have less than an hour a day to watch TV, so I’ve had to optimize my entertainment and computing environment around Handmade Hero because at this rate I will never catch up to the live stream! But I’m having a blast and learning deep insights from a journeyman coder.

What could an old-school game coder teach an old battle-scarred industry vet like me? More than I could have imagined.

First of all, Casey is an opinionated software developer with a narrow focus and an idiosyncratic coding style. He is not wasting his time following the endless trends of modern coding. He is not worried about which new JavaScript dialect he is going to master this month or which new isometric web framework he is going to wrestle with. He codes in C with some C++ extensions, he uses Emacs as his editor, he builds with batch files, and he debugs with Visual Studio. While these tools have changed over the years, Casey has not. He is nothing if not focused.

Thus Casey is a master of extemporaneous coding while explaining: the kind that every software engineer fears during Google and Facebook interviews. This means Casey has his coding skills down cold. He is unflappable.

Casey doesn’t know everything and his technique for searching MSDN while writing code shows how fancy IDEs with auto-completion are actually bad for us developers. He uses the Internet (and Google search) not as a crutch to copy and paste code but as a tool to dig deep into how APIs and compilers actually work. There seems to be nothing Casey can’t code himself.

Casey makes mistakes and corrects himself. He writes // Notes and // TODOs in his code to follow up on, as if he is working with a team. Casey interacts with his audience at the end of every stream and is not shy about either dismissing their questions or embracing them. Casey is becoming a better, more knowledgeable programmer before our eyes, and we’re helping him while he is helping us.

Casey is not cool or suave on camera. He swigs almond milk and walks away off screen to get stuff during the stream. But nothing about Handmade Hero would be substantially improved if Casey hired a professional video production team. In point of fact, any move away from his amateur production values would be met with suspicion from his audience. Any inorganic product placement would fail. Dell, Microsoft, and Google should support him but stay the heck away lest they burst the bubble of pure peer-to-peer show-and-tell that surrounds Casey.

I have 249 videos to go (and Casey has not stopped making videos)! I still don’t know if he delivers on his promise and creates an actual video game from scratch. (Please! No spoilers!) But I already know far more than I did about real-world game development, where the gritty reality of incompatible file systems and operating platform nuances makes Object Oriented Programming and interpreted bytecode luxuries a working developer can’t afford.

 

In Defense of Bubble Sort

Bubble sort is an algorithm with a very bad reputation. Robert Sedgewick, in Algorithms in C, notes that bubble sort is “an elementary sorting method” and “it does much more work” than it needs to. Donald Knuth is much more harsh when he states “the bubble sort seems to have nothing to recommend it, except a catchy name…” in The Art of Computer Programming.

Like an actor valued only for his good looks, bubble sort is an algorithm an enterprising coder should probably not admire.

Why is bubble sort so bad? Why does decent computer society advise young and impressionable software engineers to avoid it? Why am I devoting a whole blog post to the ugliest of sorting algorithms?

TIL: You can learn more from the flawed than you can from the flawless.

I’m not going to tell you why bubble sort is so bad. But before you google it, why not try to figure it out on your own?

The truth is that modern programmers don’t usually implement any standard sorting routines on the job—even the good ones! Generally there exists a collection class or library that is well written and well tested. If you have a mentor she will tell you not to bother with your own interpretations of the standard body of algorithms. That problem has been solved.

However, I think you’re missing out. Knowing how to implement well-known algorithms can help you understand when to use them, what their strengths and weaknesses are, and how to talk about them. Like in a job interview.

Bubble sort is a great place to start. In order to understand why it’s so bad you have to understand big O notation, how algorithms are classified, and how the ordering of input data impacts the performance of an algorithm.

It’s my opinion that bubble sort is only the most terribad sorting method in the platonic world of absolutely random, unordered data. Bubble sort is actually very appropriate for error detection or for data that is already mostly sorted. You know, the kind of data that you are likely to run into in real life.
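To make that concrete, here’s a quick Swift sketch (my own) of bubble sort with the classic early-exit check. On data that’s already mostly sorted it finishes in a pass or two:

    func bubbleSort(_ a: inout [Int]) {
        guard a.count > 1 else { return }
        var n = a.count
        var swapped = true
        while swapped {
            swapped = false
            for i in 1..<n {
                if a[i - 1] > a[i] {
                    // swap adjacent, out-of-order neighbors
                    let tmp = a[i - 1]
                    a[i - 1] = a[i]
                    a[i] = tmp
                    swapped = true
                }
            }
            n -= 1 // the largest value has bubbled to the end
        }
    }

    var data = [2, 1, 3, 4, 5, 6] // nearly sorted already
    bubbleSort(&data)             // one swap, two passes, done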

Identity used to sign executable no longer valid

The last thing I wanted to do on a Sunday morning was write a blog post about an Xcode executable problem. What I had planned to do was test my most recent Swift 2.0 SpriteKit game on my iPad and iPhone. Last night I got an “Identity used to sign the executable is no longer valid” error when attempting to run my code on a real device. Since it was around midnight I took the message as a notification that bedtime had arrived. Besides, a quick search on Stack Overflow would surely solve the problem, and if I got on SO now I would be up all night nosing around.

This morning I got the same message and found a post on SO that started four years ago with two pages of answers: The identity used to sign the executable is no longer valid. It’s been viewed 66K times and covers many ancient versions of Xcode. The top answer simply said to restart Xcode. Indeed, restarting, rebooting, or re-installing is always a great answer! So I tried the first two (restarting Xcode and rebooting all my devices) but no joy. And it’s a cheap answer. 99% of computer problems are temporarily solved by powering down and up the server or device, but the root cause sits like a malignant elf in the machine, biding its time, ready to strike again.

So I figured it out. My problem, in any case.

Last week I was giving a talk at SUNY Buffalo (shout out to Prof. Hartloff). It’s just far enough away from NYC that I had to stay overnight. I took a MacBook Pro that I don’t ordinarily use for development. When I was working on my new game and testing it on my iPad and iPhone (to get actual frame rates and the feel of touching the screen), Xcode discovered that I didn’t have an iOS development certificate on that MacBook and asked me if I wanted to revoke my current cert or copy it over from another machine. Since I didn’t have the other machine with me, I said revoke. Xcode did whatever it does and created a new iOS dev cert for me associated with that particular MacBook Pro.

Note to Apple: There has to be a better way for Apple-certified developers to manage their certificates in this age of clouds and connectivity. Can’t these certs reside on an Infinite Loop server?

Enough backstory!

If you get the dreaded “Identity used to sign executable no longer valid” error and restarting Xcode doesn’t work, here are the steps that should fix it for good.

Go to your Apple developer account certificate overview and read carefully and completely about how to manually manage certs and provision devices. Once you understand what you need to do it’s relatively simple.

  1. Revoke and delete all the old certs and profiles for devices you no longer own that have built up over the years. Clean it all up.
  2. Then, following the instructions from Apple, recreate your iOS development and distribution certificates.
  3. Re-provision your iOS devices.
  4. Download your certs and provisioning files and reinstall them into your Mac’s keychain.
  5. Clean and build your app.
  6. Now your app should run nicely on the iOS devices you’ve provisioned.

Note to You: Xcode is no longer managing your certs and profiles. But that’s OK. It was doing a bad job anyway.

Post Script

Why didn’t I post this info to Stack Overflow? Because this is a pretty radical solution, not without risk. SO, for better or worse, has become the place for copy-and-paste solutions that have not aged gracefully over time. Don’t get me wrong: I love Stack Overflow, recommend it, and use it all the time. But sometimes it’s not safe to post an answer to a problem that requires reading comprehension.

Lucky for you and me, my unpopular blog post will probably be the last item in your search for solutions to Apple certification problems.

Four Tips for Xcode Storyboard Users


Apple’s Xcode Storyboard is both your best friend and your worst enemy when it comes to developing state-of-the-art iOS, Mac OS X, tvOS, and watchOS apps. Sometimes what would be really hard, like associating a function with a gesture, is quick and easy. Sometimes what should be easy, like toggling a property, requires hunting down a checkbox in an inspector that only shows up with the proper object selected.

Below are four common problems with Xcode Storyboards and what works for me to resolve them. Xcode Storyboard evolves with every release: these tips work for Xcode 7.1.1.

1. Is your Storyboard rendering as XML source code and not graphics?


Somehow, someway, Xcode magically switches the view of a storyboard from Interface Builder – Storyboard to Source Code. No matter: just secondary-click on the name of the storyboard in the Project Navigator and select Open As -> Interface Builder – Storyboard.

What is an Interface Builder? Back when Mac OS X was NeXTSTEP, Interface Builder was a standalone developer tool for creating views. This ancient app lives on in the deep sub-basement of Xcode and sometimes unexpectedly appears.

2. Is your app blank in the simulator?


It might be that you need to select your main view controller in Main.storyboard and set the “Is Initial View Controller” checkbox in the Attributes Inspector.

This usually happens when you have deleted the default view controller on a storyboard. You know, when you want to start over.

If the default view controller is still around you can drag the Storyboard Entry Point arrow from the original view controller to point to your main view controller.

3. Having a hard time control-dragging between UI objects and the Document Outline?


You’re not alone! You can use the Connections Inspector to drag-create connections without holding down the control key.

In the inspector just drag from the circle to the controller that you want your UI object connected with.

Make sure you have the correct UI object on the storyboard and/or in the Document Outline selected so you connect the right things together.

4. Auto layout constraint values driving you batty?


Setting up constraints is one of the most unintuitive parts of Xcode’s storyboards. Part of the problem is there are several ways to do it and Xcode doesn’t always seem to do what you ask it to do. Don’t worry! You can use the Document Outline to select each individual constraint and adjust its values in the Size Inspector.

Generally I use the Align or Pin menus to initially set the auto layout constraints for a UI object.

Then I use the Resolve Auto Layout Issues menu to make the UI object conform to the initial constraint values with Update Frames.

Finally, since the UI object always looks weird at first, I select each constraint in the Document Outline, adjust it in the Size Inspector, or delete it and start over.

There you go, Xcoders! If I think of anything else I’ll update this post!