Nerd Fun Tech Trends

Mac Pro

Search for “Mac Pro” and you’ll get this article, “You probably won’t be buying a Mac Pro this year,” this video, “Do I Regret Buying the Mac Pro? 3 Weeks Later…,” and this Quora question, “Is the new Mac Pro worth the price?”

The conventional wisdom is that the Mac Pro is expensive, for professionals only, overpowered, and that there are better options from Apple for consumers and business users.

I don’t agree. Don’t get me wrong: if you need a computer for today’s challenges, then these helpful explainers on the Internet have good points.

  • The Mac Pro is pricey if all you’re doing is web browsing, emailing, and game playing.
  • The Mac Pro was definitely designed and built for professional video producers and all the other professionals who need multiple CPU cores and GPUs to get their jobs done.
  • The Mac Pro is hard to push to its limits. Its hardware and software are so well engineered and integrated that most of the time professionals see only a small percentage of their CPUs and GPUs utilized.
  • There are better options for consumers, business people, and even software developers (like me). MacBook Pro, iMac, and even Mac Mini are powerful and well suited to the typical computation required by word processors, spreadsheets, image editors, and developer tools.

But I have a problem with all of the above. When I bought a Mac Pro, I didn’t buy it just for my today problems. I bought it for my tomorrow problems as well.

Because the Mac Pro is a workstation-grade computer that runs cool, it’s going to last a long, long time. Heat and the buildup of dust are the enemies of computer durability. Computation creates a lot of heat, and that heat warps computer components. Heat also attracts particles of dust that stick to these components. I don’t know about you, but my personal computer runs 24/7 (like I do). I don’t ever want to turn it off because I’m always in the middle of two or three mission-critical projects.

Because the Mac Pro is modular and designed by Apple to be easy to upgrade, it can be a computer for many different types of users. I’m not the kind of professional who is going to chew through 28 CPU cores and 1.5 terabytes of RAM (ordinarily). This is why I bought the entry-level Mac Pro with 8 CPU cores, one GPU, and a quarter of a terabyte of storage. Today, I’m a lightweight. Once in a while I edit a video or render a 3D model. Usually I write words, draw diagrams, present slides, and compile code. Tomorrow is another story. Maybe I’ll get into crypto or machine learning. Maybe I’ll get into AR or VR. I don’t like limits. I don’t like to buy computers with built-in limitations.

It is true that I am not pushing the Mac Pro very hard at the moment. But the Mac Pro is much faster than the Mac Mini I replaced. Geekbench says that a far less expensive Mac Mini is faster for single-core work than an entry-level Mac Pro. I’m sure those benchmarks are true. But software doesn’t work with just a single core any more. Almost all modern software uses multiple threads of execution to save time and boost performance. Your web browser does this when loading a page and rendering images or playing video. Your word processor does this. Your developer tools do this. Everything I do with my Mac Pro happens faster than it did with my Mac Mini. I’m getting more done and spending less time waiting for files to load, images to render, and code to compile. Maybe it’s only 10% faster, but over time that timesaving adds up.

It is true that I don’t use the Mac Pro for every task. Sometimes I’m on the road (although not recently, because of this virus situation) and a MacBook Pro is the only option. Sometimes an iPhone, Apple Watch, or iPad Pro is the better option. But when the task requires me to sit for hours in the same position in the same room, the Mac Pro is the best option. Now that I have a Mac Pro I realize I was misusing my other computers. iPhones are not great for writing 70-page documents. You can do it, but it’s not great.

Most of my life I felt I had to go with the budget option. But I’ve always found the budget option to be barely worth it over the long run. If I keep this Mac Pro for five to ten years, it will become the budget option. Otherwise, the budget option is to buy a cheap computer every 2-3 years. Over time the costs of those cheap computers add up to serious money.

Yes, it’s a risk to bet that the Mac Pro will last, and still be relevant, for five to ten years. Won’t we have quantum computers with graphene nanobots by then?

Maybe, but I (most likely) will still be using the same von Neumann type of computer in ten years that I was using ten years ago. I think most of us will continue to use personal computers for work and play, just as we will still need to type with our fingers and see images on a screen with our eyes.

Based on my analysis (see below), a Mac Pro gets less expensive over time as its upgrade components fall in price and the cost of a total replacement is avoided.

Mac Pro cost projection over 10 years vs. a custom-built PC and a Dell

In the past I’ve found I’ve needed a new computer every two years. Why? The applications I use get more sophisticated, the components become outdated, and there are security flaws that need to be addressed that the OS alone can’t fix. And sometimes the computer just freezes up or fizzles out. With the Mac Pro I’m betting that instead of replacing it every two years I’ll be able to update it as needed, and that Apple’s and the industry’s storage, memory, CPU, and GPU prices will continue to fall (Moore’s Law).

In 1987 I bought a Macintosh II for almost the same price that I paid for the Mac Pro in 2020. Like the Mac Pro, that Mac II was an expandable powerhouse. It helped launch my career in software development. It didn’t last me 10 years (it was not as upgradable and modular as the Mac Pro) but I got a good five years out of it. It was a huge expense for me at the time, but as time wore on it was completely worth it. Those were five years when I had a computer that could do anything I asked of it and take me, computationally speaking, anywhere I needed to go.

Nerd Fun

RAM Disk

Slow Processing

I’m writing a book. A “user guide” for a side project. This book is ballooning to 50+ pages. You would think that today’s modern word processors could handle 50+ pages with the CPU cores, RAM, and SSD space at a modern desktop computer’s beck and call. That is what I thought. I was mistaken.

I started writing this book with Google Docs. After about 20 pages, responsiveness became less than snappy. After about 30 pages, the text insertion point (you might call it a cursor) became misaligned with the text at the end of the document.

This is not Google’s fault. Google Docs is a tour de force of HTML5 and JavaScript code that plugs into a web browser’s DOM. It works amazingly well for short documents of the type that you would create in a homework or business environment. But my book is a tough cookie for Google Docs. I have subscripts and superscripts, monospaced and variable-spaced fonts. I have figures, tables, page breaks, and keep-with-next styling. In today’s modern WYSIWYG Unicode glyph word processing world, it’s tough to calculate line lengths and insertion point positions the deeper into the document one goes.

So naturally I reached for my trusty copy of Microsoft Word. This is MS Word for Mac 16.35. I have been a proud owner of MS Word since 1990, when I knew members of the Mac Word engineering team personally.

Word handled the typography of my now 60-page document without any WYSIWYG errors. But it was sweating under the heavy load of scrolling between sections, search and replace, and my crazy non-linear editing style. Word was accurate but not snappy.

I read that many writers prefer to use ancient DOS or UNIX-based computers to write their novels. Now I know why. I want the whole document loaded into memory at once. I need to fly through my document without speed bumps or pauses as its chunks are loaded and unloaded from disk into RAM. But I also want typography turned on and accurate. I’m not writing a novel with only words painting the pictures in the reader’s mind. I’m writing a technical book about algorithms, and I need to illustrate concepts that quickly become jargon salad without visual representation.

Fooling the Apps

Then a solution out of the DOS and UNIX past hit me! I needed a RAM disk to accelerate Word. A RAM disk is a disk made not of a spinning platter or even solid-state flash but of pure volatile RAM!

There are several types of memory available to your operating system, classified by how fast and reliable they are. Your CPU can access caches for popular instructions. Your apps can access physical and virtual memory for popular chunks of documents. Your operating system can access local and remote storage to load and save files. In modern computer systems, tricks are used to fool apps and the operating system into thinking that one kind of memory or storage is some other kind.

This is what a RAM disk is. It’s a kind of trick where the operating system mounts a volume as a normal hard disk, but that volume is a temporary illusion. When you turn off your computer, a RAM disk disappears like a rainbow when the air dries up.

RAM disks are risky, because your computer could lose power at any moment, but they speed up applications like Word. Word was originally written in the days when memory was limited and typography was simple. Large documents could not fit into the RAM available. Word evolved to page parts of a document that were not being used in and out of memory behind the scenes, to make room for the part of the document being edited. This clever scheme made it possible to work on documents hundreds of pages long while displaying their contents with multiple styles and dynamic features like ligatures and spelling markup.

But why do I need to fool Word into thinking the disk it is running on is one kind of medium when it is another?

App Traditions

It’s been more than a decade since RAM got cheap and Unicode became standard. But most computer operating systems and applications are still written in the old paradigm of scarce memory and plentiful storage.

Most word processing mavens will tell you that hard disks are super fast these days and most computers have more RAM than they really need. And this is true, mostly. But try to get your apps to take advantage of those super fast disks and plentiful RAM! It’s not easy!

As a test I tried to use all 32 GB of RAM in my Mac mini. I loaded every app and game on the drive. I loaded every large document and image. I loaded all the Microsoft, Apple, and Adobe apps. The closest I could get was 22 GB. There was this unapproachable 10 GB of RAM that I could not use. The operating system and all these apps were being good collaborative citizens! They respectfully loaded and unloaded data to ensure that 10 GB was available in the case of a memory emergency. I had no way to tell these apps it was OK to be rude and pig out on RAM.

I had to fool them!

App Acceleration

To create a RAM disk in macOS you need to be familiar with UNIX and the Terminal. You don’t need to be an expert, but this is not for the faint of heart. This GitHub Gist explains what you need to do. I created a 10 GB RAM disk with that unapproachable 10 GB squirreled away in my Mac Mini with the following command line (the number after ram:// is the size in 512-byte sectors: 20,971,520 × 512 bytes = 10 GB):

diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nobrowse -nomount ram://20971520`

10 GB is enough to run most apps and their docs, but not the big AAA games or Xcode. 10 GB was more than fine for Word and my 60-page document.

10.75 GB RAM disk with MS Word and two docs.

The results have been amazing. Word rides my document like a Tesla Roadster as I jump around editing bits and bytes in my non-linear, unpredictable fashion.

After each editing session I just drag my documents to a safe location on my hard disk. I almost never need to reboot or turn off my Mac Mini. macOS Catalina has been rock solid for me. I’ve not lost any work and the RAM disk just hangs around on my desktop like a regular disk.

When I get around to it, I will write a script to create and load up the RAM disk and save the work with a shortcut. This setup has been so stable that I’m not in any hurry.
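A sketch of what that script might look like (the backup folder name and size are my own assumptions, and since hdiutil and diskutil are macOS-only tools the script politely does nothing elsewhere):

```shell
#!/bin/bash
# Sketch of a RAM disk session script. The backup path is an assumption;
# hdiutil and diskutil exist only on macOS, so bail out politely elsewhere.
command -v hdiutil >/dev/null 2>&1 || { echo "macOS only"; exit 0; }

SECTORS=20971520   # size in 512-byte sectors: 20971520 * 512 bytes = 10 GB
BACKUP="$HOME/Documents/ramdisk-backup"
VOLUME="/Volumes/RAM Disk"

# Create and mount the RAM disk (it appears as /Volumes/RAM Disk).
DEVICE=$(hdiutil attach -nobrowse -nomount ram://$SECTORS)
diskutil erasevolume HFS+ 'RAM Disk' $DEVICE

# Restore the previous session's files onto the RAM disk.
mkdir -p "$BACKUP"
cp -R "$BACKUP/." "$VOLUME/"
```

Saving work back is the same copy in reverse, cp -R "/Volumes/RAM Disk/." "$BACKUP/", which could be bound to a shortcut or run periodically.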

Now I want to test a Mac with hundreds of GB of RAM. An iMac can be loaded up with 128 GB! A Mac Pro can handle up to 1.5 TB! A RAM disk might be a much bigger performance improvement than an SSD or a fast CPU with a dozen cores. And GPUs are not much help in processing text or numbers or even slides!

Nerd Fun Programming

Unit Tests Equal Awesome

I’m working on a hobby project iOS app that lets me track my comic book collection. I’m interested in comic books because all these super heroes from my misspent youth rule the world of popular culture. While the cool kids were playing sports and going to parties I stayed at home reading comic books. In college I stopped and found other things to do (computer programming, talking to humans, MTV). But now in the September of my life comic books are back and grip our imaginations tightly with their mutant powers.

I wanted to get back to the source. Where did all this cultural power come from? As I started buying physical comics again I realized I needed to track these objects of my affection on my phone. And I bet there are already dozens of apps that do this but I like to create my own tools.

Book Binder is the app and you’ll find the code on GitHub.

Book Binder is an iOS app with a web backend. It’s an enormously long way from finished. I have lots of parts of it to figure out. The two current big problems are that comic book publishers can’t count and that the number of comic books published is huge.

Comic book publishers can’t count!

Let’s take the case of Daredevil. One of my favorites as a teen and now a big show on Netflix. For reasons that are beyond comprehension (probably marketing), Marvel has restarted the numbering of the “man without fear” five times! Daredevil #1 was published in 1964, 1998, 2011, 2014, 2015, and 2017–and I don’t mean republished (that happens too). Daredevil #1 from 1964 is a completely different comic book from all the Daredevil #1s that came after it! At one point Marvel tried to fix the problem with “legacy numbering,” and that’s why the current series of DD started with #598 in 2017 instead of #29. I have no doubt in my mind that Marvel will start over with Daredevil issue #1 soon.

The other counting problem created by comic book marketing is variant issues with different covers. The most recent issues of Doctor Strange may or may not be published with different covers for the same issue. Collectors apply letters to each variant but Marvel doesn’t seem to have official variant designations. I have Doctor Strange #2 variant edition, legacy #392, second printing. I’m not sure how many variant editions were published or what the variant letter for each edition should be.

This counting (really, identifying) problem makes it hard to come up with a good data structure for storing a comic book collection. I’m using a combination of a URI (uniform resource identifier) and JSON (JavaScript Object Notation). This way I can easily share data between the iOS app and web server, and with other comic book collectors, sellers, and buyers.

The number of comic books published is huge!

How many issues of Daredevil or Doctor Strange have been published since the 1960s? It’s hard to say. I estimate between 400 and 500 for Doctor Strange, but I’m probably not including annuals, specials, team-ups, side series, and all the variants. So let’s double that to 800 to 1,000. And that’s the “master of the mystical arts” alone. If Marvel has around 200 books and DC has the same, then we’re looking at a lower bound of 320K and an upper bound of 400K just for the two majors. Some of DC’s and Marvel’s comic books started in the 1930s and 1940s. If we include those and all the indie publishers (like Dark Horse) and all the publishers who have disappeared (like EC), then I’m going to estimate 1.6 million to 2 million unique comic books published in the USA. It’s really hard to say, because it’s hard to know where to draw the line with publishers and whether certain reprints should be included.

In any case, I’m not going to be able to store metadata for more than a fraction of those millions of published comic books on a phone. At best I can store a slice of this data locally and use one of the big clouds to keep a shared catalog. I just want all this info to be quick to access, cheap to store, and easy to reconcile.

Testing an app for that

Let me tell you, creating an app on my own as a hobby project is fun but hard. Like climbing a rock wall (which I would never personally do), you make a lot of false starts and have to retrace your steps trying to find a path forward.

This is where my unit tests have helped. No, not helped. Made everything possible!

I started with three or four data structures. I’m testing out ideas and changing my mind as the ideas do or don’t pan out. I’m not afraid to make large-scale changes to my code because every function of every class has unit tests to make sure that if I break anything I can fix it.

Today I realized I had to take a big step back. I could not instantiate a comic book collection from a list of comic book URIs. I also realized I was storing state info in the comic book URIs, which would not scale with millions of books to track. I finally realized that I had to enforce consistency in the formation of my comic book URIs (they all have to have four slashes). This way I could tell if a URI was mangled or incomplete.
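As a sketch of what that consistency check can look like (in Python rather than the app’s Swift, and with purely hypothetical field names, since the real Book Binder URI layout may differ):

```python
# Hypothetical sketch: a well-formed comic book URI has exactly four
# slashes, i.e. five fields, e.g. publisher/series/era/volume/issue.
# The field names are illustrative, not Book Binder's actual schema.
def is_well_formed(uri: str) -> bool:
    parts = uri.split("/")
    # Four slashes always yield five parts; a mangled or truncated
    # URI yields more or fewer parts and is rejected.
    return len(parts) == 5

print(is_well_formed("Marvel/Daredevil/1964/1/1"))  # True
print(is_well_formed("Marvel/Daredevil/1964"))      # False: incomplete
```

A check this dumb is easy to unit test exhaustively, which is exactly what makes it trustworthy when the URIs number in the millions.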

I had to touch every one of the six major objects that support my app… And I did! With confidence. Once I removed state from my URIs and got all my unit tests to pass, I fired up the app–and it worked fine. I had not added any bugs or broken any functionality. Whew!

If I didn’t have unit tests I’d be afraid to touch the code. I would be much more respectful of the code, and once I got some part of it to work I’d leave that part alone. As this is a lonely hobby project, I’d get stuck, give up, and move on to something easier.

Even with commercial software, with large teams of expert programmers, lack of tests and fear of changing the code result in most software projects falling behind, being abandoned, or just staying buggy.

I was sold on unit tests and Test Driven Development before and I’m resold every day I write code. I don’t care if you write the tests before or after the code that makes them pass (I do a bit of both). Just write the tests–especially if you are writing code for self-driving cars or robot military machines.

Nerd Fun Product Design Tech Trends The Future Uncategorized

Is It 1998 Again?

Set the Dial to 1998

Let’s power up the time machine and take a quick trip back to the wide world of tech around 1998. Microsoft was the Khaleesi of software and controlled a vast empire through Windows, Office, and Internet Explorer. Microsoft marched its conquering army of apps over the desktop and through the Internet with innovations like XMLHttpRequest, USB peripherals, and intelligent assistants.

All three of these innovations would go on to fashion the world we live in today with websites that look and feel like apps, devices that plug and play with our computers and phones, and helpful voices that do our bidding.

But back in 1998 these groundbreaking technologies were siloed, incompatible, and unintuitive!

  • You’d find that fancy web apps were usually tied to a specific browser (Walled Garden).
  • You’d buy a USB mouse and often find that it couldn’t wiggle your pointer around the screen (Standard Conformance).
  • You’d grow frustrated with Clippy (aka Clippit the Office assistant) because the only job it could reliably do was “Don’t show me this tip again.” (Poor UX).

And this is exactly where we are in 2018! Still siloed, incompatible, and unintuitive!

  • Do you want to run that cool app? You have to make sure you subscribe to the walled garden where it lives!
  • Do you want your toaster to talk to your doorbell? Hopefully they both conform to the same standard in the same way!
  • Do you want a super intelligent assistant who anticipates your every need, and understands the spirit, if not the meaning, of your commands? Well, you have to know exactly what to say and how to say it.

Digital Mass Extinction

The difference between 1998 and 2018 is that the stakes are higher and the world is more deeply connected. Products and platforms like Apple’s iOS, Google’s Cloud IoT Core, and Amazon’s Alexa had their counterparts in 1998–they just couldn’t do as much and they cost a lot more to build and operate.

In between 1998 and 2018 we had a digital mass extinction event: the dot-com bubble burst. I was personally involved with two companies that didn’t survive the bubble, FlashPoint (digital camera operating system) and BitLocker (online database application kit). Don’t even try to find these startups on Wikipedia. But there are a few remains of each on the Internet: here and here.

Today, FlashPoint would be reincarnated as a camera-based IoT platform and BitLocker would sit somewhere between iCloud and MongoDB. Yet the core problems of silos, incompatibility, and lack of intuitive control remain. If our modern day apps, IoT, and assistants don’t tackle these problems head-on there will be another mass extinction event. This time in the cloud.

How To Avoid Busting Bubbles

Let’s take a look at the post-dot-com-bubble world for some clues on how to prevent the next extinction. After the startups of the late 1990s died off in the catastrophe of the early 2000s, designers, developers, and entrepreneurs moved away from silos, proprietary standards, and complicated user experiences. The modern open standards, open source, and simplicity movements picked up steam. It became mission critical that your cool app could run inside any web browser, that it was built on battle-tested open source, and that no user manuals were required.

Users found they could buy any computer, use any web browser, and transfer skills between hardware, software, and services. This dedication to openness and interoperability gave great results for the top and bottom lines. Tech companies competed on a level playing field and focused on who could be the most reliable and provide the highest performance and quality. Google and Netflix were born. Apple and Amazon blossomed.

Contrast that with the pre-bubble burst world of 1998 (and 2018) where tech companies competed on being first to market and building high walls around their proprietary gardens.

If we want to avoid the next tech bubble burst (around 2020?) we need Apple, Google, Amazon, and even Netflix to embrace openness and compatibility.

  • Users should be able to talk to Google, Siri, and Alexa in exactly the same way and get similar results (UX transferability).
  • Users should be able to use iOS apps on their Android phones (App compatibility).
  • Users should be able to share connected and virtual worlds such that smart speakers, smart thermostats, and augmented reality work together without tears (Universal IoT bus).

Google and Apple and Standards

At Google I/O last week the Alphabet subsidiary announced a few examples of bubble-avoidance features…

  • Flutter and Material Design improvements that work as well on Android as they do on iOS.
  • AR “cloud anchors” that create shared virtual spaces between Android and iOS devices.

But sadly, Google mostly announced improvements to its silos and proprietary IP. I’m sure at WWDC next month Apple will announce the same sorts of incremental upgrades that only iPhone and Mac users will benefit from.

Common wisdom is that Apple’s success is built on its proprietary technology, from custom chips to custom software. This is simply not true. When I was at Apple in the 1990s, success (and failure) was built on a foundation of standards, like CD-ROM, USB, and Unicode. Where Apple failed, in the long run, was where it went its own incompatible, inoperable way.

In 1998 the Mac OS was a walled garden failure. In 2018 macOS is an open source BSD Unix-based success. More than Windows, more than ChromeOS, and almost as much as Linux, macOS is an open, extensible, plug-and-play operating system compatible with most software.

The Ferris Wheel of Replatforming

Ask any tech pundit if the current tech bubble is going to burst and they will reply in all caps: “YES! ANY MOMENT NOW!!! IT’S GONNA BLOW!!!”

Maybe… or rather, eventually. Every up has its down. It’s one of the laws of thermodynamics. I remember reading a magazine article in 2000 which argued that the dot-com boom would never bust, that we had, through web technology, reached escape velocity. By mid-2000 we were wondering if the tech good times would ever return.

Of course the good times returned. I’m not worried about the FAANG companies surviving these bubbles. Boom and bust is how capitalism works. Creative destruction has been around as long as Shiva, Noah, and Adam Smith. However, it can be tiresome.

I want us to get off the ferris wheel of tech bubbles inflating and deflating. I want, and need, progress. I want my apps to be written once and run everywhere. I want my smart speaker of choice, Siri, to be as smart as Google and have access to all the skills that Alexa enjoys. I want to move my algorithms and data from cloud to cloud the same way I can rent any car and drive it across any road. Mostly, I don’t want to have to go back and “replatform.”

When you take an app, say a banking app or a blog, and rewrite it to work on a new or different platform, we call that replatforming. It can be fun if the new platform is modern with cool bells and whistles. But we’ve been replatforming for decades now. I bet Microsoft Word has been replatformed a dozen times by now. Or maybe not. Maybe Microsoft is smart, or experienced, enough to realize that just beyond the next bubble is Google’s new mobile operating system Fuchsia and Apple’s iOS 12, 13, and 14, ad infinitum…

The secret to avoiding replatforming is to build on top of open standards and open source. To use adaptors and interpreters to integrate with the next Big Future Gamble (BFG). macOS is built this way. It can run on RISC or CISC processors and store data on spinning disks or solid state drives. It doesn’t care and it doesn’t know where the data is going or what type of processor is doing the processing. macOS is continuously adapted but is seldom replatformed.

To make progress, to truly move from stone, to iron, to whatever comes after silicon, we need to stop reinventing the same wheels and instead use what we have built as building blocks upon which to solve new, richer problems. To progress.


Nerd Fun Programming

Emoji Tac Toe Open Sourced

Happy Father’s Day!


To celebrate my 28th Father’s Day I’ve open sourced Emoji Tac Toe. It’s actually not a big deal to anyone but me. It’s kind of scary open sourcing code that you wrote alone and without first cleaning it up. But what the heck. If someone can learn something from this code, why keep it locked away? It’s already been on GitHub for a year. It’s not getting any prettier under lock and key.

You can find the source code on GitHub, and you can download the iOS app on the App Store at John Pavley > Emoji Tac Toe.

You can play Emoji Tac Toe on your iPhone, your iPad, and your Apple Watch. (As long as you are running iOS 9.3 or later.)

I guess I should chat a little bit about the code just in case you want to take a peek.


I plan on refactoring the code quite a bit. I want to refactor it so that the core is separate from the iOS implementation and I can port it easily to the web and to Android. Maybe Windows too. Who knows! I’m going to start this process by adding unit tests and then tearing it apart.


I plan on updating the code for iOS 11, including Swift 4 and ARKit. I’ve been meaning to add multiplayer over Bluetooth and MessageKit capabilities. I also want to complete the tvOS and macOS implementations.


The core code lives in the EmojiTicTacToe.swift file. Since there are more emoji than I can count, I have cherry-picked the 1,100 that I wanted to include. This is still too many, and I should cut it down further. It’s too many emoji because choosing which emoji to play with is difficult. I can’t use Apple’s keyboard user interface because I can’t restrict it to showing only emoji. And I don’t want to waste my time recreating Apple’s design. Also, this game is not about typing anything, so a keyboard doesn’t make sense.

Instead I create an array of emoji and it works very well. iOS is great at dealing with Unicode.

Tic tac toe is an ancient and simple game. There are only eight winning vectors. So it’s easy to brute force: just check any board against the eight vectors.
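Here is what that brute-force check can look like. This is an illustrative sketch in Python (the app itself is written in Swift), with the board as a flat list of nine cells:

```python
# The eight winning vectors of a 3x3 board, as cell-index triples.
WINNING_VECTORS = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """Return the winning mark (an emoji), or None if nobody has won.
    board is a flat list of 9 cells, each an emoji or None."""
    for a, b, c in WINNING_VECTORS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

board = ["🐶", "🐱", None,
         "🐶", "🐱", None,
         "🐶", None, None]
print(winner(board))  # 🐶 (the left column)
```

Eight tuples and one loop: there is no cleverness to get wrong, which is the whole appeal of brute force here.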

As emoji are text, it’s simple to translate a game board into a string and back. Interoperability with messaging and tweeting is free. This is why I love emoji! Rich graphics without the cost of image file management. One day, when operating systems allow custom emoji, we’ll stop using PNGs and JPEGs altogether. On that day the web will be faster and safer than ever!
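A sketch of that round trip, again in Python rather than the app’s Swift. (One caveat: the per-character split below assumes single-code-point emoji; multi-code-point sequences such as skin tones or ZWJ combinations would need grapheme-aware splitting.)

```python
# Serialize a board to a plain string and back. Empty cells become a
# placeholder character so the string stays exactly nine glyphs long.
EMPTY = "·"

def board_to_string(board):
    return "".join(cell if cell is not None else EMPTY for cell in board)

def string_to_board(s):
    return [None if ch == EMPTY else ch for ch in s]

board = ["🐶", None, "🐱"] + [None] * 6
text = board_to_string(board)
print(text)                            # 🐶·🐱······
print(string_to_board(text) == board)  # True
```

That nine-glyph string is exactly what can be dropped into a message or a tweet and reconstituted on the other end.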

Given the simplicity of the game, my AI is equally simple. When it’s the AI’s turn, I look for an open cell, a blocking move, or a winning move, using the eight winning vectors as my guide. Because tic tac toe is too easy, to prevent absolute boredom I add a bit of random error into the AI’s thinking so that if the player is paying attention she can beat the machine.
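A sketch of that decision logic (Python, purely illustrative; the real Swift implementation and its error rate may differ). The ordering here, win first, then block, then any open cell, is one reasonable way to arrange those checks:

```python
import random

# The eight winning vectors of a 3x3 board, repeated here so the
# sketch is self-contained.
WINNING_VECTORS = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),
    (0, 3, 6), (1, 4, 7), (2, 5, 8),
    (0, 4, 8), (2, 4, 6),
]

def ai_move(board, ai, human, mistake_rate=0.2):
    """Return the index of the cell the AI should take."""
    open_cells = [i for i, cell in enumerate(board) if cell is None]
    # A dash of deliberate error keeps the game winnable for humans.
    if random.random() < mistake_rate:
        return random.choice(open_cells)
    # Take a winning move if one exists, otherwise block the human.
    for mark in (ai, human):
        for a, b, c in WINNING_VECTORS:
            line = [board[a], board[b], board[c]]
            if line.count(mark) == 2 and line.count(None) == 1:
                return (a, b, c)[line.index(None)]
    return random.choice(open_cells)
```

With mistake_rate set to zero the AI never misses a win or a block; raising it is what lets an attentive player sneak past the machine.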


ViewController.swift contains iPhone/iPad specific code.

I found I needed some iPad specific code to avoid a crash when presenting Apple’s standard share UIActivityViewController. I did not open a radar.

I handle several gestures that I’m sure my players never discover, but they are there nonetheless:

  • A long press on an emoji can trigger an attack if battle mode is enabled. A few emoji will do cool tricks in battle mode. There are several battle mode strategy functions that implement these tricks. My favorite is youWin which lets the other player win.
  • Panning up and down turns sounds on and off. That should be a standard gesture for all games!
  • A shake starts a new game with a random pair of emoji. This is the best way to start a new game, as choosing particular emoji is a pain.


NewGameViewController.swift contains the code for the game settings on the iPhone/iPad.

Originally, I had the iPhone app and Watch extension collaborate so that one could control the other. But the effort was not worth the reward. Now the two versions are completely independent.

I use a UIPickerView with two components to enable the player to choose two emoji. It wouldn’t be bad at all if there were only 20 or 30 emoji. But it’s just too much spinning to find a particular emoji out of 1,100!

If the user tries to choose the same emoji for player 1 and player 2 (or the AI), I detect that and have the UIPickerView jump to the next emoji. See ensureRowsAreUnique(component:row:).

To make finding a particular emoji a bit easier, I allow the player to jump over groups of emoji in the UIPickerView by tapping on the labels for each player. I’m guessing nobody will ever find this feature, but the labels are colored blue to indicate they’re buttons.


InterfaceController.swift contains the code for a very simple version of Emoji Tac Toe that runs on watchOS. I actually like this version of the game best. No battle mode, no sound, no popovers, no choice of emoji. Just a single-player game you tap out on your watch while waiting for the train.

Programming the UI for watchOS reminded me of my Visual Basic days! Each button view has its own handler function. No way to aggregate the touches and dispatch them with a switch statement!

Final Notes

All-in-all this code is pretty rough and needs a lot of work. But it does work and hardly ever crashes. So that’s something. There is a half-finished tvOS implementation, but I’m going to rethink it, so don’t look at it!

I had to delete the sound effects that I didn’t create myself, so your build of Emoji Tac Toe will not sound like mine. But otherwise you are free, within the MIT License constraints, to do what you like with the code.

Nerd Fun Programming Tech Trends

JavaScript, Swift, and Kotlin Oh My!

This blog post now lives on  (and it’s much shorter and better!)


Nerd Fun Scifi The Future

Eternity versus Infinity

I just completed reading, at long last, Isaac Asimov’s The End of Eternity. Like many of his novels, EoE is a morality play, an explanation, a whodunit, and a bit of a prank. The hero, Andrew Harlan, is a repressed buffoon at the mercy of various sinister forces. Eventually Harlan finds his way to a truth he doesn’t want to accept. In EoE Asimov plays with time travel in terms of probabilities. This mathematical exploration of time travel resolves many of the cliché paradoxes that scifi usually twists itself into. Go back in time and prevent your mother from meeting your father and what you have done is not suicide. You have simply reduced the probability of your future existence.

In EoE Asimov considers two competing desires in human culture: The urge to keep things the same forever and the urge to expand and explore. Asimov distills these urges into the Eternals, who fight what they think of as dangerous change by altering time, and the Infinites, who sabotage the Eternals because they believe “Any system… which allows men to choose their own future, will end by choosing safety and mediocrity…”

In one masterful stroke Asimov explains why we haven’t invented time travel. If we did, we’d kill baby Hitler! But then we’d work on eliminating all risks! Eventually we’d trap ourselves on planet Earth and die out slowly and alone when our single world gets hit by a comet or our Sun goes nova. In EoE, Asimov has a force of undercover Infinites working tirelessly to keep the probability of time travel at a near zero value. This way humanity continues to take risks, eventually discovers space flight, and avoids extinction by populating the galaxy.

You’re probably not going to read EoE. It’s a bit dry for the 21st century. There are no superheroes, dragons, or explicit sex. While there is a strong female character, she spends most of her time out of sight and playing dumb. EoE is a product of the 1950s. Yet for a book where a computer is called a “computaplex” and the people who use them are confusingly called “computers,” EoE’s underlying message and themes apply very closely to our current age.

In our time, we have the science and technology to move forward by leaps and bounds to an unimaginable infinite, and we’re rapidly doing so, except when we elect leaders who promise to return us to the past and we follow creeds that preach intolerance of science. I’ve read blog posts and op-eds that claim we can’t roll back the future. But we seem to be working mightily to pause progress. Just like the Eternals in EoE, many of us are concerned about protecting the present from the future. Teaching Creationism alongside Evolution, legislating Uber and Airbnb out of existence, and keeping Americans in low value manufacturing jobs are just a few examples of acting like Asimov’s Eternals and avoiding the risks of technological progress at all costs.

I get it! I know that technological advancement has many sharp edges and unexpected consequences. Improve agriculture with artificial ingredients and create an obesity epidemic. Improve communication with social media and create a fake news epidemic. People are suffering and will continue to suffer as software eats the world and robots sweep up the crumbs.

But what Asimov teaches us, in a book written more than 70 years ago, is that if we succeed in staying homogeneous-cultured, English-speaking, tradition-bound, God-fearing, binary-gendered, unvaccinated, and non-GMO we’re just getting ready to die out. When the next dinosaur-killer comet strikes, we will be stuck in our Garden of Eden as it goes up in flames. As Asimov admits, it might take thousands of years for humanity to die out in our self-imposed dark ages, but an expiration date means oblivion regardless of how far out it is.

Asimov shows us in EoE, and in the rest of his works as well, that there is a huge payoff for the pain of innovation and progress. We get to discover. We get to explore. We get to survive.

Let’s face it. We don’t need genetic code editors and virtual reality. We don’t need algorithms and the Internet of Things. Many of us will never be comfortable with these tools and changes. Many of us long for the days when men were men, women stayed out of the way, and jobs lasted for a lifetime. This is not a new phenomenon: The urge to return to an earlier golden age has been around since Socrates complained that writing words down would destroy the art of conversation.

At the moment, it feels like the ideals of the Eternals are trumping the ideals of the Infinites. While a slim minority of entrepreneurs tries to put the infinity of space travel and the technological singularity within our reach, a majority of populist politicians are using every trick in the mass communications book to prevent the future from happening. We have our own versions of Asimov’s Eternals and Infinites today. You know their names.

Like Asimov, I worry about the far future. We’re just a couple of over-reactions to a couple of technological advances away from scheduling the next dark ages. That’s not a good idea. The last dark ages nearly wiped Europe off the face of the earth when the Black Plague hit. Humanity might not survive the next world crisis if our collective hands are too fearful of high-tech to use it.

At the end of EoE Harlan figures out that, spoiler alert, taking big risks is a good idea. Harlan chooses the Infinites over the Eternals. I’d like us to consider following in Harlan’s footsteps. We can’t eliminate all technological risks! Heck, we can’t even eliminate most risks in general! But we can embrace technological progress and raise the probability of our survival as a species.

Nerd Fun Programming

Notes on NSUserPreferences

You can set and get NSUserDefaults values from any view controller and the app delegate, so they are a great way to pass data around the various parts of your iOS app.

Note: NSUserDefaults don’t cross the iOS/watchOS boundary. iOS and watchOS apps each have their own set of user defaults.

In the example below you have a class with a `Bool` property whose value you want to track between user sessions.

// inside some view controller class

var showAll = true

// inside viewDidLoad()

    if let savedShowAll = NSUserDefaults.standardUserDefaults().objectForKey("savedShowAll") {
        showAll = savedShowAll as! Bool
    }

// inside the action associated with a switch control

    showAll = !showAll
    NSUserDefaults.standardUserDefaults().setObject(showAll, forKey: "savedShowAll")

In the code above…
– The var showAll is the data model for the switch object’s value
– The string savedShowAll is the key for the stored value
– Use NSUserDefaults.standardUserDefaults().objectForKey() to access a stored value
– Use the if let idiom as the stored value might not exist
– Use NSUserDefaults.standardUserDefaults().setObject() to save the value
– Apparently setObject() never fails! 😀
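For readers on newer toolchains, the same idiom can be sketched in modern Swift, where NSUserDefaults was renamed to UserDefaults (Swift 3). The "savedShowAll" key below is the same hypothetical key used above.

```swift
import Foundation

// The save/restore idiom above in modern Swift. Note that
// bool(forKey:) returns false when the key is missing, so check
// for existence first if false is a meaningful saved value.
let defaults = UserDefaults.standard

var showAll = true
if defaults.object(forKey: "savedShowAll") != nil {
    showAll = defaults.bool(forKey: "savedShowAll")
}

// Saving: still no error to handle, just like setObject() above.
defaults.set(showAll, forKey: "savedShowAll")
```

The shape of the code is unchanged; only the names got shorter.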

Nerd Fun Tech Trends The Future

Faceless Phone

About twelve years ago I attended a management leadership training offsite and received a heavy glass souvenir. When I got home after the event I put that thingamabob, which is officially called a “tombstone”, up on a shelf above my desk. Little did I know that after more than a decade of inactivity that souvenir would launch me into the far future of the Internet of Things with an unexpected thud.

Last night before bed I set my iPhone 6 Plus down on my desk and plugged it in for charging. Then I reached up to the shelf above to get something for my son and BANG! The tombstone leapt off the shelf and landed on my desk. It promptly broke in half and smashed the screen of my iPhone. In retrospect I see now that storing heavy objects above one’s desk is baiting fate, and every so often fate takes the bait.

I’ve seen many people running around the streets of Manhattan with cracked screens. My screen was not just cracked. It was, as the kids say, a crime scene. I knew that procrastination was not an option. This phone’s face was in ruins and I needed to get it fixed immediately.

No problem! There are several wonderful Apple Stores near me and I might even have the phone covered under Apple Care. Wait! There was a problem! I had several appointments in the morning and I wasn’t getting to any Apple Stores until late afternoon.

Why was this a big deal? Have you tried to navigate the modern world without your smart phone lately? No music, no maps, no text messages! Off the grid doesn’t begin to cover it! My faceless phone was about to subject me to hours of isolation, boredom, and disorientation!

Yes, I know, a definitive first world problem. Heck! I lived a good 20 years before smart phones became a thing. I could handle a few hours without podcasts, Facebook posts, and Pokemon Go.

In the morning I girded my loins, which is what one does when one’s iPhone is smashed. I strapped on my Apple Watch and sat down at my desk for a few hours of work-related phone calls, emails, and chat messages.

Much to my surprise, even though I could not directly access my phone, almost all of its features and services were available. While the phone sat on my desk with a busted screen its inner workings were working just fine. I could make calls and send text messages with my watch, with my iMac, and with voice commands. I didn’t have to touch my phone to use it! I could even play music via the watch and listen via bluetooth headphones. I was not cut off from the world!

(Why do these smart phones have screens anyway?)

Around lunch time I had to drive to an appointment and I took the faceless phone with me. I don’t have Apple CarPlay but my iPhone synced up fine with my Toyota’s entertainment system. Since I don’t look at my phone while driving, the cracked screen was not an issue. It just never dawned on me before today that I don’t have to touch the phone to use it.

I imagine that our next paradigm shift will be like faceless phones embedded everywhere. You’ll have CPUs and cloud access in your wrist watch, easy chair, eye glasses, and shoes. You’ll have CPUs and cloud access in your home, car, office, diner, and shopping mall. You’ll get text messages, snap pictures, reserve dinner tables, and check your calendar without looking at a screen.

Now, we’re not quite there yet. I couldn’t use all the apps on my phone without touching them. In fact I could only use a limited set of the built-in apps and operating system features that Apple provides. I had to do without listening to my audiobook on Audible and I couldn’t catch any Pokemon. Siri and Apple Watch can’t handle those third party app tasks yet.

But we’re close. This means the recent slowdown in smart phone sales isn’t the herald of hard tech times. It’s just the calm before the gathering storm of the next computer revolution. This time the computer in your pocket will move to the clouds. Apple will be a services company! (Google, Facebook, and Amazon too!) Tech giants will become jewelry, clothing, automobile, and housing companies.

Why will companies like Apple have to stop making phones and start making mundane consumer goods like cufflinks and television sets to shift us into the Internet of Things?

Because smooth, flawless integration will be the new UX. Today user experience is all about a well designed screen. In the IoT world, which I briefly and unexpectedly visited today, there won’t be any user interface to see. Instead the UX will be embedded in the objects we touch, use, and walk through.

There will still be some screens. Just as today we still have desktop computers for those jobs that voice control, eye rotations, and gestures can’t easily do. But the majority of consumers will use apps without icons, listen to playlists without apps, and watch videos without websites.

In the end I did get my iPhone fixed. But I’m going to keep visiting the IoT future now that I know how to find it.

Nerd Fun Programming

On the Naming of Functions

A thoughtful coder once said that “it’s more important to have well organized code than any code at all.” Actually several leading coders have said this. So I’ll append my name to the end of that long linked list.

I’m trying to develop my own system for naming functions such that it’s relatively obvious what those functions do in a general sense. Apple, Google, Microsoft and more all have conventions and rules for naming functions. Apple’s conventions are the ones I know the best. For some reason Apple finds the word “get” unpleasing while “set” is unavoidable. So you’ll never see getTitle() as an Apple function name but you will see setTitle(). This feels a little odd to me as title() could be used to set or get a title but getTitle clearly does one job only. I know that title() without an argument can’t set anything but I’m ok with the “set” all the same.

So far I’m testing out the following function naming conventions:

  • calcNoun(): dynamically calculates a noun based on the current state of internal properties
  • cleanNoun(): returns a junk-free normalized version of a noun
  • clearNoun(): removes any data from a noun and returns it to its original state
  • createNoun(): statically synthesizes a noun from nothing
  • updateNoun(): updates the data that a noun contains based on the current state of internal properties
  • getNoun(): dynamically gets a noun from an external source like a web server
As you can see I like verbs in front of my nouns. In my little world functions are actions while properties are nouns.

calcNoun(), createNoun(), and getNoun() are all means of generating an object, each with a semantic signal about the process of generation.

cleanNoun() returns a scrubbed version of an object as a value. This is really best for Strings and Numbers, which tend to accumulate whitespace and other gunk from the Internet and user input.

clearNoun() and updateNoun() are both means of populating the data that an object contains, and they signal the end state of the updating process. (Maybe I should have one update function and pass in “clear” data, but many times clearing is substantially different from updating.)
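Here’s a small sketch of these conventions applied to a made-up type. The Report struct and all of its fields are purely illustrative, not code from any real project:

```swift
import Foundation

// A hypothetical Report type showing the verb-noun conventions in use.
struct Report {
    var title = ""
    var rowCount = 0

    // createNoun(): statically synthesizes a noun from nothing
    static func createReport() -> Report {
        return Report()
    }

    // cleanNoun(): returns a junk-free, normalized version of a noun
    func cleanTitle() -> String {
        return title.trimmingCharacters(in: .whitespacesAndNewlines)
    }

    // clearNoun(): removes any data and returns to the original state
    mutating func clearTitle() {
        title = ""
    }

    // calcNoun(): dynamically calculates a noun from internal state
    func calcSummary() -> String {
        return "\(cleanTitle()): \(rowCount) rows"
    }
}
```

At a glance the call site tells you whether a function computes, scrubs, resets, or fabricates, which is the whole point of the convention.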

I hope this helps my code stay organized without wasting my time trying to map the purpose of a function to my verb-noun conventions!