Is It 1998 Again?

Set the Dial to 1998

Let’s power up the time machine and take a quick trip back to the wide world of tech around 1998. Microsoft was the Khaleesi of software and controlled a vast empire through Windows, Office, and Internet Explorer. Microsoft marched its conquering army of apps over the desktop and through the Internet with innovations like XMLHttpRequest, USB peripherals, and intelligent assistants.

All three of these innovations would go on to fashion the world we live in today with websites that look and feel like apps, devices that plug and play with our computers and phones, and helpful voices that do our bidding.
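
If you've never met it, XMLHttpRequest is the little API that let a web page fetch data from a server without a full page reload, and it's what made websites start to feel like apps. A minimal sketch (shown in its modern spelling; the 1998-era original was an ActiveX object, and the endpoint here is made up for illustration; modern code would use fetch()):

    // Ask the server for fresh data and update the page in place, no reload.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/headlines");     // hypothetical endpoint
    xhr.onload = function () {
      document.title = xhr.responseText;   // update the page without reloading
    };
    xhr.send();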

But back in 1998 these groundbreaking technologies were siloed, incompatible, and unintuitive!

  • You’d find that fancy web apps were usually tied to a specific browser (Walled Garden).
  • You’d buy a USB mouse and often find that it couldn’t wiggle your pointer around the screen (Standard Conformance).
  • You’d grow frustrated with Clippy (aka Clippit the Office assistant) because the only job it could reliably do was “Don’t show me this tip again.” (Poor UX).

And this is exactly where we are in 2018! Still siloed, incompatible, and unintuitive!

  • Do you want to run that cool app? You have to make sure you subscribe to the walled garden where it lives!
  • Do you want your toaster to talk to your doorbell? Hopefully they both conform to the same standard in the same way!
  • Do you want a super intelligent assistant who anticipates your every need, and understands the spirit, if not the meaning, of your commands? Well, you have to know exactly what to say and how to say it.

Digital Mass Extinction

The difference between 1998 and 2018 is that the stakes are higher and the world is more deeply connected. The ancestors of products and platforms like Apple’s iOS, Google’s Cloud IoT Core, and Amazon’s Alexa existed in 1998; they just couldn’t do as much, and they cost a lot more to build and operate.

In between 1998 and 2018 we had a digital mass extinction event: the dot com bubble burst. I was personally involved with two companies that didn’t survive the bubble, FlashPoint (digital camera operating system) and BitLocker (online database application kit). Don’t even try to find these startups on Wikipedia. But a few remnants of each survive on the Internet: here and here.

Today, FlashPoint would be reincarnated as a camera-based IoT platform and BitLocker would sit somewhere between iCloud and MongoDB. Yet the core problems of silos, incompatibility, and lack of intuitive control remain. If our modern day apps, IoT, and assistants don’t tackle these problems head-on there will be another mass extinction event. This time in the cloud.

How To Avoid Busting Bubbles

Let’s take a look at the world after the dot com bubble burst for some clues on how to prevent the next extinction. After the startups of the late 1990s died off in the catastrophe of the early 2000s, designers, developers, and entrepreneurs moved away from silos, proprietary standards, and complicated user experiences. The modern open standards, open source, and simplicity movements picked up steam. It became mission critical that your cool app could run inside any web browser, that it was built on battle-tested open source, and that no user manuals were required.

Users found they could buy any computer, use any web browser, and transfer skills between hardware, software, and services. This dedication to openness and interoperability gave great results for the top and bottom lines. Tech companies competed on a level playing field and focused on who could be the most reliable and provide the highest performance and quality. Google and Netflix were born. Apple and Amazon blossomed.

Contrast that with the pre-bubble burst world of 1998 (and 2018) where tech companies competed on being first to market and building high walls around their proprietary gardens.

If we want to avoid the next tech bubble burst (around 2020?) we need Apple, Google, Amazon, and even Netflix to embrace openness and compatibility.

  • Users should be able to talk to Google, Siri, and Alexa in exactly the same way and get similar results (UX transferability).
  • Users should be able to use iOS apps on their Android phones (App compatibility).
  • Users should be able to share connected and virtual worlds such that smart speakers, smart thermostats, and augmented reality work together without tears (Universal IoT bus).

Google and Apple and Standards

At Google I/O last week the Alphabet subsidiary announced a few examples of bubble-avoidance features…

  • Flutter and Material Design improvements that work as well on Android as they do on iOS.
  • AR “cloud anchors” that create shared virtual spaces between Android and iOS devices.

But sadly Google mostly announced improvements to its silos and proprietary IP. I’m sure at WWDC next month Apple will announce the same sorts of incremental upgrades that only iPhone and Mac users will benefit from.

Common wisdom is that Apple’s success is built on its proprietary technology, from custom chips to custom software. This is simply not true. When I was at Apple in the 1990s, success (and failure) was built on a foundation of standards like CD-ROM, USB, and Unicode. Where Apple failed, in the long run, was where it went its own incompatible, non-interoperable way.

In 1998 the Mac OS was a walled garden failure. In 2018 macOS is a success built on open source BSD Unix. More than Windows, more than ChromeOS, and almost as much as Linux, macOS is an open, extensible, plug and play operating system compatible with most software.

The Ferris Wheel of Replatforming

Ask any tech pundit if the current tech bubble is going to burst and they will reply in all caps: “YES! ANY MOMENT NOW!!! IT’S GONNA BLOW!!!”

Maybe… or rather, eventually. Every up has its down. It’s one of the laws of thermodynamics. I remember reading a magazine article in 2000 which argued that the dot com boom would never bust, that we had, through web technology, reached escape velocity. By mid-2000 we were wondering if the tech good times would ever return.

Of course the good times returned. I’m not worried about the FAANG companies surviving these bubbles. Boom and bust is how capitalism works. Creative destruction has been around as long as Shiva, Noah, and Adam Smith. However, it can be tiresome.

I want us to get off the Ferris wheel of tech bubbles inflating and deflating. I want, and need, progress. I want my apps to be written once and run everywhere. I want my assistant of choice, Siri, to be as smart as Google and have access to all the skills that Alexa enjoys. I want to move my algorithms and data from cloud to cloud the same way I can rent any car and drive it across any road. Mostly, I don’t want to have to go back and “replatform.”

When you take an app, say a banking app or a blog, and rewrite it to work on a new or different platform, we call that replatforming. It can be fun if the new platform is modern with cool bells and whistles. But we’ve been replatforming for decades now. I bet Microsoft Word has been replatformed a dozen times by now. Or maybe not. Maybe Microsoft is smart, or experienced, enough to realize that just beyond the next bubble is Google’s new mobile operating system Fuchsia and Apple’s iOS 12, 13, and 14 ad infinitum…

The secret to avoiding replatforming is to build on top of open standards and open source, and to use adaptors and interpreters to integrate with the next Big Future Gamble (BFG). macOS is built this way. It can run on RISC or CISC processors and store data on spinning disks or solid state drives. It doesn’t care and it doesn’t know where the data is going or what type of processor is doing the processing. macOS is continuously adapted but seldom replatformed.
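
Here’s a minimal sketch of that adaptor idea (all the names are hypothetical, invented for illustration): the app talks to one storage interface and never learns whether the bytes land on a spinning disk or in a cloud bucket.

    // Two interchangeable backends behind one tiny interface.
    class DiskStore {
      save(key, value) { /* write to the local disk */ }
    }
    class CloudStore {
      save(key, value) { /* PUT to some cloud bucket */ }
    }

    // The app only ever talks to Storage, never to a backend directly.
    class Storage {
      constructor(backend) { this.backend = backend; }
      save(key, value) { return this.backend.save(key, value); }
    }

    // Swapping platforms is one line, not a replatforming project:
    const store = new Storage(new CloudStore());
    store.save("draft", "my next blog post");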

To make progress, to truly move from stone, to iron, to whatever comes after silicon, we need to stop reinventing the same wheels and instead use what we have already built as building blocks for solving new, richer problems.

 

Surface Pro 3: Patience is Rewarded

I recently acquired a Surface Pro 3 during a Black Friday sale at a local Microsoft Store. I knew I was in for a challenge but I was up for it. I’m from the generation that witnessed the rise of personal computing in the late 20th century, from clunky calculator-like boxes with tiny displays and obscure software commands to sleek modern slates of glass and metal that are all display and respond to touch and voice. It’s been a fun ride, and part of the thrill was trying to figure out how to get anything useful out of the ever-evolving personal computer. Quite frankly, I’ve been a bit bored with modern iOS and Android devices. Well designed and rapidly becoming indistinguishable, my iPad, iPhone, and Android phone pretty much work as expected and do no more and no less than their makers intend.

I’m also a little worried about the future of the general purpose personal computer. The biggest game changer in my life has not been the ability to play games, write blog posts, or edit movies on an affordable computer. It’s the ability to write computer programs that create games, blogging systems, and multimedia editors that has given me a community, confidence, and a livelihood.

PC sales have been and continue to be in decline. To fight this decline Apple and Microsoft are making their PCs more like tablets with keyboards: sealed boxes with safety belts and airbags that keep users from getting into trouble, like installing botnets or mining bitcoin.

In another five or ten years the general purpose personal computer that can compile and link C code into a tool or application might be a thing of the past. Computer engineering might require expensive development systems and a university education. That’s where we started in the 1960s. The web, the apps, and the games that we use every day have only been possible because kids, with little money or training, have been able to purchase a general purpose personal computer and start hacking around. For me, and millions like me, exploring the capabilities of a personal computer is like going on a hike. It’s fun and there is no other purpose than to do it. Software like Facebook, Flappy Bird, and FileMaker is just a side effect.

So I bought a Surface Pro 3, which is a real PC that looks like a tablet, and put my MacBook Air aside. I’ve spent the last few weeks figuring it out. I’m not quite there yet but I’m having a blast as I try to relearn basic computer skills, discover its limits, and find workarounds for its bugs and so-called features!

Below are some of my field notes:

  • The Surface Pro 3 is a real computer and a tablet combined. It’s light enough to use as an ereader but powerful enough to code with, play serious games, edit images, and do anything a modern laptop can do. It’s not a desktop replacement but it’s close enough for me.
  • The Surface Pro 3 is still a work in progress. There are many good ideas but either they aren’t implemented well or they should be reworked. Let’s look at a few examples:
    • The Type Cover, which is a full keyboard and cover combined, is an awesome idea but feels flimsy on my lap, makes too much noise while typing, has to be physically connected, and is awkward when you don’t need it but want to keep it handy. The little track pad on the Type Cover is terrible, not needed, and makes the text cursor bounce all over the screen. Luckily you can turn it off! You won’t miss it!
    • The Windows 8.1 user interface works pretty well with a finger or a pen, but there are a few major problems when editing text with Google Chrome as the web browser. Even with the Type Cover connected, a finger tap brings up the touch keyboard, which obscures the lower third of the screen. The touch keyboard goes away on its own when you type on the Type Cover, but it breaks your concentration.
    • Your finger is all you need if you are not drawing, except when it comes to small icons and so-called right mouse clicks. I’ve got enough motor control that only the smallest of icons and buttons are inaccessible to my index finger, but I can’t execute some right mouse clicks without a pen (or mouse) on the Surface Pro 3 with Google Chrome. Windows 8.1 maps a long-press to the right click but it doesn’t work for spell checking. As a terrible speller I need that popup menu of spelling corrections!
  • Windows 8.1 is a work in progress as well! It poorly combines the user experience of Windows 7 with a touch interface. The results are confusing and inconsistent:
    • There are two system control panels and it’s not always clear where a setting will show up.
    • If you are not connected to a wireless network the Windows 7 part of the interface tells you that “no networking hardware is detected”. But all you have to do is touch the little signal bars icon on the task bar and a list of wireless networks appears.
    • The Windows 8.1 start screen wants to replace the Windows 7 start menu. But the start screen feels like a disordered Mondrian painting. My advice to Microsoft: go back to the usability lab. Nobody uses Apple’s app launcher, Launchpad, either. We use the Dock and the Finder.
  • The best feature of the Surface Pro 3 for the practicing coder is that you can install and run real development software like Node.js, Ruby, Git, Sublime, Vim, Emacs, C, and other UNIX-based tools. Many of these tools have Windows equivalents and others run well via Cygwin and MSYS. Cloud9, the web-based IDE for web apps, also works fine on the Surface Pro 3 via Google Chrome. The HipChat client really needs a UI update but does its job so you can chat with your fellow engineers.
  • I’ve downloaded the free version of Microsoft’s Visual Studio and it runs very well on the Surface Pro 3. I’m not a Windows developer (anymore; the last time I developed for Windows was Windows 95!) but I’m impressed with Microsoft’s adoption of JavaScript as a primary programming language. I formally forgive Microsoft for JScript.
  • In my spare time I like to draw and paint with my computer and I’ve found that the Surface Pro 3 runs Adobe Photoshop and Clip Studio Paint (Manga Studio) very well. It has a few minor problems distinguishing a resting palm from a touch but the pressure sensitive pen is as good as a Wacom tablet.
  • If you need to use Microsoft Office, the Surface Pro 3 and Windows 8.1 are excellent at it. I know this isn’t cool but my favorite word processor is Microsoft Word. The Office apps simply don’t run well on a Mac and are missing important features. The one aspect of my MacBook Air that I don’t miss is struggling with Microsoft Word 2011!

So there you have it. If you enjoy a challenge, like being different, and have the patience to put up with some annoying bugs, then the Surface Pro 3 might be for you. It’s more realistically usable than a Chromebook but far from the antiseptic polish of a MacBook.

HyperCard: What Made it Leet

I posted a blog entry on HyperCard yesterday on The Huffington Post: HyperCard: The Original Bridge Over the Digital Divide. From the comments and tweets that I got it was pretty clear that us older hackers have fond memories of HyperCard. But there’s the rub: us older hackers. Kids today, i.e., people in their twenties, missed out on the whole HyperCard phenomenon.

The children of HyperCard are ubiquitous! Every web browser, every presentation app, and even Apple’s iBooks Author tool and Adobe’s AIR environment are Jar Jar Binks to HyperCard’s Qui-Gon Jinn.

But like Jar Jar himself, HyperCard’s children are flawed. All these apps and tools that link, animate, and script are over-specialized or over-complicated. iBooks Author is a great example. It’s a quick and easy (and free) way to create an app that runs on iPhone and iPad, but only if you want that app to look and feel like a high school textbook. Not sexy!

On the other end of the spectrum is RunRev’s LiveCode. It’s the most HyperCard-like of HyperCard’s successors: it allows you to import HyperCard stacks (originally), uses many of the same stack and card metaphors, and provides a programming language very similar to HyperTalk. Unfortunately LiveCode has become a tool for creating serious desktop and mobile apps. It’s become as complex as Adobe Flash/Flex/AIR and almost as expensive. I have to give RunRev points for keeping the dream alive, and they deserve to turn a profit, but it’s not about 10-year-old kids and school teachers making interactive lessons anymore.

Here’s my list of what made HyperCard great. I hope someone picks it up and runs with it.

  • You could do a lot, but you couldn’t do everything, with HyperCard. Limitations are a great way to conserve complexity and keep doors open to the general public.
  • HyperCard was a general “software construction kit”. You could create a presentation but it wasn’t optimized for presentations or anything else. In its heyday HyperCard was used for everything from address books to calculators to role playing games, but it always felt a little amateur and frivolous.
  • HyperCard was free, preinstalled, and came with a great set of sample stacks. Some people just used the stacks as they came out of the box. Others noodled around with a bit of customization here and there. A few brave souls created new stacks from scratch. But it didn’t cost a dime (originally) and was easy to find.
  • HyperCard’s authoring tools were bundled with its runtime environment. Any stack (originally) could be opened and every script inspected. If a stack did something cool you could easily cut-and-paste that neat trick into your own stack.
  • HyperCard’s scripting language, HyperTalk, was very, very English-like. More English-like than AppleScript and 10,000 times easier for amateurs and kids to master than JavaScript, Python, Ruby, or Processing. I’m sorry, but “for (var i = 0; i < x; i++)” is just not as readable as “repeat with x equal to the number of words in line 1” to normal literate humans; see the sketch after this list. (Python comes close; I give it props for trying.)
  • HyperCard stacks looked like HyperCard stacks. You could easily spot a stack in a crowd of icons, and when it was running a HyperCard stack had its own visual language (originally). This visual identity made HyperCard authors feel like members of a special club and didn’t require them to be User Experience experts.
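
To see the gap, here’s the rough side-by-side promised above. The HyperTalk is paraphrased from memory and the JavaScript is just one idiomatic equivalent, but the difference in readability for a normal literate human is real:

    // HyperTalk (paraphrased from memory):
    //   repeat with i = 1 to the number of words in line 1 of theText
    //     put word i of line 1 of theText
    //   end repeat

    // One JavaScript equivalent:
    const theText = "the quick brown fox\njumps over the lazy dog";
    const line1 = theText.split("\n")[0];              // "line 1 of theText"
    const words = line1.split(/\s+/).filter(Boolean);  // "the words in line 1"
    for (let i = 0; i < words.length; i++) {
      console.log(words[i]);                           // "word i of line 1"
    }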

Limited, Free, Simple, Open, Accessible, Generalized, Frivolous, Amateur: the characteristics of greatness that also define what is great about the World Wide Web (originally).

Note: The careful reader might wonder why I’m qualifying my statements with the phrase “(originally)”. Towards the end of its life HyperCard added complexity and locking and became something you had to buy. The parallels with the closed gardens and paywalls of today’s Inter-webs are a bit uncanny.

Dungeonators Battle UI Redesign


There is nothing quite like real user feedback. The Dungeonators game that I started coding about a year ago has been through several design iterations. Before I wrote a line of code I mocked up the whole UI and tested it on my friends and kids (the paper prototype, an honorable UI design tradition). And with each development build I tested everything again and even enlisted strangers. I must have played through the final release candidate a hundred times. (It was then that I realized that game programmers must get sick of their games if they properly test them!)

When I uploaded Dungeonators to the App Store on 14 October 2011 I was pretty confident about the game play and the user interface. Famous last words as they say 🙂

After an initial healthy growth curve Dungeonators installs tanked:


The message I got from this user adoption curve was simple: Dungeonators stinks!

So I went back to the drawing board to search for the stinky bits. After much reflection I realized three things:

  1. Dungeonators is too hard for casual users and too easy/dumb for hardcore gamers. People who play MMORPGs like World of Warcraft punch through my game. People who play Angry Birds get stuck around level 1.6. (Which is as far as you can go if you don’t know what you’re doing.)
  2. People don’t know what to touch. They want to touch the avatars and not the raid and spell frames. If you don’t know what raid frames and spell frames are then you are not going to get my game.
  3. I was going to have to fix this. I could have papered over the problem with a lengthy tutorial or FAQs, but Dungeonators is a casual game, not productivity software. I never read manuals and I skip tutorials. I expect my audience to have the same level of self respect!
So here is the new battle UI that tries to clean this mess up:
  • The good guy raid frames (on the left) are no longer touchable: They just display status. I couldn’t find a casual user who knew what a raid frame was so I got rid of raid frames.
  • Good guy spell frames are no longer associated with good guy raid frames: Spell frames are now modeless and never hidden. Each good guy has two spells available 24/7. As the game progresses the spells are automatically upgraded. I’ll have to rewrite the game mechanics to handle the fact that the total number of available spells has gone from 4 x 6 (which I understand is 24) to a mere 8. But that actually makes Dungeonators a heck of a lot simpler to program and to play (see the sketch after this list).
  • The bad guy raid frames are still touchable and still enable the player to switch targets. But in the original UI you could have separate bad guy targets for every good guy. In the revised UI all the Dungeonators are synchronized. It’s a gross simplification that is all for the best.
  • Touching the center of the screen, where the avatars live, is still not part of the game play, but if you do, the game will pause and bring up the main menu. I was able to kill two birds with one stone: no main menu button and a valid response to a user touch. Feedback is everything: in the original design touching the center of the screen was ignored and could have been interpreted as the game freezing up.
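
Here’s a minimal sketch of the simplified spell model described in the list above (written in JavaScript for readability, with hypothetical names; the actual game code looks nothing like this):

    // Four good guys, two always-visible spells each: 8 spells total, not 24.
    const party = ["warrior", "wizard", "rogue", "healer"].map(name => ({
      name,
      spells: [{ slot: 0, rank: 1 }, { slot: 1, rank: 1 }],  // modeless, 24/7
    }));

    // One shared target: touching a bad guy frame retargets the whole party.
    let sharedTarget = null;
    function touchBadGuyFrame(badGuy) { sharedTarget = badGuy; }

    // Spells upgrade automatically as the player progresses.
    function onLevelUp(level) {
      for (const hero of party) {
        for (const spell of hero.spells) spell.rank = level;
      }
    }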
In general I learned what I thought I always knew: iPhone games have to be simple, and casual gamers don’t have the time or energy for complex mechanics. But in practice I learned a lesson that every battle-hardened game developer must know after their first game is released: There is no better test case than the real world!

When Dogfooding Fails

For over 20 years we’ve been eating our own dog food in the software industry and it’s not working. For the uninitiated, dogfooding means actually using the product you’re developing. It started out as a radical idea at Microsoft and spread as a way to get developers to experience their customers’ pain. On the surface it was a very good idea, especially for an aging corporate culture divorced from its users. When I interviewed with Microsoft in 1995 I was told that all their engineers were given low-end 386 PCs. These PCs ran Windows 95 so slowly that the MS developers were incentivized to improve Windows’ performance to ease their own suffering. I don’t know about you, but I find that Windows is still pretty slow even in 2011 running on a really fast multicore PC. Clearly all this dogfooding is not helping.

So I’d like to frame an argument against dogfooding and in favor of something else: Plagiarism.

My argument goes like this:

  1. Dogfooding doesn’t work, or at least it’s not sufficient, because it’s not a good predictor of software success. Some software that is dogfooded is very successful. Most software that is dogfooded fails. (Most software fails and most software is dogfooded therefore dogfooding fails.)
  2. Dogfooding is really bad because it gives you a false sense of doing something to improve your product: “It’s OK, I know our software is terrible but we’re forcing our employees to dogfood it and out of sheer frustration they will make things better! Everyone go back to sleep…”
  3. Dogfooding reinforces bad product design. Human beings are highly adaptable (and last time I looked software devs were still considered human). We get used to things, especially in a culture where company pride and team spirit are valued (e.g. groupthink). Over time poor performance becomes typical performance. It starts to feel natural. Thus slow-loading Windows operating systems become the gold standard for thousands of loyal Microsoft employees and customers. Instead of fixing the software we are fixed by it.

I believe that the urge to dogfood is an emergent strategy of mature tech companies that want to rejuvenate their software development process. Management starts talking about dogfooding when they realize the spark of creativity has gone out and they want to reignite it.

One of the reasons dogfooding fails is that you never eat your own dog food in the beginning: the dog food doesn’t exist yet. You have to get your inspiration from outside the company. Microsoft Windows was not the first OS with a graphical mouse-driven shell. At some point the Windows devs must have looked at the Apple Lisa and Macintosh computers for inspiration. And the Apple devs looked at the Xerox Star. And the Xerox devs drew their inspiration from the physical world: The first GUI desktop was modeled on an actual physical desktop. No dog food there.

So rather than dogfooding we should be talking about plagiarism. If you want to make a great software product, eat another great software product and make it taste better; don’t eat yucky dog food.

Microsoft should force their devs to use the fastest computers running the best operating systems with really cool applications. I think they must have bought some MacBook Airs, installed Ubuntu and Spotify because Windows 8 looks pretty awesome 🙂