
Virus and Science

Like many, my life has been disrupted by this virus. Honestly, I don't even want to acknowledge this virus. The only virtue of the Coronavirus is that it should now be widely apparent that we, humanity, are all in the same boat and that boat is fragile.

In The War of the Worlds, published in 1898, H. G. Wells wrote about a technologically advanced species invading the Earth and destroying its native inhabitants. No forces the earthlings could muster could stop the aliens and their machines. In the final hour, when all hope for the Earth was lost, the “Martians—dead!—slain by the putrefactive and disease bacteria against which their systems were unprepared; slain as the red weed was being slain; slain, after all man’s devices had failed, by the humblest things that God, in his wisdom, has put upon this earth.”

I just want to note that in the world of today we are the Martians. We are technologically advanced, bent on remaking the world, and yet somehow unprepared for the task.

I believe we are unprepared because our political, business, and cultural systems have not kept up with the pace of technological change. I do not believe we should go back to living like hunter-gatherers or the Amish (even the Amish get vaccinated these days). I do believe we should take a breath and catch up with our creations.

The Coronavirus was not created by technology (in spite of the conspiracy theories). Mother Nature is just doing what she always does, evolving her children and looking for opportunities for genetic code to succeed. This is evolution in action, and we see it with antibiotic-resistant bacteria and the rise of insulin resistance in modern humans. One is caused by how quickly microorganisms evolve and the other is caused by how slowly macro-organisms evolve.

We have the science and technology to handle pandemics as well as antibiotic resistance and all the rest, but we have to listen to scientists and doctors. I know that sometimes science and medicine seem to go against common sense, contradict long and deeply held personal beliefs, and have a habit of changing as new data comes in. This makes science and medicine vulnerable to ridicule, misuse, and misunderstanding.

If we start listening to scientists and doctors, instead of second-guessing and villainizing them, species-level problems like pandemics, antibiotic resistance, and global warming will not go away, but we will be able to flatten their curves. If we don't stop acting like science is just one of many sources of truth, even though we are mighty Martians, we will be felled under the weight of our own ignorance.

In The Age of Louis XIV Will and Ariel Durant wrote about the rise of science from 1648 to 1715, “Slowly the mood of Europe, for better or worse, was changing from supernaturalism to secularism, from the hopes of heaven and fears of hell to plans for the enlargement of knowledge and the improvement of human life.”

Are we stuck in the 17th century or can we move on and accept that we’re living in the 21st?


XML and Immortal Documents

I just read Jeff Huang's A Manifesto for Preserving Content on the Web. He made some good suggestions (seven of them) to help keep web content available as technical progress works hard to erase everything digital that has gone before.

I don't know if everything published to the web deserves to be saved, but much of it does, and it's a shame that we don't have some industry standard way to preserve old websites. Jeff notes that the Wayback Machine and Archive.org preserve some content but are subject to the same dilemma as the rest of the web: eventually every tech dies of its native form of link rot.

For longer than I care to admit (11 years!), I've been posting my own thoughts to my own WordPress instance. But one day WordPress or I will depart this node of existence. I'm considering migrating to a hosted solution and something like Jekyll. That may well postpone the problem but not solve it. I could archive my words on a CD of some sort. But will my descendants be able to parse WordPress or Jekyll or any contemporary file format?

While I like the idea of printing PDFs to stone tablets from a perversity standpoint, what is really needed is a good articulation of the problem and a crowdsourced, open source solution.

Jeff's first suggestion is pretty good: "return to vanilla HTML/CSS." But which version of HTML/CSS is vanilla? The original version? The current version? Tomorrow's version? That is the problem with living tech! It keeps evolving!

I would like to suggest XML 1.1. It's not perfect but it's stable (i.e. pretty dead, unlikely to change), most web documents can be translated into it, and most importantly we have it already.

I know that XML is complex and wordy. I would not recommend XML for your web app's config file format or your build system's makefile. But as an archiving format I think XML would be pretty good.

If all our dev tools, from IDEs to blog editors, dumped an archive version of our output as XML, future archaeologists could easily figure out how to resurrect our digital expressions.
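
To make that concrete, here is a minimal sketch, in Swift, of the kind of dump I have in mind. The ArchiveRecord type and the <archiveRecord> element names are hypothetical, invented for this post; no such standard exists yet, which is rather the point.

```swift
import Foundation

// Hypothetical archive record; the fields and element names are invented
// for illustration, not part of any real standard.
struct ArchiveRecord {
    let title: String
    let author: String
    let published: Date
    let body: String
}

// Render a record as a plain, self-describing XML document.
func archiveXML(for record: ArchiveRecord) -> String {
    // Escape &, <, and > in text content.
    func escaped(_ text: String) -> String {
        return text.replacingOccurrences(of: "&", with: "&amp;")
                   .replacingOccurrences(of: "<", with: "&lt;")
                   .replacingOccurrences(of: ">", with: "&gt;")
    }
    let stamp = ISO8601DateFormatter().string(from: record.published)
    return """
    <?xml version="1.1" encoding="UTF-8"?>
    <archiveRecord>
      <title>\(escaped(record.title))</title>
      <author>\(escaped(record.author))</author>
      <published>\(stamp)</published>
      <body>\(escaped(record.body))</body>
    </archiveRecord>
    """
}
```

A blog editor, an IDE, or a build script could dump one of these next to every published artifact.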

As an added bonus, an archive standard based on XML would help services like Wayback Machine and archive.org do their jobs more easily.

Even better, it would be cool if we all chipped in to create a global XML digital archive. An Esperanto for the divergent digital world! We could keep diverging our tech with a clear conscience and this archive would be the place for web browsers and search engines to hunt for the ghosts of dead links.

Now there are all sorts of problems with this idea. Problems of veracity and fidelity. Problems of spam and abuse. We would have to make the archive uninteresting to opportunists and accept some limitations. A good way to solve these types of problems is to limit the archive to text only, written in some dead language, like Latin, where it would be too much effort to abuse (or the abuse would rise to the level of fine art).

What about visuals and audio? Well, they could be described, just like we (are supposed to) do for accessibility. The descriptions could be generated by machine learning (or people, I'm not prejudiced against humans). It just has to be done on the fly without human initiation or intervention.

Perfect! Now, every time I release an app, blog post, or video clip, an annotated text description written in Latin and structured in XML is automagically archived in the permanent collection of human output.


Mac Terminal App and Special Key Mapping

Mapping key presses in Apple’s macOS Terminal app

For fun I like to write command line applications in C using VIM. It’s like rolling back the calendar to a golden age before mice and OOP ruined everything. The discipline of writing and debugging a C99 program without a modern IDE’s firehose of autocompletion suggestions is like zen meditation for me. I have to be totally focused, totally present to get anything to compile!

Apple's Terminal app is fine. There are other options, many of them awesome, but as part of this painstakingly minimal approach I just want to stick with vanilla. Even my .vimrc file is vanilla.

So far I've only run into one super annoying problem with Terminal and processing key presses with POSIX's ssize_t read(int fildes, void *buf, size_t nbytes)!

Apple's Terminal doesn't send me some of the special keys by default. Specifically <PAGE UP> and <PAGE DOWN>. And I am betting that others, like <HOME> and <END>, may have been overridden as well.

I need <PAGE UP> and <PAGE DOWN> to send read() the escape sequences "<esc>[5~" and "<esc>[6~" respectively so I can pretend it's 1983! (The original Macintosh went on sale to the public in 1984 and after that it's been all mice and OOP.)
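
If you want to see exactly which bytes Terminal delivers for a key press, here is a small sketch that puts the terminal into raw mode and dumps whatever read() receives. I've sketched it in Swift, calling the same POSIX APIs through Darwin; the C99 version is the identical tcgetattr/cfmakeraw/tcsetattr/read sequence from <termios.h> and <unistd.h>.

```swift
import Darwin

// Put the terminal into raw mode so read() sees every byte of a key press,
// escape sequences included, without waiting for a newline.
var original = termios()
tcgetattr(STDIN_FILENO, &original)
var raw = original
cfmakeraw(&raw)
tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw)

print("Press keys; q to quit.\r")
var buf = [UInt8](repeating: 0, count: 8)
var running = true
while running {
    let count = read(STDIN_FILENO, &buf, buf.count)
    guard count > 0 else { continue }
    let bytes = Array(buf[0..<count])
    if bytes == [0x1b, 0x5b, 0x35, 0x7e] {        // <esc> [ 5 ~
        print("Got PAGE UP: <esc>[5~\r")
    } else if bytes == [0x1b, 0x5b, 0x36, 0x7e] { // <esc> [ 6 ~
        print("Got PAGE DOWN: <esc>[6~\r")
    } else {
        print("Got bytes: \(bytes.map { String($0, radix: 16) }.joined(separator: " "))\r")
    }
    if bytes.first == UInt8(ascii: "q") { running = false }
}

// Put the terminal back the way we found it.
tcsetattr(STDIN_FILENO, TCSAFLUSH, &original)
```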

But there is a cure for Terminal!

Under the Terminal menu choose Preferences and click the Keyboard tab for the profile you are going to use as a pre-GUI app shell. Press the tiny + button to “add key setting”. Select your special key from the key popup and make sure modifier is set to “none” and action is set to “send text”.

If you want to map <PAGE UP> to its historically accurate function, click into the input field and hit the <ESC> key. Terminal will populate the input field with an octal escape code (\033).

So far this has been the hardest part of the exercise and why I wrote this blog post for posterity. If you need to remap key codes you probably know that <ESC> is \033. You might be tempted to read that 0 as the letter o, but then you have bigger problems if you are writing a C99 program like me.

Anyway, the rest of this exercise just involves normal typing of keys that turn into letters in the expected way!

Making <PAGE UP> send <esc>[5~ to STDIN_FILENO

This is all just bad behavior for so many reasons. What makes the terminal beautiful is that it works with ASCII codes that are at once integer values, character values, and key codes. These ASCII codes describe both the data and the rendering of the data on the screen. If Apple's, or anybody's, Terminal app diverts a key code so that read() can't read it, well, it's like a web browser that doesn't conform to HTML standards.

You might be thinking: “Who cares about a terminal app in this age of 5G, Mixed Reality, Machine Learning, the Cloud, and Retina Displays?”

Under all our modern fancy toys are command line apps accessed through terminals. Your web servers, your compilers, and your operating systems are all administered through command lines.

For decades enterprise companies, including Microsoft, have tried to make the command line and ASCII terminals obsolete. They created GUI control panels and web-based admin dashboards. But you know what? They are all failures of various magnitudes–slow, incomplete, and harder to use than typing commands into an interactive ASCII terminal. Especially during a crisis, when the servers are down, and software is crashing, and the OS is hung.

OK, back to work on my C code. I bet I could run over one million instances of this ASCII terminal app on my off-the-shelf Mac Mini!


Introduction to Scrum and Management (Part 1 of 3 or 4)

I just finished reading Scrum: The Art of Doing Twice the Work in Half the Time by Jeff and JJ Sutherland. Jeff Sutherland co-created Scrum in the 90s. JJ Sutherland is the CEO of Scrum Inc. and works closely with his father.

Prior to this, I've read the big thick technical tomes on Scrum, mostly published in the early 00s, and more blog posts than I care to admit. I've also practiced Scrum, at some level and in some form, for the last 15 years. I've adopted and adapted Scrum leading dev teams at startups and large enterprises. But I'm not a Scrum master. I'm not trained or certified in Scrum. As is clear from the Sutherlands' book, I'm the person in the role that Scrum wants to replace: the engineering manager.

Even though the father-son team that wrote this less technical, more introspective Scrum book and runs Scrum Inc. has little use for managers like me, I like Scrum. I've seen amazing results from each Scrum practice that I've supported. I was part of the management team at Spotify when we developed the famous Tribes, Squads, Chapters, and Guilds strategy of scaling an engineering culture. From my perspective, when Scrum works, it works well in every dimension. Developers and stakeholders are happy, work is visible and predictable, and products better fit their purpose.

Curiously Scrum doesn’t like me and my kind—as a manager. And Scrum’s dislike is not unfounded: Most of the resistance to Scrum comes from management. As the Sutherlands note, even after a wildly successful Scrum implementation, it’s usually the managers who “pull back” from Scrum and return an org to “command and control”. I have personally not tried to claw back Scrum but I understand and sympathize with the impulse. 

In this series of blog posts I’m going to explore the relationship of managers and management to scrum masters and Scrum. I want to explore why Scrum left the management role out of the equation and why an elegant and powerful algorithm and data structure like Scrum is so hard to implement without destroying the aspects that make it work in the first place. Finally, I will give some tips to improve the Scrum process so that managers are not the enemy but rather the friend of Scrum.

Before we go I just want to point out that while I've read and watched plenty of blog posts, tweets, and YouTube videos declaring that Agile is Dead and that Scrum is Not an Agile Framework, neither of these sentiments is true!

Agile and Scrum have problems, mostly because both were conceived with particular aspects of work culture ignored: managers, governance, telecommuting, and large teams. Agile and Scrum were also cooked up before today's highly mobile, remote-mostly, co-working culture became popular/possible. That Agile and Scrum have survived these transformations mostly intact points to the strength of these methods of human collaboration.

Agile is not dead and Scrum is a flavor of Agile. Let’s help them live up to their ideals!

Click here for Part Two


Big O Primer

This is what happens when you don’t know Big O

Introduction

Big O is all about saving time and saving space, two resources that computer algorithms consume to do repetitive jobs (like sorting strings, calculating sums, or finding primes in haystacks). Big O analysis is required when data points grow very numerous. If you need to sort thousands, millions, billions, or trillions of data points then Big O will save you time and/or space, help you pick the right algorithm, guide you on making the right trade-offs, and ultimately enable you to be more responsive to users. Imagine if Google Search told users to grab a cup of coffee while it searched for the best coffee shop… madness would break out!

You could argue that Big O analysis is not required when filling a few dozen rows with fetched data in a web view. But it's a good idea to understand why a naive user experience can create a backend scaling nightmare. Product Managers, Designers, and Frontend Devs will want to understand and be able to discuss why fetching a fresh list of every customer from the database for every page view is a bad idea. Even with caching and CDNs, Big O leads to better system design choices.

Properties

There are three points of view on algorithm complexity analysis:

  • Big O: upper bounds, worst case estimate.
  • Big Ω: lower bounds, best case estimate.
  • Big Θ: lower and upper bounds, tightest estimate.

The distinctions are literally academic; generally when we say “let's do a Big O analysis of algorithm complexity” we mean “let's find the tightest, most likely estimate.”

There are two types of algorithm complexity analysis:

  • Time complexity: an estimation of how much time source code will take to do its job.
  • Space complexity: an estimation of how much space source code will take to do its job.

Most of the time we worry about time complexity (performance), and we will happily trade memory (space) to save time. Generally, there is an inverse relationship between speed of execution and the amount of working space available: the faster a piece of memory can be accessed, the less of it your source code has to work with.

CPUs have memory registers that they can access quickly but only for small amounts of data (bytes and words). CPUs also have access to increasingly slower but larger caches for storing data (L1, L2, L3). You, as a coder, usually don't worry about CPUs, registers, and level 1, 2, or 3 caches. Your programming language environment (virtual machine, compiler) will optimize all this very low-level time and space management on your behalf.

Modern software programs, which I will call source code, live in a virtual world where code magically runs and memory is dutifully managed. As a coder you still occasionally have to step in and personally manage memory, understand what data is allocated to the stack versus the heap, and make sure the objects your source code has created have been destroyed when no longer needed.

Modern software applications, which I will also call source code, live in a virtual world where apps magically run and storage is autoscaled. As a system designer you’ll have to choose your virtual machines and storage APIs carefully—or not if you let managed app services and containerization manage all your code and data as a service.

Usage

Big O will help you make wise choices in program design and system design. Big O, like other dialects of mathematics, is abstract and applies to a wide range of software phenomena. Using Big O to optimize your source code means faster web pages, apps, and games that are easier to maintain, more responsive to users, and less costly to operate.

Big O reminds me of Calculus and Probability! It's all about estimating the rate of change in how your source code processes data over time. Will processing slow down as the data grows? The answer is almost always yes! Big O helps you identify bottlenecks in your source code, wasted work, and needless duplication. Big O does this through a formal system of estimation, showing you what your source code will do with a given algorithm and a given set of data points.

There are even more bottlenecks that will slow down your app even if your algorithms are optimized using Big O. The speed of your CPU, the number of cores it contains, the number of servers, the ability to load balance, the number of database connections, and the bandwidth and latency of your network are the usual suspects when it comes to poor performance in system design. Big O can help with these bottlenecks as well, so remember to include these conditions in your circle of concern!

Big O estimates are used to:

  • Estimate how much CPU or memory source code requires, over time, to do its job given the amount of data anticipated.
  • Evaluate algorithms for bottlenecks.

Big O helps engineers find ways to improve scalability and performance of source code:

  • Cache results so work is not repeated, trading off one resource (memory) for another, more limited resource (compute); see the memoization sketch after this list.
  • Use the right algorithm and/or data structure for the job! (Hash Table, Binary Tree, almost never Bubble Sort).
  • Divide the work across many servers (or cores) and filter the results into a workable data set (Map Reduce).
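
As a quick illustration of the caching bullet above, here is a minimal memoization sketch in Swift; expensiveScore and its fake million-iteration loop are hypothetical stand-ins for any pure, repeatable calculation.

```swift
// Trade memory (the cache dictionary) for compute (not redoing the same work).
var scoreCache: [Int: Int] = [:]

func expensiveScore(for id: Int) -> Int {
    if let cached = scoreCache[id] {
        return cached                       // O(1) average lookup, no recompute
    }
    // Stand-in for an expensive O(n) calculation.
    let score = (0..<1_000_000).reduce(0) { ($0 &+ $1 &+ id) % 9_973 }
    scoreCache[id] = score                  // spend a little memory once...
    return score                            // ...save the compute on every later call
}
```

The first call for a given id pays the full price; every call after that is a dictionary lookup.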

Notation

Big O notation is expressed using a kind of mathematical shorthand that focuses on what is important in the execution of repeated operations in a loop. The number of data points should be really big, thousands, millions, billions, or trillions, for Big O to help.

Big O looks like this: O(n log n)

  • O( … ) tells us there is going to be a complexity estimate between the parentheses.
  • n log n is a particular formula that expresses the complexity estimate. It’s a very concise and simple set of terms that explain how much the algorithm, as written in source code, will most likely cost over time.
  • n represents the number of data points. You should represent unrelated, but significant, data points with their own letters. O(n log n + x^2) is an algorithm that works on two independent sets of data (and it’s probably a very slow algorithm).

Big O notation ignores:

  • Constants: values that don't change over time or in space (memory).
  • Small terms: values that don't contribute much to the accelerating amount of time or space required for processing. For example, a loop that does three comparisons per element plus five setup steps costs 3n + 5 operations; Big O drops the 3 and the 5 and calls it O(n).

Big O uses logarithms to express change over time:

  • A logarithm is a number that answers the question: how many times can a value be divided by a fixed base (for Big O, usually 2) before it dwindles to 1?
  • An easy way to think of rate-of-change is to imagine a grid with an x-axis and a y-axis with the origin (x = 0, y = 0) in the lower left. If you plot a line that moves up and to the right, the rate of change is the relation between the x-value and the y-value as the line moves across the grid.
  • A logarithm expresses the ratio between x and y as a single number so you can apply that ratio to each operation that processes points of data.
  • The amount of work left at each step depends on what was left after the previous step.
  • The Big O term log n means that for each step in the operation the remaining work shrinks logarithmically: doubling n adds only one more step, and log₂ of one million is only about 20.
  • log n is a lot easier to write than n – (n * 1/2) for each step!
  • Graphing logarithmic series makes the efficiency of an algorithm obvious. The steeper the line the slower the performance (or the more space the process will require) over time.

Values

Common Big O values (from inefficient to efficient):

  • O(n!): Factorial time!
    Near vertical acceleration over a short period of time. This is code that slows down (or eats up a lot of space) almost immediately and continues to get slower rapidly. Our CPUs feel like they are stuck. Our memory chips feel bloated!
  • O(x^n): Exponential time!
    Similar to O(n!) only you have a bit more time or space before your source code hits the wall. (A tiny bit.)
  • O(n^x): Algebraic (polynomial) time!
    Creates a more gradual increase in acceleration of complexity… more gradual than O(x^n) but still very slow. Our CPUs are smoking and our memory chips are packed to the gills.
  • O(n^2): Quadratic time!
    Yields gradual increase in complexity by the square of n. We’re getting better but our CPUs are out of breath and our memory chips need to go on a diet.
  • O(n log n): Quasilinear time!
    Our first logarithm appears. The increase in complexity is a little more gradual but can still be costly in the long run. The n means one operation for every data point. The log n means diminishing increments of additional work for each data point over time. n log n means the n is multiplied by log n for each step.
  • O(n): Linear time!
    Not too shabby. For every point of data we have one operation, which results in a very gradual increase in complexity, like a train starting out slow and building up steam.
  • O(n + x): Multiple Unrelated Terms
    Sometimes terms don’t have a relationship and you can’t reduce them. O(n + x) means O(n) + O(x). There will be many permutations of non-reducible terms in the real world: O(n + x!), O(n + x log x), etc….
  • O(log n): Logarithmic time!
    Very nice. Like O(n log n) but without the n term, so it's a very slow buildup of complexity. The log n means the remaining work gets 50% smaller with each step.
  • O(1): Constant time!
    This is the best kind of time. No matter what the value of n is, the time or space it takes to perform an operation remains the same. Very predictable! (A small sketch contrasting O(n) and O(1) lookups follows this list.)
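
Here is the promised sketch contrasting the two ends of that list in Swift; the Customer type and the generated data are invented for illustration.

```swift
struct Customer {
    let id: Int
    let name: String
}

// A big, unindexed pile of data.
let customers = (1...1_000_000).map { Customer(id: $0, name: "Customer \($0)") }

// O(n): in the worst case we inspect every element before finding (or missing) the id.
func findLinear(_ id: Int) -> Customer? {
    return customers.first { $0.id == id }
}

// Build a hash-table index once (O(n) time and memory up front)...
let customersByID = Dictionary(uniqueKeysWithValues: customers.map { ($0.id, $0) })

// ...then every lookup is O(1) on average, no matter how many customers there are.
func findHashed(_ id: Int) -> Customer? {
    return customersByID[id]
}
```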

Analysis (discovering terms, reducing terms, discarding terms):

  • Look for a loop! “For each n, do something” creates work or uses space for each data point. This potentially creates O(n). (Sketches of all four loop shapes follow this list.)
  • Look for adjacent loops! “For each n, do something; For each x, do something” potentially creates O(n + x).
  • Look for nested loops! “For each n, do something for each x”. This creates layers of work for each data point. This potentially creates O(n^2).
  • Look for shifted loops! “Do something, then cut the remaining input in half, and repeat.” This creates layers of work that get 50% smaller over time as the input is divided in half with each step. This potentially creates O(log n).
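
Here is what those four loop shapes look like in Swift; each function is a toy stand-in, and the comments note the Big O term each shape contributes.

```swift
// A single loop: one unit of work per data point -> O(n).
func sum(_ values: [Int]) -> Int {
    var total = 0
    for v in values { total += v }
    return total
}

// Adjacent loops over unrelated inputs -> O(n + x).
func combinedCount(_ names: [String], _ scores: [Int]) -> Int {
    var count = 0
    for _ in names { count += 1 }   // O(n)
    for _ in scores { count += 1 }  // O(x)
    return count
}

// Nested loops: for every data point, touch every other data point -> O(n^2).
func hasDuplicate(_ values: [Int]) -> Bool {
    for i in 0..<values.count {
        for j in (i + 1)..<values.count where values[i] == values[j] {
            return true
        }
    }
    return false
}

// A "shifted" loop: the remaining input is halved on every step -> O(log n).
// (Classic binary search over a sorted array.)
func binarySearch(_ sorted: [Int], for target: Int) -> Int? {
    var low = 0, high = sorted.count - 1
    while low <= high {
        let mid = (low + high) / 2
        if sorted[mid] == target { return mid }
        if sorted[mid] < target { low = mid + 1 } else { high = mid - 1 }
    }
    return nil
}
```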

Yet Another Book Binder Update

Hey! Who remembers that comic book manager app I was writing a few months ago?

Not me! Actually I didn’t forget about Book Binder–I just smacked into my own limitations. I had to take a break and do a bunch of reading, learning, and experimenting.

And now I'm back. Look at what I did…

My problem with Book Binder was creating a well-designed navigation view hierarchy and wiring it all together. And so I dug in and figured it out (with a lot of help from Ray Wenderlich and Stack Overflow. Thanks Ray, Joel, and Jeff!)

You’ll find the new codebase here: Comic Keeper

But let's talk about view controllers, show segues, and unwind segues for a few minutes. In the image above I have a tab bar controller as my root with three tabs. The 2nd and 3rd tabs are trivial. It's the first tab that is entertaining! I have a deep navigation view hierarchy with 8 view controllers linked by 15 show segues, 6 unwind segues, and 4 relationship segues.

What I like about storyboards and segues is Xcode's visualization. It's a nice document of how an iOS app works from main screen to deeply nested supporting screens. Once you learn the knack of it, control-dragging between view controllers to create segues is easy. Unfortunately creating a segue is also the smallest part of the effort in navigating from one iOS view to another.

Given the number of screens and bi-directional connections I have in this foolish little app, navigation management consumes too much of my coding. And these connections are fragile in spite of all the effort UIKit puts into keeping connections abstract, loose, and reusable.

Let's take a quick look at what I have to do each time I want to connect a field from the EditComicBookViewController to one of the item picker view controllers:

  • Update the EditComicBookViewController prepare(for:sender:) method with a case for the new show segue. I need to know the name of the segue, the type of the destination view controller, and the source of the data I want to transfer into the destination. I have a giant switch statement to manage the transactions for each show segue. I did create a protocol, StandardPicker, to reduce the amount of boilerplate code generated by each show segue. (A simplified sketch of this switch follows the list.)
  • Update EditComicBookViewController unwind segue for each particular type of item picker I’m using. I have four reusable item pickers (edit, list, dial, and date) and four unwind segues (addItemDidEditItem, listPickerDidPickItem, dialPickerDidPickItem, datePickerDidPickDate). A function corresponding to each unwind segue is better than having one method for all show segues. But I still have conditionals that choose the EditComicBookViewController field to update based on the title I gave to the picker. This view controller/segue pattern is not really set up for reuse.
  • I have to create a show segue from the source view controller to the destination and an unwind segue from the destination back to the source. This is all assuming these views are embedded in the same UINavigationController. Each segue needs its own unique ID and it's pretty easy to confuse the spelling of that ID in the supporting code.
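
Here is a simplified sketch of the shape of that bookkeeping. The segue identifiers, picker classes, and properties below are hypothetical stand-ins for the real ones in Comic Keeper, but the switch-on-identifier and the title-based unwind conditionals are the pattern I'm describing.

```swift
import UIKit

// A simplified sketch of the segue bookkeeping described above.
// Segue identifiers, field names, and picker classes are hypothetical.
class EditComicBookViewController: UIViewController {

    var publisherName = "Marvel"
    var issueDate = Date()

    // One case per show segue: the identifier, the destination type,
    // and the field that feeds it all have to line up.
    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        guard let identifier = segue.identifier else { return }
        switch identifier {
        case "PickPublisher":
            let picker = segue.destination as! ListPickerViewController
            picker.title = "Publisher"
            picker.selectedItem = publisherName
        case "PickIssueDate":
            let picker = segue.destination as! DatePickerViewController
            picker.title = "Issue Date"
            picker.selectedDate = issueDate
        default:
            break
        }
    }

    // One unwind action per kind of picker; the picker's title decides
    // which field to update (the fragile part).
    @IBAction func listPickerDidPickItem(_ segue: UIStoryboardSegue) {
        guard let picker = segue.source as? ListPickerViewController else { return }
        if picker.title == "Publisher" { publisherName = picker.selectedItem }
    }

    @IBAction func datePickerDidPickDate(_ segue: UIStoryboardSegue) {
        guard let picker = segue.source as? DatePickerViewController else { return }
        if picker.title == "Issue Date" { issueDate = picker.selectedDate }
    }
}

// Hypothetical reusable pickers, reduced to the properties used above.
class ListPickerViewController: UIViewController { var selectedItem = "" }
class DatePickerViewController: UIViewController { var selectedDate = Date() }
```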

How could navigation be better supported in Xcode and UIKit?

First, I'd like to bundle together the show and unwind segues with the IDs and the data in a single object. I'm sure this exists already, as it's pretty obvious, but Apple isn't providing an integration for storyboards. Ideally, I would control-drag to connect two view controllers and Xcode would pop up a new file dialog box to create a subclass of UIStoryboardSegue and populate it with my data. The IDs should be auto-generated and auto-managed by Xcode. That way I can't misspell them.

Second, I’d like the buttons in the navigation bar to each trigger separate but standard events:

  • goingForward(source:destination)
  • goingBackward(source:destination)
  • going(source:destination)

Right now you have to build your own mechanism to track the direction of navigation in viewWillDisappear(). In the navigation hierarchy of every app there is semantic meaning to moving forward and backward through the hierarchy, similar to but potentially different from the show/unwind segue semantics. Tapping the back button might mean “oops, get me out of here!” while tapping a Done button might mean “I've made my changes, commit them and get me out of here!”
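
For reference, here is a minimal sketch of the kind of hand-rolled mechanism I mean, using UIKit's isMovingToParent and isMovingFromParent properties. They can tell you which way you're headed, but they can't tell a Done from an oops.

```swift
import UIKit

// A minimal sketch of hand-rolled direction tracking.
// UIKit can tell us whether we're being pushed or popped,
// but not *why* (Done vs. Back vs. Cancel).
class TrackedViewController: UIViewController {

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        if isMovingToParent {
            // goingForward: we were just pushed onto the navigation stack.
        }
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        if isMovingFromParent {
            // goingBackward: we're being popped (back button or unwind),
            // but there's no built-in way to know if the user meant
            // "commit my changes" or "oops, forget it".
        }
    }
}
```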

Third, I’d like Xcode’s assistant editor view to show the code for a segue when I click on a segue. The navigation outline and storyboard visualization is a great way to hop from view controller to view controller. While Xcode knows about the objects that populate controllers, it doesn’t navigate you to any of them. Clicking on a connection in the connections inspector should load the code for that connection in the assistant editor. Xcode does highlight the object in the storyboard but I want more.

As my apps get more ambitious and sophisticated I'm probably going to abandon storyboards like the Jedi Master iOS developers I know. That's sad because I feel I've finally figured out how to wire up a storyboard-based view controller hierarchy and I'm already leveling out of that knowledge as the Jedi Masters smile smugly.


I love RPA! Fun and Easy!

A robot that a human wrote, wrote this post!


FizzBuzz Still Hard… and Still Useless

I hear that many of the applicants we interview still have a hard time with FizzBuzz and other simple examples of looping, testing, and printing integers. This was true in 2007 when Coding Horror wrote a famous blog post on FizzBuzz, true in 2010 when DanSignerman famously asked StackOverflow about it, and true in 2017 when Hannah Ray answered the same question again on Quora. And today in 2019 I watched a video by Lets Build That App that explained how to solve FizzBuzz using fancy features of Swift 5.

So, yes, by external and internal validation FizzBuzz is still hard for software developers in an interview context to perform on demand.

In a quiet room with a clear set of requirements FizzBuzz is no problem. But under the pressure of an interview where the spotlight is on every misstep on the whiteboard FizzBuzz becomes something like the Great Filter of Software Interview Questions—Maybe we have not found intelligent life out in the stars because alien civilizations can’t pass the job interview!

Filtering numbers (so you can print “fizz” on multiples of 3 and “buzz” on multiples of 5) is great for making sure you understand how language features work and that division by zero is bad.

Filtering numbers was probably a meaningful interview question in the 1980s and 1990s, when I learned to code, because it was a major pain with Assembly, C, and C++ (but not LISP). Today’s languages like Java, C#, Swift, Kotlin, and Python take all the challenge out of FizzBuzz.

The new Swift 5 language feature, isMultiple(of:), doesn't even crash when you give it a zero! Where is the fun in that? And who knows if this advanced Swift switch statement with assignments as case expressions is executing in constant time or blowing up the stack?
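
For the record, here is the whole exercise in Swift 5 with isMultiple(of:); a plain loop and a switch over a tuple, nothing clever.

```swift
// FizzBuzz with Swift 5's isMultiple(of:). One pass, constant work per number.
for n in 1...100 {
    switch (n.isMultiple(of: 3), n.isMultiple(of: 5)) {
    case (true, true):  print("FizzBuzz")
    case (true, false): print("Fizz")
    case (false, true): print("Buzz")
    default:            print(n)
    }
}
```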

(Somebody knows all these things but software development is so routine now that most VMs and IDEs have airbags and baby bumpers around your code.)

And yet many software developer interviews crash and burn on the FizzBuzz question! I think it's because as team leads and employers we are rooted in the past. More and more code is being written on top of well-engineered frameworks and yet we act as if knowing everything from first principles at the start of a developer's career is critical for success. It's not.

A good developer matures over time and has to start somewhere. There are better tests than FizzBuzz for filtering integers and engineers. It's best to look for potential, the ability to learn, the ability to work well with others, and a passion for coding when interviewing candidates!


iPhone to the Max

I'm on that Apple program where you pay for an iPhone over time and you get the opportunity to upgrade immediately when a new model comes along, as it does every year.

While this is a very good deal for Apple, almost like a subscription service, it’s a good deal for me too. I hate the feeling of FOMO that comes along when a new computer, phone, or device is released. But with iPhones (and Android phones) it’s more than just a feeling. Missing out on the latest phone means missing out on important new features, security protections, and performance improvements.

FOMO used to be a big problem for personal computers as there were big performance jumps between PC models back when Moore’s Law was still in full effect. These days you can still get great results from a 5 year old PC or MacBook. Maybe you can’t play VR games but you email and browse the web like a champ.

Smartphones are in a different place on the product evolution curve than PCs. They still have a long way to go before they settle down. Innovation in smartphones is driven by advances in displays, cameras, custom chips, and machine learning. Even incremental improvements in these technologies mean far better user experiences, better security, and even more epic cat photos.

I'm super happy with the jump between the iPhone 8+ and the XS Max. It responds faster, is easier on the eyes with its 6.5″ screen, and yet is basically the same size as the 8+. The name is kind of silly. But I don't care what Apple names their phones.

At some point phone hardware will cease to evolve and some new device will become our “primary interface” to the Internet. My guess is that it will be a watch of some sort with accessory glasses or screens. But these things are hard to predict.

I'm on the train as I type this post into my phone and one guy is still reading a paper newspaper. I guess that iPhone XS Max just didn't excite him.


Is It 1998 Again?

Set the Dial to 1998

Let’s power up the time machine and take a quick trip back to the wide world of tech around 1998. Microsoft was the Khaleesi of software and controlled a vast empire through Windows, Office, and Internet Explorer. Microsoft marched its conquering army of apps over the desktop and through the Internet with innovations like XMLHttpRequest, USB peripherals, and intelligent assistants.

All three of these innovations would go on to fashion the world we live in today with websites that look and feel like apps, devices that plug and play with our computers and phones, and helpful voices that do our bidding.

But back in 1998 these groundbreaking technologies were siloed, incompatible, and unintuitive!

  • You’d find that fancy web apps were usually tied to a specific browser (Walled Garden).
  • You’d buy a USB mouse and often find that it couldn’t wiggle your pointer around the screen (Standard Conformance).
  • You’d grow frustrated with Clippy (aka Clippit the Office assistant) because the only job it could reliably do was “Don’t show me this tip again.” (Poor UX).

And this is exactly where we are in 2018! Still siloed, incompatible, and unintuitive!

  • Do you want to run that cool app? You have to make sure you subscribe to the walled garden where it lives!
  • Do you want your toaster to talk to your doorbell? Hopefully they both conform to the same standard in the same way!
  • Do you want a super intelligent assistant who anticipates your every need, and understands the spirit, if not the meaning, of your commands? Well, you have to know exactly what to say and how to say it.

Digital Mass Extinction

The difference between 1998 and 2018 is that the stakes are higher and the world is more deeply connected. Products and platforms like Apple's iOS, Google's Cloud IoT Core, and Amazon's Alexa had their equivalents in 1998; they just couldn't do as much and they cost a lot more to build and operate.

In between 1998 and 2018 we had a digital mass extinction event: the dot com bubble burst. I was personally involved with two companies that didn't survive the bubble, FlashPoint (digital camera operating system) and BitLocker (online database application kit). Don't even try to find these startups in Wikipedia. But there are a few remains of each on the Internet: here and here.

Today, FlashPoint would be reincarnated as a camera-based IoT platform and BitLocker would sit somewhere between iCloud and MongoDB. Yet the core problems of silos, incompatibility, and lack of intuitive control remain. If our modern day apps, IoT, and assistants don’t tackle these problems head-on there will be another mass extinction event. This time in the cloud.

How To Avoid Busting Bubbles

Let's take a look at the post-dot com bubble burst world for some clues on how to prevent the next extinction. After the startups of the late 1990s died off in the catastrophe of the early 2000s, the designers, developers, and entrepreneurs moved away from silos, proprietary standards, and complicated user experiences. The modern open standards, open source, and simplicity movements picked up steam. It became mission critical that your cool app could run inside any web browser, that it was built on battle-tested open source, and that no user manuals were required.

Users found they could buy any computer, use any web browser, and transfer skills between hardware, software, and services. This dedication to openness and interoperability gave great results for the top and bottom lines. Tech companies competed on a level playing field and focused on who could be the most reliable and provide the highest performance and quality. Google and Netflix were born. Apple and Amazon blossomed.

Contrast that with the pre-bubble burst world of 1998 (and 2018) where tech companies competed on being first to market and building high walls around their proprietary gardens.

If we want to avoid the next tech bubble burst (around 2020?) we need Apple, Google, Amazon, and even Netflix to embrace openness and compatibility.

  • Users should be able to talk to Google, Siri, and Alexa in exactly the same way and get similar results (UX transferability).
  • Users should be able to use iOS apps on their Android phones (App compatibility).
  • Users should be able to share connected and virtual worlds such that smart speakers, smart thermostats, and augmented reality work together without tears (Universal IoT bus).

Google and Apple and Standards

At Google I/O last week the Alphabet subsidiary announced a few examples of bubble-avoidance features…

  • Flutter and Material Design improvements that work as well on Android as they do on iOS.
  • AR “cloud anchors” that create shared virtual spaces between Android and iOS devices.

But sadly Google mostly announced improvements to its silos and proprietary IP. I'm sure at WWDC next month Apple will announce the same sorts of incremental upgrades that only iPhone and Mac users will benefit from.

Common wisdom is that Apple's success is built on its proprietary technology, from custom chips to custom software. This is simply not true. When I was at Apple in the 1990s, success (and failure) was built on a foundation of standards like CD-ROM, USB, and Unicode. Where Apple failed, in the long run, was where it went its own incompatible, non-interoperable way.

In 1998 the Mac OS was a walled-garden failure. In 2018 macOS is a success built on open source BSD Unix. More than Windows, more than ChromeOS, and almost as much as Linux, macOS is an open, extensible, plug-and-play operating system compatible with most software.

The Ferris Wheel of Replatforming

Ask any tech pundit if the current tech bubble is going to burst and they will reply in all caps: “YES! ANY MOMENT NOW!!! IT’S GONNA BLOW!!!”

Maybe… or rather eventually. Every up has its down. It's one of the laws of thermodynamics. I remember reading a magazine article in 2000 which argued that the dot com boom would never bust, that we had, through web technology, reached escape velocity. By mid-2000 we were wondering if the tech good times would ever return.

Of course the good times returned. I'm not worried about the FAANG companies surviving these bubbles. Boom and bust is how capitalism works. Creative destruction has been around as long as Shiva, Noah, and Adam Smith. However, it can be tiresome.

I want us to get off the ferris wheel of tech bubbles inflating and deflating. I want, and need, progress. I want my apps to be written once and run everywhere. I want my smart speaker of choice, Siri, to be as smart as Google and have access to all the skills that Alexa enjoys. I want to move my algorithms and data from cloud to cloud the same way I can rent any car and drive it across any road. Mostly, I don’t want to have to go back and “replatform.”

When you take an app, say a banking app or a blog, and rewrite it to work on a new or different platform we call that replatforming. It can be fun if the new platform is modern with cool bells and whistles. But we've been replatforming for decades now. I bet Microsoft Word has been replatformed a dozen times now. Or maybe not. Maybe Microsoft is smart, or experienced, enough to realize that just beyond the next bubble is Google's new mobile operating system Fuchsia and Apple's iOS 12, 13, and 14 ad infinitum…

The secret to avoiding replatforming is to build on top of open standards and open source, to use adaptors and interpreters to integrate with the next Big Future Gamble (BFG). macOS is built this way. It can run on RISC or CISC processors and store data on spinning disks or solid state drives. It doesn't care and it doesn't know where the data is going or what type of processor is doing the processing. macOS is continuously adapted but is seldom replatformed.

To make progress, to truly move from stone to iron to whatever comes after silicon, we need to stop reinventing the same wheels and instead use what we have built as building blocks upon which to solve new, richer problems.