Categories
Uncategorized

Hidden Bluetooth Menus in macOS Big Sur

Last night my Magic Keyboard developed a bad case of typing lag. As I was coding in Xcode I observed a huge delay (in seconds!) between pressing a key and its corresponding character appearing on the screen.

IT Skills Activate

To diagnose and narrow down the problem (Xcode keyboard processing? A rogue process running on my Mac? A Bluetooth bug in Big Sur? The keyboard itself?) I did the usual: Googled and tested typing latency with various apps and keyboards. I isolated the problem to the Magic Keyboard and reproduced it consistently. With a wired connection to the Mac there was no lag, but over a Bluetooth connection the keyboard was stuttering!

This blog post by Scott Wezey (some guy on the Internet generous enough to share his experiences) seemed to help: https://scottswezey.com/2020/04/12/mac-bluetooth-lag/

Well, in full disclosure, the problem went away all by itself before I got to try any of Scott’s suggestions. I hate when that happens! There is a kind of uncertainty principle of hardware debugging where just closely observing the problem makes it dematerialize. I’ve found that patiently disconnecting, reconnecting, and then, if that doesn’t work, rebooting (DR&B) makes 99% of all hardware problems run and hide. I suspect some cache was cleared or some quantum flux was flexed. Whatever it was, the problem is now solved or hibernating, and I’m happily typing this blog post with normal latency.

Option-Click Treasure

But Scott did remind me that macOS is a treasure trove of hidden menus! Holding option while clicking the Bluetooth icon in the Mac menubar yields additional options! These options are generally features for hardware admins and power users. For example, clicking the Bluetooth menu yields a list of connected devices. Command-, option-, and shift-clicking icons (in different key combinations) reveals different sets of features.

Clicking (with the primary mouse button) shows a short list of previously and currently connected devices (with battery level), the ability to toggle Bluetooth on and off, and a link to Bluetooth system preferences.

Option-clicking reveals the list with much more diagnostic info: MAC address, firmware version, role (client/server), and RSSI (signal strength). With this info a good hardware admin can resolve problems of distance and configuration. Well, except that modern Bluetooth devices configure themselves automatically, so really all you can do is DR&B.

Option-shift-clicking reveals even more: three utility commands that do the work of configuration manually: Reset, Factory reset, and Remove all. Reset is basically DR&B. Factory reset returns the device to its just-out-of-the-box state. Remove all disconnects all connected Bluetooth devices. This last option is a great way to sweep away the cruft of poorly connected Bluetooth devices that might be interfering with each other (or spying on you).

DR&B FTW

The moral of this tale is that when you’re experiencing Bluetooth issues, option-shift-click the menubar icon if DR&B doesn’t work. You might find that a keyboard and mouse are in conflict or a ghost connection is haunting your Mac!

Oddly, Bluetooth system preferences doesn’t include the admin tools that option-shift-clicking reveals. Maybe all this is in an Apple support manual. I can’t seem to find it!

I’ve started a GitHub repo to collect these hidden gems, not just for Bluetooth, but for everything that macOS provides. Please contribute with a pull request!

Categories
Nerd Fun Tech Trends

Mac Pro

Search for “Mac Pro” and you’ll get this article, You probably won’t be buying a Mac Pro this year, this video, Do I Regret buying the Mac Pro? 3 Weeks later.., and this Quora question, Is the New Mac Pro worth the price?

The conventional wisdom is that Mac Pro is expensive, for professionals only, over powered, and there are better options from Apple for consumers and business users.

I don’t agree. Don’t get me wrong, if you need a computer for today’s challenges, then these helpful explainers on the Internet have good points.

  • The Mac Pro is pricey if all you’re doing is web browsing, emailing, and game playing.
  • The Mac Pro was definitely designed and built for professional video producers and all the other professionals who need multiple CPU cores and GPUs to get their jobs done.
  • The Mac Pro is hard to push to its limits. Its hardware and software are so well engineered and integrated that most of the time professionals see only a small percentage of their CPUs and GPUs utilized.
  • There are better options for consumers, business people, and even software developers (like me). MacBook Pro, iMac, and even Mac Mini are powerful and well suited to the typical computation required by word processors, spreadsheets, image editors, and developer tools.

But I have a problem with all of the above. When I bought a Mac Pro, I didn’t buy it just for my today problems. I bought it for my tomorrow problems as well.

Because the Mac Pro is a workstation-grade computer that runs cool, it’s going to last a long, long time. Heat and the buildup of dust are the enemies of computer durability. Computation creates a lot of heat, and that heat warps computer components. Heat also attracts particles of dust that stick to those components. I don’t know about you, but my personal computer runs 24/7 (like I do). I don’t ever want to turn it off because I’m always in the middle of two or three mission-critical projects.

Because the Mac Pro is modular and designed by Apple to be easy to upgrade, it can be a computer for many different types of users. I’m not the kind of professional who is going to chew through 28 CPU cores and 1.5 terabytes of data (ordinarily). This is why I bought the entry-level Mac Pro with 8 CPU cores, one GPU, and a quarter of a terabyte of storage. Today, I’m a lightweight. Once in a while I edit a video or render a 3D model. Usually I write words, draw diagrams, present slides, and compile code. Tomorrow is another story. Maybe I’ll get into crypto or machine learning; maybe I’ll get into AR or VR. I don’t like limits. I don’t like to buy computers with built-in limitations.

It is true that I am not pushing the Mac Pro very hard at the moment. But the Mac Pro is much faster than the Mac Mini it replaced. Geekbench says that a far less expensive Mac Mini is faster for single-core work than an entry-level Mac Pro. I’m sure those benchmarks are true. But software doesn’t work with just a single core any more. Almost all modern software uses multiple threads of execution to save time and boost performance. Your web browser does this when loading a page and rendering images or playing video. Your word processor does this. Your developer tools do this. Everything I do with my Mac Pro happens faster than it did with my Mac Mini. I’m getting more done and spending less time waiting for files to load, images to render, and code to compile. Maybe it’s only 10% faster, but over time that time saving adds up.

It is true that I don’t use the Mac Pro for every task. Sometimes I’m on the road (although not recently, because of this virus situation) and a MacBook Pro is the only option. Sometimes an iPhone, Apple Watch, or iPad Pro is the better option. But when the task requires me to sit for hours in the same position in the same room, the Mac Pro is the best option. Now that I have a Mac Pro I realize I was misusing my other computers. iPhones are not great for writing 70-page documents. You can do it, but it’s not great.

Most of my life I felt I had to go with the budget option. But I’ve always found the budget option to be barely worth it over the long run. If I keep this Mac Pro for five to ten years, it will become the budget option. Otherwise, the budget option is to buy a cheap computer every 2-3 years. Over time the costs of those cheap computers add up to serious money.

Yes, it’s a risk to bet that the Mac Pro will last, and still be relevant, for five to ten years. Won’t we have quantum computers with graphene nanobots by then?

Maybe, but I (most likely) will still be using the same von Neumann type of computer in ten years that I was using ten years ago. I think most of us will continue to use personal computers for work and play, just as we will still need to type with our fingers and see images on a screen with our eyes.

Based on my analysis (see below) a Mac Pro gets less expensive over time as its upgrade components fall in price and the cost of a total replacement is avoided.

Mac Pro cost projection over 10 years vs. custom built PC and Dell

In the past I’ve found I’ve needed a new computer every two years. Why? The applications I use get more sophisticated, the components become outdated, and there are security flaws that the OS alone can’t fix. And sometimes the computer just freezes up or fizzles out. With the Mac Pro I’m betting that instead of replacing it every two years I’ll be able to update it as needed, and that Apple’s and the industry’s storage, memory, CPU, and GPU prices will continue to fall (Moore’s Law).

In 1987 I bought a Macintosh II for almost the same price that I paid for the Mac Pro in 2020. Like the Mac Pro, that Mac II was an expandable powerhouse. It helped launch my career in software development. It didn’t last me 10 years (it was not as upgradable and modular as the Mac Pro), but I got a good five years out of it. It was a huge expense for me at the time, but as time wore on it was completely worth it. Those were five years when I had a computer that could do anything I asked of it and take me, computationally speaking, anywhere I needed to go.

Categories
Nerd Fun

RAM Disk

Slow Processing

I’m writing a book. A “user guide” for a side project. This book is ballooning to 50+ pages. You would think that today’s modern word processors could handle 50+ pages with the CPU cores, RAM, and SSD space at a modern desktop computer’s beck and call. That is what I thought. I was mistaken.

I started writing this book with Google Docs. After about 20 pages, responsiveness became less than snappy. After about 30 pages, the text insertion point (you might call it a cursor) became misaligned with the text at the end of the document.

This is not Google’s fault. Google Docs is a tour de force of HTML5 and JavaScript code that plugs into a web browser’s DOM. It works amazingly well for the short documents you would create in a homework or business setting. But my book is a tough cookie for Google Docs. I had subscripts and superscripts, monospaced and variable-spaced fonts. I had figures, tables, page breaks, and keep-with-next styling. In today’s WYSIWYG Unicode glyph word-processing world, it’s tough to calculate line lengths and insertion point positions the deeper into the document one goes.

So naturally I reached for my trusty copy of Microsoft Word. This is MS Word for Mac 16.35. I have been a proud owner of MS Word since the 1990s, when I knew members of the Mac Word engineering team personally.

Word handled the typography of my now 60-page document without any WYSIWYG errors. But it was sweating under the heavy load of scrolling between sections, search and replace, and my crazy non-linear editing style. Word was accurate but not snappy.

I have read that many writers prefer to use ancient DOS or UNIX-based computers to write their novels. Now I know why. I want the whole document loaded into memory at once. I need to fly through my document without speed bumps or pauses as its chunks are loaded and unloaded from disk into RAM. But I also want typography turned on and accurate. I’m not writing a novel with only words painting the pictures in the reader’s mind. I’m writing a technical book about algorithms, and I need to illustrate concepts that quickly become jargon salad without visual representation.

Fooling the Apps

Then a solution out of the DOS and UNIX past hit me! I needed a RAM disk to accelerate Word. A RAM disk is a disk made not of a spinning platter or even solid-state storage but of pure volatile RAM!

There are several types of memory available to your operating system, classified by how fast and reliable they are. Your CPU can access caches for popular instructions. Your apps can access physical and virtual memory for popular chunks of documents. Your operating system can access local and remote storage to load and save files. In modern computer systems, tricks are used to fool apps and the operating system into thinking that one kind of memory or storage is some other kind.

This is what a RAM disk is. It’s a kind of trick where the operating system mounts a volume as a normal hard disk, but that volume is a temporary illusion. When you turn off your computer, a RAM disk disappears like a rainbow when the air dries up.

RAM disks are risky, because your computer could lose power at any moment, but they speed up applications like Word. Word was originally written in the days when memory was limited and typography was simple. Large documents could not fit into the RAM available. Word evolved to page parts of a document that were not being used in and out of memory behind the scenes, to make room for the part of the document being edited. This clever scheme made it possible to work on documents hundreds of pages long while displaying their contents with multiple styles and dynamic features like ligatures and spelling markup.

But why do I need to fool Word into thinking the disk it is running on is one kind of medium when it is another?

App Traditions

It’s been more than a decade since RAM got cheap and Unicode became standard. But most computer operating systems and applications are still written in the old paradigm of scarce memory and plentiful storage.

Most word processing mavens will tell you that hard disks are super fast these days and most computers have more RAM than they really need. And this is true, mostly. But try to get your apps to take advantage of those super fast disks and plentiful RAM! It’s not easy!

As a test I tried to use all 32 GB of RAM in my Mac mini. I loaded every app and game on my drive. I loaded every large document and image. I loaded all the Microsoft, Apple, and Adobe apps. The closest I could get was 22 GB. There was this unapproachable 10 GB of RAM that I could not use. The operating system and all these apps were being good collaborative citizens! They respectfully loaded and unloaded data to ensure that 10 GB was available in case of a memory emergency. I had no way to tell these apps it was OK to be rude and pig out on RAM.

I had to fool them!

App Acceleration

To create a RAM disk in macOS you need to be familiar with UNIX and the Terminal. You don’t need to be an expert, but this is not for the faint of heart. This GitHub Gist explains what you need to do. I created a 10 GB RAM disk from that unapproachable 10 GB squirreled away in my Mac Mini with the following command line:

diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nobrowse -nomount ram://20971520`

10 GB is enough for most apps and their docs but not for big AAA games or Xcode. 10 GB was more than fine for Word and my 60-page document.

10.75 GB RAM disk with MS Word and two docs.

The results have been amazing. Word rides my document like a Tesla Roadster as I jump around editing bits and bytes in my non-linear, unpredictable fashion.

After each editing session I just drag my documents to a safe location on my hard disk. I almost never need to reboot or turn off my Mac Mini. macOS Catalina has been rock solid for me. I’ve not lost any work and the RAM disk just hangs around on my desktop like a regular disk.

When I get around to it, I will write a script to create and load up the RAM disk and save the work with a shortcut. This setup has been so stable that I’m not in any hurry.
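That script might start out as something like this sketch. It only assembles the diskutil/hdiutil command (those tools exist only on macOS), and the default size is my assumption; the point is to make the sector math visible:

```shell
#!/bin/sh
# Sketch of a macOS RAM disk helper.
# hdiutil sizes the ram:// device in 512-byte sectors,
# so one megabyte is 2048 sectors.
SIZE_MB=10240                       # 10 GB by default
SECTORS=$((SIZE_MB * 2048))

# Assemble the creation command with the size computed.
CMD="diskutil erasevolume HFS+ 'RAM Disk' \`hdiutil attach -nobrowse -nomount ram://$SECTORS\`"
echo "$CMD"
```

For a 10 GB disk this reproduces the ram://20971520 figure in the command above.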

Now I want to test a Mac with hundreds of GB of RAM. An iMac can be loaded up with 128 GB! A Mac Pro can handle up to 1.5 TB! A RAM disk might be a much bigger performance improvement than an SSD or a fast CPU with a dozen cores. And GPUs are not much help in processing text or numbers or even slides!

Categories
Uncategorized

Virus and Science

Like many, my life has been disrupted by this virus. Honestly, I don’t even want to acknowledge this virus. The only virtue of the Coronavirus is that it should now be widely apparent that we, humanity, are all in the same boat, and that boat is fragile.

In The War of the Worlds, written in 1897, H. G. Wells wrote about a technologically advanced species invading the Earth and destroying its native inhabitants. No forces the earthlings could muster could stop the aliens and their machines. In the final hour, when all hope for the Earth was lost, the “Martians—dead!—slain by the putrefactive and disease bacteria against which their systems were unprepared; slain as the red weed was being slain; slain, after all man’s devices had failed, by the humblest things that God, in his wisdom, has put upon this earth.”

I just want to note that in the world of today we are the Martians. We are technologically advanced, bent on remaking the world, and yet somehow unprepared for the task.

I believe we are unprepared because our political, business, and cultural systems have not kept up with the advances of technical change. I do not believe we should go back to living like hunter-gatherers or the Amish (even the Amish get vaccinated these days). I do believe we should take a breath and catch up with our creations.

The Coronavirus was not created by technology (in spite of the conspiracy theories). Mother Nature is just doing what she always does, evolving her children and looking for opportunities for genetic code to succeed. This is evolution in action, and we see it in antibiotic-resistant bacteria and the rise of insulin resistance in modern humans. One is caused by how quickly microorganisms evolve and the other by how slowly macro-organisms evolve.

We have the science and technology to handle pandemics as well as antibiotic resistance and all the rest, but we have to listen to scientists and doctors. I know that sometimes science and medicine seem to go against common sense, contradict long and deeply held personal beliefs, and have a habit of changing as new data comes in. This makes science and medicine vulnerable to ridicule, misuse, and misunderstanding.

If we start listening to scientists and doctors, instead of second-guessing and villainizing them, species-level problems like pandemics, antibiotic resistance, and global warming will not go away, but we will be able to flatten their curves. If we don’t stop acting like science is just one of many sources of truth, then, even though we are mighty Martians, we will be felled under the weight of our own ignorance.

In The Age of Louis XIV Will and Ariel Durant wrote about the rise of science from 1648 to 1715, “Slowly the mood of Europe, for better or worse, was changing from supernaturalism to secularism, from the hopes of heaven and fears of hell to plans for the enlargement of knowledge and the improvement of human life.”

Are we stuck in the 17th century or can we move on and accept that we’re living in the 21st?

Categories
Management & Leadership

No Modes

Larry Tesler died this week. He was one of my idols at Apple Computer in the 1990s. A brilliant thought leader and champion of the idea that modes are a bad user experience.

A mode is a context for getting work (or play) done. In the early days of computers, before graphical user interfaces, applications were broken into “operational modes” such as edit, navigate, and WYSIWYG. Key commands would perform different actions in different modes. To be a great computer user, you had to memorize all the modes and all the corresponding key sequences. Modality made software easier to write but made computers harder to learn and use.

Larry Tesler was a visionary who focused on making the software do the hard work and not the user. The Apple Lisa, Macintosh, and Newton were great examples of modeless computing—as was Microsoft Windows.

Some folks, developers like me, will tell you that modal software is better. Once you get over the hurdle of memorizing the modes and commands, your fingers never have to leave the keyboard. And with modal software, as they will enthusiastically explain, you can easily perform power-user operations like repeating commands and global pattern matching. I think this is true for software developers and maybe for lawyers or novelists as well. Modal tools like Emacs and Vim make big file tasks fast and simple.

The alternative to modal software for large document management is something like MS Word. Many users think MS Word is bloated and slow. Given all that MS Word does modelessly, it’s a speedy racer! Most of us don’t need the power of MS Word (or Emacs or Vim) every day.

You can thank Larry Tesler for championing the idea that modes are not required for most users most of the time. Thus you can grab your phone and just start typing out a message and get a call without saving your message. After the call is complete you can go back to your typing. If you want you can multitask typing and talking at the same time (hopefully you are not driving).

Behind the scenes your phone is doing an intricate dance to enable this apparent modelessness. The message app is suspended and the message is saved, just in case the app crashes. The call app comes to the front and takes over the screen. During the call you can return to the message app while the call runs in the background. Other apps suspend to make room for the message app and call app to operate at the same time. Before Larry Tesler it was not uncommon for the user to have to do all this coordination manually.

To enable modeless software, applications have to share resources and the operating system has to help apps know when and what to do. In the old days this was called “event driven multitasking”. Now it’s just called software development.

How did Larry accomplish all this? Well, he wasn’t alone. But he worked hard, advocating for the user at Apple even when modeless software drove up costs. He even had a few minutes to spend with a junior employee like me. He wanted to make sure I understood the value of a great user experience. And it worked! I supported OpenDoc, the ultimate modeless user experience, and I made sure we had a version of ClarisWorks based on it. But alas, the Macintosh (and PC) computers of the mid 1990s just could not handle the complexity of OpenDoc, and it never shipped.

Still, to this day, I am grateful to Larry and the whole Apple Computer experience. It is the ground upon which I stand.

Categories
Uncategorized

XML and Immortal Documents

I just read Jeff Huang’s A Manifesto for Preserving Content on the Web. He makes some good suggestions (seven of them) to help keep web content available as technical progress works hard to erase everything digital that has gone before.

I don’t know if everything published to the web deserves to be saved, but much of it does, and it’s a shame that we don’t have some industry-standard way to preserve old websites. Jeff notes that the Wayback Machine and Archive.org preserve some content but are subject to the same dilemma as the rest of the web: eventually every tech dies of its native form of link rot.

For longer than I care to admit (11 years!), I’ve been posting my own thoughts to my own WordPress instance. But one day WordPress or I will depart this node of existence. I’m considering migrating to a hosted solution and something like Jekyll. That may well postpone the problem but not solve it. I could archive my words on a CD of some sort. But will my descendants be able to parse WordPress or Jekyll or any contemporary file format?

While I like the idea of printing PDFs to stone tablets from a perversity standpoint, what is really needed is a good articulation of the problem and a crowdsourced, open source solution.

Jeff’s first suggestion is pretty good: “return to vanilla HTML/CSS.” But which version of HTML/CSS is vanilla? The original version? The current version? Tomorrow’s version? That is the problem with living tech! It keeps evolving!

I would like to suggest XML 1.1. It’s not perfect, but it’s stable (i.e., pretty dead, unlikely to change), most web documents can be translated into it, and, most importantly, we have it already.

I know that XML is complex and wordy. I would not recommend XML for your web app’s config file format or build system’s make file. But as an archiving format I think XML would be pretty good.

If all our dev tools, from IDEs to blog editors, dumped an archive version of our output as XML, future archaeologists could easily figure out how to resurrect our digital expressions.
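For illustration only, one entry in such an archive might look like the fragment below; the element names are hypothetical inventions of mine, not any existing schema or standard:

```xml
<!-- Hypothetical archive entry; element names invented for
     illustration, not drawn from an existing standard. -->
<archiveEntry>
  <source>https://example.com/posts/immortal-documents</source>
  <archived>2020-02-01</archived>
  <title>XML and Immortal Documents</title>
  <description xml:lang="la">Commentarius de documentis servandis.</description>
</archiveEntry>
```

Because XML parsers are strict and the format is self-describing, an archaeologist with nothing but the spec could still recover the structure.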

As an added bonus, an archive standard based on XML would help services like Wayback Machine and archive.org do their jobs more easily.

Even better, it would be cool if we all chipped in to create a global XML digital archive. An Esperanto for the divergent digital world! We could keep diverging our tech with a clear conscience, and this archive would be the place for web browsers and search engines to hunt for the ghosts of dead links.

Now there are all sorts of problems with this idea. Problems of veracity and fidelity. Problems of spam and abuse. We would have to make the archive uninteresting to opportunists and accept some limitations. A good way to solve these types of problems is to limit the archive to text only, written in some dead language, like Latin, where it would be too much effort to abuse it (or that abuse would rise to the level of fine art).

What about the visual and audio? Well, it could be described, just like we (are supposed to) do for accessibility. The descriptions could be generated by machine learning (or people; I’m not prejudiced against humans). It just has to be done on the fly, without human initiation or intervention.

Perfect! Now, every time I release an app, blog post, or video clip, an annotated text description written in Latin and structured in XML is automagically archived in the permanent collection of human output.

Categories
Uncategorized

Mac Terminal App and Special Key Mapping

Mapping key presses in Apple’s macOS Terminal app

For fun I like to write command line applications in C using VIM. It’s like rolling back the calendar to a golden age before mice and OOP ruined everything. The discipline of writing and debugging a C99 program without a modern IDE’s firehose of autocompletion suggestions is like zen meditation for me. I have to be totally focused, totally present to get anything to compile!

Apple’s Terminal app is fine. There are other options, many of them awesome, but as part of this painstakingly minimal approach I just want to stick with vanilla. Even my .vimrc file is vanilla.

So far I’ve only run into one super annoying problem with Terminal and processing key presses with C99’s ssize_t read(int fildes, void *buf, size_t nbytes)!

Apple’s Terminal doesn’t send me some of the special keys by default. Specifically <PAGE UP> and <PAGE DOWN>. And I am betting that others, like <HOME> and <END>, may have been overridden as well.

I need <PAGE UP> and <PAGE DOWN> to send read() the ASCII codes "<esc>[5~" and "<esc>[6~" respectively, so I can pretend it’s 1983! (The original Macintosh went on sale to the public in 1984, and after that it’s been all mice and OOP.)

But there is a cure for Terminal!

Under the Terminal menu choose Preferences and click the Keyboard tab for the profile you are going to use as a pre-GUI app shell. Press the tiny + button to “add key setting”. Select your special key from the key popup and make sure modifier is set to “none” and action is set to “send text”.

If you want to map <PAGE UP> to its historically accurate function, click into the input field and hit the <ESC> key. Terminal will populate the input field with an octal escape code (\033).

So far this has been the hardest part of the exercise and why I wrote this blog post for posterity. If you need to remap key codes you probably know that <ESC> is \033. You might mistake the number 0 for the letter o, but then you have bigger problems if you are writing a C99 program like me.

Anyway, the rest of this exercise just involves normal typing of keys that turn into letters in the expected way!

Making <PAGE UP> send <esc>[5~ to STDIN_FILENO

This is all just bad behavior for so many reasons. What makes the Terminal beautiful is that it works with ASCII codes that are at once integer values, character values, and key codes. These ASCII codes describe both the data and the rendering of the data on the screen. If Apple’s, or anybody’s, Terminal app diverts a key code so that read() can’t read it, well, it’s like a web browser that doesn’t conform to HTML standards.

You might be thinking: “Who cares about a terminal app in this age of 5G, Mixed Reality, Machine Learning, the Cloud, and Retina Displays?”

Under all our modern fancy toys are command line apps accessed through terminals. Your web servers, your compilers, and your operating systems are all administered through command lines.

For decades enterprise companies, including Microsoft, have tried to make the command line and ASCII terminals obsolete. They created GUI control panels and web-based admin dashboards. But you know what? They are all failures of various magnitudes–slow, incomplete, and harder to use than typing commands into an interactive ASCII terminal. Especially during a crisis, when the servers are down, and software is crashing, and the OS is hung.

OK, back to work on my C code. I bet I could run over one million instances of this ASCII terminal app on my off-the-shelf Mac Mini!

Categories
Agile Principles

Introduction to Scrum and Management (Part 6 of 6)

This is the part I wrote first. All the other parts were written to justify this coldhearted analysis of what the role of management in Scrum should be. I was convinced that there had to be something more for management to do than “support the team and get out of the way.”

Over the years, managers of all stripes, engineering managers, product managers, project managers, manager managers, have complained to me, usually as a stage-whispered aside, that “agile is dead” or “scrum is not agile.” Their frustration seemed to come from several places: the lack of promised accelerated productivity, the lack of visibility (other than the sphinxlike slow burndown of story points), and complicated answers to simple Waterfall milestone status questions.

We managers, of all flavors, have layered on a whole superstructure of improvements on top of Scrum in our quest for certainty in an uncertain world. But let’s look ourselves in the selfie: Have these improvements worked? Have we improved Scrum? Have we delivered more certainty than what Scrum originally promised? No.

Working through the Computer Science foundations of Scrum, the data structures and algorithms, I realized that all these improvements to Scrum brought about by managers like me haven’t improved Scrum but obscured a scientific model of work under a fog of superstition, old husband tales, and best practices.

So, now, after all this, what really is the role of Management in Scrum?

Scrum is a system and humans are its parts

Scrum System Design

First, a quick summary of parts 1, 2, 3, 4, and 5:

  • I read a book on Scrum by the inventor and co-creator of Scrum and his son
  • I read this book because while I’ve been supporting Scrum for more than a decade, I kept hearing about how Agile is dead and Scrum is not Agile.
  • I realized two insights from a close reading of the book: managers have no formal role in Scrum (autonomous teams don’t need managers) and there is a hardcore computational basis for many of the processes that people follow in Scrum.
  • I further realized that if you don’t treat these data structures and algorithms for what they are, you don’t get the productivity and team happiness benefits of Scrum.

I bet, as an experienced scrum master, you already knew all this. But most of the management folks I run with don’t think of Scrum as a computational system. We managers tend to see Scrum as a set of new best practices for project management. This is a little like seeing Astronomy as a new and better way to cast horoscopes for Astrology.

Scrum, at its heart, is a computational system that creates a human-based machine. Scrum uses this human-based machine to accelerate productivity by removing waste from the work process. The secret of Scrum is in the constraints it puts around inefficiencies but not around creativity. The beauty of Scrum is in its economy of design. This design enables Scrum to apply to a wide range of work problems (not just software development). A side effect of Scrum is that the human-machine manages itself and its moving parts (team members) are happier than they are with a traditional manager-managed process.

If Jeff Sutherland, like Jeff Bezos, had built a private platform out of Scrum instead of a public framework, he would be rocketing people to Mars and tooling around on his billion-dollar yacht.

Treat people like machines

OK, fellow managers, here is my advice (caveat emptor)

First, leave Scrum alone. Don’t fix it. Don’t do pre-work outside of the Sprint. Don’t tell the Sprint team or the Scrum master what to do or how to do it. Let the Scrum process fix itself over time.

Second, fix the problems outside of Scrum with formal computation systems (human machines) for those folks left out of the Scrum process. Translate your work into data structures and algorithms and eliminate waste. Don’t worry about whether the computation will be performed by silicon or carbon.

Scrum does an excellent job of work-as-computation at high efficiency. It does this by creating formal roles for the people who Sprint and ensuring that all work is filtered for priority and done within a predictable, repeatable, time-boxed process.
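What does “translate your work into data structures” look like for a non-dev team? Here is a minimal sketch (the tasks and priority numbers are my own invented examples, not from the book) of a manager’s workload modeled the same way Scrum models it: a prioritized backlog, i.e. a sorted queue.

```python
import heapq

def pop_in_priority_order(items):
    """Drain (priority, task) pairs in priority order.

    The backlog behaves like Scrum's sorted queue: the lowest
    priority number always pops first, no matter the insertion order.
    """
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

# A hypothetical non-Scrum team's workload as a prioritized backlog.
backlog = [
    (2, "Draft Q3 budget"),
    (1, "Answer the VP's escalation"),
    (3, "Update the team wiki"),
]
for priority, task in pop_in_priority_order(backlog):
    print(priority, task)
```

The point is not the three lines of `heapq`; it is that once the workload is a sorted queue, “what should I work on next?” stops being a judgment call.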

BTW, this process of treating people like machines is nothing new!

The first computers were not made of silicon and software. They were people. For thousands of years people were doing the computing that enabled empires to trade, businesses to serve customers, and NASA to send rockets to the moon. Only within my lifetime have we delegated computation to non-humans.

I sense your eyebrows rising sharply! “Managers who treat people like machines are inhumane.”

And you are right. If we don’t follow Scrum’s model of how to compute well with people, then we managers are the living incarnation of Dilbert’s pointy-haired boss. We are micromanagers who make buzzwords out of useful tools like Agile, Scrum and DevOps. But if we don’t treat our people like machines what are we treating them like? Resources? Head counts? Soft capital?

So, if you think about it, as a manager, you pretty much treat your people like machines at some level. You give them tasks, expect them to ask relevant questions, and then to do the task to your specifications by the due date. You expect high-functioning employees to work well with vague input and all the rest to require SMART input. You don’t expect the employee’s feelings to impact the work. You are not a monster, but you have a business to run.

It is interesting to note that the people-treated-like-machines who follow a Scrum practice are far happier than their beleaguered and belabored non-Scrum counterparts. Why is that?

Formal (systems) beats casual (anything)

I know we live in an age of the casual work environment. Dress codes are relaxed, hours are flexible, and hierarchies, while still in use, have been hidden away like ugly relics of a less enlightened age. But only the outside of the workplace is casual. On the inside our workplaces are just as formal as they have always been. I believe the patina of unscripted, casual interaction has made the workplace hard to navigate and an unhappier place.

Let’s contrast the formalism of Scrum with the casualism of the rest of the office:

Workload

  • Scrum: Prioritized backlog (sorted queue) locked during the Sprint. “Lee just sent a high priority email. The Scrum master will take care of it for me!”
  • Non-Scrum: Multiple uncoordinated sources that can change at any time. “Lee just sent a high priority email. Should I drop everything to work on it?”

Workday

  • Scrum: Defined by the sprint as a loop of predictable duration, where the team commits to a specific number of story points and a daily check-in meeting. “I can completely focus on my stories and if I get blocked the scrum master will unblock me. I only have one meeting a day, so I don’t have to rudely work on my laptop during that meeting.”
  • Non-Scrum: Multiple uncoordinated open-ended workstreams with soft deadlines that demand multitasking. “I can’t focus completely on Lee’s request so it’s going to take days instead of an hour or two. I have so many meetings that I have to work on my laptop during each! I should also work during lunch and stay late but I’m feeling low energy and the kids need help with their homework.”

Work unit

  • Scrum: Story point: a well-described task with a set business priority and expected labor value such that the worker knows if they are spending too much or too little time. “I tested, documented, and committed my code. My teammates are doing a code review and will get back to me with feedback shortly. I know for myself that my work is on track, so I’ll start on my next story.”
  • Non-Scrum: An email, a document, a presentation, a spreadsheet, a list with no definition of done or labor value. “I sent Lee a deck, but I had to bump my other work to complete it. Is it finished? Should we meet to review it? Will my boss get a call from an angry department head because of all the bumping?”

Work team

  • Scrum: Product owner, scrum master, and a specific set of developers. Nobody else is on the team. “I know exactly who is working with me on this project. Lee is the EVP of XYZ but I don’t have to worry about that. The Scrum master will take care of it.”
  • Non-Scrum: Probably the people on the email you just got. “Is Lee working on this project or is Lee a stakeholder? Even Lee isn’t sure so to be safe just CC Lee on everything! The RACI is always out of date!”

We can easily see why the members of a Scrum are happier than the members of a Non-Scrum. Formalism brings clear boundaries so that employees know what they are doing, how well they are doing, and when they are finished. Non-Scrum team members might work all night on a project and find they failed because they didn’t work with the right info, or the right people, or the right priority. This kind of work-tragedy brings tears of frustration to the most experienced and valuable employees and leads to cynicism and other productivity busters that we managers are supposed to be managing out of the organization!

Because Scrum embraces and thrives on change the RACI is never out of date! Inside the sprint the priorities, the work to do, the due dates, the team members, and the estimated labor values do not change! Outside the sprint management brings everything the team has to do up to date. As a manager who prides himself on closing and finishing, I love the elegant efficiency of Scrum. I don’t know how other managers in other departments cope without Scrum.

We managers need not to fix Scrum but to fix ourselves. The dev team has become super effective. We, engineering management, product management, project management, and all the other managements need to catch up. We need formal systems of our own, similar to Scrum in the sense that they use data structures and algorithms to eliminate waste and accelerate work. 

Categories
Agile Principles

Introduction to Scrum and Management (Part 5 of 6)

Pavley.com presents the penultimate episode of ITSAM! Starring the algorithms of Scrum: the computational thinking that makes it possible to do “twice the work in half the time.”

Last episode, part 4, starred the story point as a data structure of enumerated values and its function as a signal of complexity. Story points are expressed as Fibonacci numbers, ratios of intuitively accelerating magnitude. The humble but nuanced story point is like the pitch of the teeth in the gear that runs your sprint iteration: The finer the pitch (smaller the story point values) the faster your productivity flywheel turns.

In this episode we turn away from story points and take a step back to discuss four unambiguously defined recipes that precisely describe a sequence of operations that drive the Scrum process. Scrum is often visualized as a set of nested loops and we’re going to do the same. These loops take an input state, the backlog, and transform it by iterations, into an output state, working software.

Ah, but there is a catch! People are not machines. We tend to mess with the sequence and order of Scrum operations and derail the efficiency of its algorithms and then wonder why “Agile is dead.”

The algorithms of Scrum

What an algorithm is and is not is critical to understanding how to Scrum. Get it right and the Scrum flywheel spins faster and faster. Get it wrong and the Scrum flywheel wobbles and shakes, eventually flying off its axle.

At the surface, almost any well-defined and repeatable process is an algorithm. Counting on your fingers, singing Baby Shark, and the spelling rule i before e except after c are more or less algorithms. To be a true computational algorithm all variation has to be nailed down. If human judgement is required in implementing an algorithm, as in knowing the random exceptions to the i before e rule, the algorithm isn’t reliable or provable.

Jeff and JJ Sutherland, in their book Scrum: The Art of Twice the Work in Half the Time, don’t mention algorithms. Probably because what I’m calling algorithms don’t strictly fit the Wikipedia definition. But I believe if we refine these processes as close to true computation as we can get, Scrum works well. I believe it because I’ve seen it! So, let’s take a quick survey of each core algorithm in turn–we’re looping already.

The sprint (outer loop)

The outer loop of Scrum is the sprint. It’s a relatively simple Algorithm.

// pseudo-code implementation of the sprint loop
while epic count > 0 {
  play planning poker() with highest-priority epic
  for each work day in sprint duration {
    standup() with sprint backlog for 15 mins
  }
  demo()
  if demo is not accepted {
    throw sprint broken error()
  }
  retrospective()
}

I like the idea of the sprint as an algorithm because there isn’t a lot of room for human creativity. But there are a few hidden constraints!

  • Scrum doesn’t want you to rest or waste time between sprints. Start the next sprint on the next working day.
  • Scrum wants the whole team participating in the sprint.
  • Scrum doesn’t want you to start a new sprint before the last one has completed.
  • Most importantly: Scrum wants all development activities to take place inside the sprint. This constraint creates a huge headache for product management, UX design, and QA as they are commonly practiced.

One reason Agile is dead and Scrum’s hair is on fire is that anything that happens outside the sprint is not Scrum, does not go fast, and creates terrible stories. 

For example, designing all your screens upfront with focus groups is not Scrum. Manually testing all your code after the demo is not Scrum. Skipping the demo, adding more engineers during the sprint, or asking engineers to work harder is not Scrum. The sprint loop with its constraints works really well if you don’t do any work outside the sprint!
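The sprint loop above can be exercised as runnable code. This is a sketch of the control flow only: the function and exception names (`run_sprints`, `SprintBrokenError`) are my stand-ins, not Sutherland’s terminology, and the real ceremonies are reduced to log entries.

```python
class SprintBrokenError(Exception):
    """Raised when the demo is not accepted -- the sprint is broken."""

def run_sprints(epics, sprint_days, demo_accepted):
    """Work through a prioritized list of epics, one sprint per epic.

    Mirrors the pseudo-code: plan, then one standup per work day,
    then demo, then retrospective -- and immediately loop again.
    """
    log = []
    for epic in epics:                      # loop until epic count == 0
        log.append(f"plan:{epic}")          # planning poker with top epic
        for _day in range(sprint_days):     # one 15-minute standup per day
            log.append(f"standup:{epic}")
        if not demo_accepted(epic):         # demo at the end of the sprint
            raise SprintBrokenError(epic)
        log.append(f"retro:{epic}")         # retrospective closes the loop
    return log
```

Note how the constraints fall out of the structure: there is no statement between one sprint and the next (no resting), and there is nowhere to do work outside the loop body.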

Planning poker (pre-condition)

The first thing a Scrum team does on the first working day of a sprint is to plan. The core of that meeting is the planning poker algorithm. It takes patience and practice to get right.

// pseudo-code implementation of planning poker
while consensus is not true {
  product owner explains story
  team asks clarifying questions
  for each developer in sprint team {
    compare story to previously developed story
    estimate work using story point value
    present estimate to team
  }
  if story points match {
    set consensus to true // breaks the loop
  }
}

The goal is to transform an epic into a prioritized backlog for the sprint. That means breaking a vague, unworkable narrative into specific, measurable, achievable, realistic, and time-bound (SMART) stories—and discovering new stories in the process. The result of planning poker is a pre-condition: a state to which the backlog must conform to enable a successful sprint.

In many Agile processes an epic is groomed or broken into stories before the sprint. It’s an earnest attempt to get ahead of the game. But breaking down an epic without the team playing planning poker means you get all the bad qualities of Waterfall–the qualities that Scrum was created to avoid.
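The consensus loop at the heart of planning poker can be sketched in a few lines. The estimator functions below are stand-ins for humans comparing a story to past work (my modeling choice, not part of Scrum); what the code captures is the re-vote mechanic: keep discussing and re-voting until every card matches.

```python
FIBONACCI_POINTS = (1, 2, 3, 5, 8, 13, 21)

def planning_poker(story, estimators, max_rounds=10):
    """Re-vote until every developer shows the same story-point card."""
    for round_number in range(1, max_rounds + 1):
        votes = {est(story, round_number) for est in estimators}
        assert votes <= set(FIBONACCI_POINTS), "votes must be story points"
        if len(votes) == 1:      # consensus: all cards match, loop breaks
            return votes.pop()
        # no consensus: discuss the outliers, clarify the story, vote again
    raise RuntimeError(f"no consensus on {story!r} after {max_rounds} rounds")

# Two hypothetical developers: one holds at 5, the other starts at 8
# but converges after hearing the discussion in round 1.
dev_a = lambda story, round_number: 5
dev_b = lambda story, round_number: 8 if round_number == 1 else 5
print(planning_poker("login page", [dev_a, dev_b]))  # -> 5, on round 2
```

The `max_rounds` guard is my addition: in a room of humans the loop terminates because people get hungry, but code needs an explicit bound.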

Daily standup (inner loop)

Have you ever been stuck in a status meeting with no ending in sight and most of the participants in the room paying attention to their phones and not the person speaking? The daily standup algorithm was created to banish the status meeting from the realms of humankind.

// pseudo-code implementation of the daily standup
accomplishments = List()
today's work = List()
impediments = List()
timer() start for 15 minutes
for each developer in sprint team {
  announce() accomplishments, append to team accomplishments list
  announce() today’s work, append to team today’s work list
  announce() impediments, append to team impediments list
}
if timer() rings {
  throw standup duration error()
}
timer() stop

I personally think this algorithm works for all types of work, not just development. Without a strict, formal model to follow, status meetings become planning meetings, brainstorming meetings, complaint sessions, political battlegrounds, ad infinitum.

High performing Scrum teams hardly ever drift from the classic daily status formula as described by Jeff Sutherland. Unfortunately, I’ve seen struggling teams give in to temptation and turn a good daily standup into a bad troubleshooting meeting. Don’t do it! Go around the room, check the boxes, and follow up with a cool head after all the accomplishments, today’s work, and impediments have been collected (so you know to start with the most urgent issues).
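Here is the standup as runnable code, with the 15-minute time box enforced. A simulated clock (my simplification) stands in for the wall clock so the duration check is testable; the three lists mirror the pseudo-code above.

```python
class StandupDurationError(Exception):
    """The 15-minute box was blown -- a signal that something is wrong."""

def daily_standup(updates, minutes_per_update=1, time_box=15):
    """Collect (done, today, impediment) from each developer in turn."""
    accomplishments, todays_work, impediments = [], [], []
    elapsed = 0
    for done, today, blocked in updates:
        elapsed += minutes_per_update     # simulated clock tick
        if elapsed > time_box:            # timer() rings mid-meeting
            raise StandupDurationError(elapsed)
        accomplishments.append(done)
        todays_work.append(today)
        impediments.append(blocked)
    return accomplishments, todays_work, impediments
```

The design choice worth noticing: blowing the time box raises an error instead of quietly running long. A standup that can’t fit in its box is itself an impediment to announce.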

Retrospective (post-condition)

I have to admit that the retrospective is my favorite part of the sprint process. If you do it well and stick to the algorithm a poor performing Scrum process naturally evolves into a high performing Scrum process.

// pseudo-code implementation of the retrospective
keep doing = List()
stop doing = List()
change = List()
for each member in sprint team {
  // includes product owner, devs, any other core team members
  announce() what went well, append to the keep doing list
  announce() what didn’t go well, append to the stop doing list
  announce() what needs to change, append to the change list
}

Like the daily standup it takes a surprising amount of resolve to stick to the plan and not turn the retrospective into a war crimes trial or a cheerleading exercise. Oddly, the other major problem with the retrospective is lack of follow-up! We get these great lists of things to repeat, to stop repeating, and to change but many times they go nowhere.

It’s important to drive the items on each list into SMART territory so that a manager can do something about them. Noting that “the backlog was not well groomed” or “the stories needed more refinement” just isn’t enough signal to result in a meaningful change. And, of course, there are issues that can’t or won’t change. They have to be worked around.
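As a toy illustration of that follow-up step, here is a sketch that triages the retrospective’s change list. The vague-phrase heuristic is entirely my own invention (real SMART-ness takes human judgement), but it shows the shape of the work: separate items a manager can act on from items that need sharpening first.

```python
# Phrases that suggest an item is too vague to act on (my heuristic).
VAGUE_MARKERS = ("better", "more", "not well", "needs refinement")

def triage_change_list(items):
    """Split retro 'change' items into actionable vs needs-sharpening."""
    actionable, vague = [], []
    for item in items:
        lowered = item.lower()
        if any(marker in lowered for marker in VAGUE_MARKERS):
            vague.append(item)        # e.g. "backlog was not well groomed"
        else:
            actionable.append(item)   # specific enough to assign and track
    return actionable, vague
```

The `vague` list is what the scrum master brings back to the team for refinement; the `actionable` list is what gets barged into the manager’s office.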

While the retrospective is very much like a computational algorithm your response to its findings has to be creative and bold. After every retrospective I expect a scrum master to barge into my office, interrupt whatever I’m doing, and hand me a list of what must change. It’s the one output of the Scrum process that an engineering manager can participate in and it doesn’t happen often enough!

As our heroes, the algorithms of Scrum, walk arm-in-arm into the sunset let’s review the basic tenet of what we learned: The more you treat the sprint, planning poker, the daily standup, and the retrospective like the gears in a clockwork engine, the faster that engine runs. There is plenty of room outside of these algorithms but resist the temptation to add value. You’ll probably be surprised at how well Scrum works when you respect it and don’t try to fix it.

In our next and final installment of ITSAM I’m going to actually talk about management: If a manager’s job doesn’t involve giving orders, taking temperatures, and holding people accountable–what is her job? Why do we even need managers if we have Scrum?

Categories
Agile Principles

Introduction to Scrum and Management (Part 4 of 5 or 6)

Our story so far: in part 3 I described the Scrum team as a data structure—an undirected graph. I tried to show how the properties of an undirected graph predict how a Scrum team behaves and how it can be optimized for productive behavior. Part of that optimization is keeping teams small, eliminating hubs, and breaking the sprint if anything doesn’t go as planned. Undirected graphs are harsh but if we respect them, they will reward us.

Today we’re looking at the third major data structure of Scrum: the story point. OMG! Let me just say that story points are the most powerful and most misunderstood idea in Scrum. Because story points are expressed as integers, it’s hard even for experienced engineering managers like me not to mistake them for integers.

The story point

This series of blog posts has become for me, my own A Song of Ice and Fire. Author George R.R. Martin originally estimated that he was writing a trilogy. But as Martin started writing, the series became six and now seven books. Honestly, I don’t trust Martin’s estimate of seven books. Given how popular “Game of Thrones” has become, if Martin lives forever, I expect he will be writing ASOIAF books forever.

When I started out writing an Introduction to Scrum and Management, I took my detailed notes from reading Jeff and JJ Sutherland’s book Scrum: The Art of Twice the Work in Half the Time and estimated I could express myself in three blog posts, maybe four just to be on the safe side. I need to time box my projects as spending too much time on any one project steals valuable time from others. As you can see from the subtitle of this post (Part 4 of 5 or 6) my estimate of the number of parts continues to increase. My project is over budget and the final post is delayed!

Jeff Sutherland, a good engineering manager, knows that people are terrible at estimating effort. Sutherland knows that less than one third of all projects are completed on time and on budget. He also knows that there are many reasons for this (poor work habits, under- or over-resourced teams, impediments that never get addressed) but the root cause is our inability to estimate timing (unless we have done the task before and have transformed it into a repeatable process).

The problem with writing fantasy novels and software is that they are not repeatable processes.

This is why Sutherland invented story points and George R.R. Martin still writes his novels with WordStar running on a DOS PC. Since Sutherland and Martin cannot control the creative process, they put constraints around it.

The story point was invented by Jeff Sutherland because human beings really can’t distinguish between a 4 and a 5. Jeff was looking for a sequence of numbers where the difference between each value was intuitive. Jeff realized that the Fibonacci numbers, a series whose ratios of consecutive terms approach the Golden Ratio, were the perfect candidates to do the job of estimating work. Art lovers, architects, mathematicians, and scientists all agree that the world around us is built upon a foundation of Fibonacci numbers.

I could muse for endless paragraphs on how Fibonacci numbers are so elegant that they enable artists and artichokes alike to create beautiful compositions. But let’s just take Fibonacci numbers for granted and see how they are used to implement story points.

Here are the first eight Fibonacci numbers. It is easy to see that as the numbers increase in value the difference between each number increases. This acceleration in difference is in harmony with our ability to detect fine differences at a small scale but not a large scale.

1, 1, 2, 3, 5, 8, 13, 21

Each number in the sequence is the sum of the pair of numbers that immediately precede it. You can do the math if you don’t want to take my word for it!
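Or let the code do the math. This short sketch generates the first eight Fibonacci numbers and the gaps between them, making the accelerating-difference property visible:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers (1, 1, 2, 3, ...)."""
    seq = []
    a, b = 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b   # each number is the sum of the previous pair
    return seq

points = fibonacci(8)
gaps = [y - x for x, y in zip(points, points[1:])]
print(points)  # [1, 1, 2, 3, 5, 8, 13, 21]
print(gaps)    # [0, 1, 1, 2, 3, 5, 8] -- the differences widen
```

Notice the gaps themselves are (almost) the Fibonacci sequence again: fine distinctions at the small end, coarse ones at the large end, matching how humans actually perceive effort.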

A diagram of Fibonacci squares shows the magnitude of Fibonacci progression nicely.

But let’s back up a bit. Why do we need Fibonacci numbers? We’re developing software not paintings or artichokes!

In Scrum a story is a simple description of a chunk of work to do. A sprint is a repeating and limited duration of time in which to do work. Since the work to be done is creative, it can’t fully be understood until the worker is doing it. Thus Scrum constrains the process of doing the work but not the work itself.

In summary

  • Stories constrain the definition of work
  • Sprints constrain the time allotted to work
  • Story points constrain the amount of work based on a story that is planned to be executed during a sprint.

If you have done something before, and absolutely nothing has changed, then you don’t need story points. But almost all software development projects involve new requirements, new technologies, and new techniques. When planning a software development project, the big problem is where to start. It’s hard to know how to break down a big project into nicely workable chunks.

Story points get the developers and product owner talking about where to start and how to break the problem down. In discussion during the sprint planning meeting, 13-point stories are broken into several 8-point stories. 8-point stories are broken down into many 5-pointers. And so on until all that is left are dozens if not hundreds of 1-point stories (which are, by their nature, very well understood stories).

Scrum masters and engineering managers know that a 13-point story isn’t divisible into one 5-pointer and one 8-pointer! A backlog of story points is not commutative, associative, or distributive like the ordinary numbers we grew up with. Story points can’t be added, subtracted, multiplied or divided together.

We also know that one team’s 13-point story is another team’s 21-point story. Story points are relative to the team, they change in value as the team gets better (or worse), and are not comparable unless the same people have worked together on the same project for hundreds of sprints.

As a data structure the enumerated values of story points are a wonderful set of flags, where the difference between each flag is intuitive. Story points are signals, not units.
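One way to encode “signals, not units” in code (a sketch of my own, not anything from the book) is an `Enum`: you get the ordered labels without the arithmetic. Trying to add two story points raises a `TypeError`, which is exactly the behavior we want, since a 5 and an 8 do not make a 13.

```python
from enum import Enum

class StoryPoint(Enum):
    """Story points as enumerated flags: labels with Fibonacci values,
    deliberately without +, -, *, or / -- signals, not units."""
    XS = 1
    S = 2
    M = 3
    L = 5
    XL = 8
    XXL = 13

# StoryPoint.L + StoryPoint.XL raises TypeError: the "math" of
# combining or splitting stories happens in planning-poker discussion,
# not in arithmetic.
```

If a tool needs a number (say, for a burndown chart) it must reach for `.value` explicitly, which is a useful reminder that the conversion is lossy.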

Alright, this blog post was a bit long but in my defense story points are a nuanced concept. I think we’re just about at the end–which should be a relief to all of us. The good news is that my ability to estimate has significantly improved by doing the work. In the next blog post I’m going to talk about the Algorithms of Scrum.

Next, the penultimate episode, part 5