Categories
Uncategorized

Swift Property Wrappers

A property wrapper in Swift is a simple and clean way to mix behaviors with properties via syntactic sugar. Simple means that to wrap a property you just declare it with an @ sign and the name of the wrapper. Clean means that all the code for the injected behavior lives in one place. You could do this all with functions, but property wrappers require less cognitive juggling.

Let’s walk through an example of using a property wrapper to mix constraints with a String property so that we can use it to represent a Tic Tac Toe game board.

Here are some well-known rules of Tic Tac Toe that we want any representation to conform to:

  • The board has 9 squares, no more, no less.
  • The only marks a player can make on a board are noughts and crosses.

A Do-Nothing Implementation

Let’s start with a simple property wrapper that represents a Tic Tac Toe board as a String with no rules enforced…

@propertyWrapper
struct TicTacToeBoard {
    private var state = ""

    var wrappedValue: String {
        get { return state }
        set { state = newValue }
    }

    init() {
        state = ""
    }
}

This code handles getting, setting, and initialization of a wrapped property. Getting, setting, and initialization are the moments in the wrapped property’s life cycle where we will want to mix in constraints to ensure the TicTacToeBoard conforms to our rules.

To wrap a property with this wrapper we annotate it with the name of the wrapper inside a struct or class…

struct TicTacToeGame {
    @TicTacToeBoard public var gameState: String
}

(Obviously in a real Tic Tac Toe game we would have many more properties.)

To test our wrapper we can instantiate a game object and see if we can write to and read from the game state property…

var game = TicTacToeGame()
game.gameState = "_"
print(game.gameState)
// output: _

Excellent! We’re at a good starting point. Our wrapper is working but doing nothing. It’s time to mix in some behavior with the game state property.

Constraint 1

Let’s start by ensuring the length of a Tic Tac Toe game board, as memorialized in the game state property, is always nine squares long.

(I know that in the real world a Tic Tac Toe game board is a 3 x 3 square but from a state point of view that is an implementation detail.)

@propertyWrapper
struct TicTacToeBoard {
    private var state = ""
    private var length = 9

    var wrappedValue: String {
        get { return state }
        set {
            // Constraint #1
            // ensure that the length of the newValue is not too
            // long or too short
            if newValue.count > length {
                // Constraint #1a
                // truncate a board that is too long
                state = newValue.prefix(length) + ""
                // the + "" makes the Substring a String
            } else {
                // Constraint #1b
                // pad a board that is too short
                state = newValue + String(repeating: "_", count: length - newValue.count)
                // could not use String.padding() because of SIGABRT
            }
        }
    }

    init() { /* ... */ }
}

We’ve expanded the set clause to check for a state string that is too long or too short. If the state string is too long (newValue.count > length) then we truncate it, throwing the characters beyond index 8 away. If the state string is too short (newValue.count < length) we pad the right end with underscore characters. We’re making some harsh calls here, throwing away data, which we will deal with later on.

Note that I had to do some tricks to get this simple code to work. Swift has come a long way in making strings, substrings, and characters interoperable, but I still had to add an empty string to coerce the result of prefix() into a String. I also could not use the padding() method to pad the state string because doing so resulted in a crash. Maybe these are bugs or maybe I’m not reading deeply enough into Apple’s String documentation.
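For readers who would rather avoid the empty-string trick, the String initializer performs the same coercion explicitly. Here is a minimal sketch of both coercions side by side (the variable names are my own):

```swift
let raw = "xoxoxoxoxoxo"   // 12 characters: too long for a 9-square board
let length = 9

// prefix(_:) returns a Substring, a view into the original String's storage
let clipped = raw.prefix(length)

// Either coercion produces an independent String
let viaInit = String(clipped)            // explicit initializer
let viaConcat: String = clipped + ""     // the empty-string trick used in the wrapper

// Both are the same 9-character board: "xoxoxoxox"
```

Either form works; String(clipped) makes the Substring-to-String conversion obvious at a glance, which is why some style guides prefer it.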

When we run our updated code the result is a game board that is always exactly 9 squares in length…

var game = TicTacToeGame()
game.gameState = "_"
print(game.gameState)
// output: _________

Constraint 2

Now let’s ensure that a game board only contains legal characters. In the case of Tic Tac Toe we’ll use the following characters as symbols:

  • “_” for empty square
  • “o” for nought
  • “x” for cross

Now that we’ve defined how our game universe is symbolized, it’s time to put these definitions into code…

@propertyWrapper
struct TicTacToeBoard {
    private var state = ""
    private var length = 9
    private var legalCharacters = "_ox"

    var wrappedValue: String {
        get { return state }
        set {
            // Constraint #1
            // ...

            // Constraint #2
            // ensure that the state only contains legal chars
            let legalizedState = state.map { legalCharacters.contains($0) ? $0 : "_" }
            state = String(legalizedState)
        }
    }

    init() { /* ... */ }
}

For the purposes of this demo we don’t have to define the meaning of each symbol. We just have to let the game know that “_”, “o”, and “x” are the only legal characters.

We’re using a map function to map any illegal characters to an empty square (“_”). Here we are again, throwing away valuable data because it doesn’t fit our model of what a Tic Tac Toe game is. (We’ll correct this problem shortly, I promise.)
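The sanitizing map is easy to try on its own, outside the wrapper. A small standalone sketch (variable names are mine):

```swift
let legalCharacters = "_ox"
let dirty = "_ox_x_z_x"   // contains an illegal "z"

// map visits each Character; illegal ones collapse to an empty square ("_").
// map returns [Character], so we rebuild a String from the result.
let cleaned = String(dirty.map { legalCharacters.contains($0) ? $0 : "_" })
// cleaned == "_ox_x___x"
```

Note that the closure never throws or fails: every character is mapped to something, which is exactly why illegal input is silently discarded.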

Run a test and we’ll see that illegal characters are replaced and the length of the game board remains consistent…

var game = TicTacToeGame()
game.gameState = "_ox_x_z_x_"
print(game.gameState)
// output: _ox_x___x

Retaining Data

It kind of feels like we are done. We have successfully used a property wrapper to inject our model of a Tic Tac Toe game into an ordinary String object. But I’m still nervous about throwing away data. It might be important to preserve the original value of the game state before we sanitize it. Future maintainers of this Tic Tac Toe repo might need that data to improve the game. Luckily for us, Swift’s property wrapper mechanism has a property called projectedValue which we can use to retain the original game state value.

Let’s implement a projectedValue for our Tic Tac Toe game…

@propertyWrapper
struct TicTacToeBoard {
    private var state = ""
    private var length = 9
    private var legalCharacters = "_ox"
    var projectedValue = ""

    var wrappedValue: String {
        get { return state }
        set {
            // Save the original newValue as the projectedValue
            projectedValue = newValue

            // Constraint #1
            // ensure that the length of the newValue is not too long or too short
            if newValue.count > length {
                // Constraint #1a
                // truncate a board that is too long
                state = newValue.prefix(length) + "" // the + "" makes the Substring a String
            } else {
                // Constraint #1b
                // pad a board that is too short
                state = newValue + String(repeating: "_", count: length - newValue.count)
                // could not use String.padding() because of SIGABRT
            }

            // Constraint #2
            // ensure that the state only contains legal chars
            let legalizedState = state.map { legalCharacters.contains($0) ? $0 : "_" }
            state = String(legalizedState)
        }
    }

    init() {
        state = "_________"
    }
}

All we had to do was add a var called projectedValue and then assign newValue to it before we started messing around with it. To access the projected value you use a $ in front of the wrapped property name. Let’s update our previous test to print out both the wrapped property and the projected value…

var game = TicTacToeGame()
game.gameState = "_ox_x_z_x_"
print(game.gameState, game.$gameState)
// output: _ox_x___x _ox_x_z_x_

A projectedValue, like a wrapped value, is not persisted. If we wanted to we could log both values to a file with an ISO 8601 date-time stamp.
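As a sketch of that logging idea, Foundation’s ISO8601DateFormatter can produce the timestamp. The logEntry function, its labels, and its format are my own invention, not part of the wrapper:

```swift
import Foundation

// Hypothetical helper: format one log line for the raw (projected) and
// sanitized (wrapped) board states with an ISO 8601 timestamp.
func logEntry(raw: String, sanitized: String, at date: Date = Date()) -> String {
    let stamp = ISO8601DateFormatter().string(from: date)
    return "\(stamp) raw=\(raw) sanitized=\(sanitized)"
}

let entry = logEntry(raw: "_ox_x_z_x_", sanitized: "_ox_x___x")
// e.g. "2021-06-01T15:04:05Z raw=_ox_x_z_x_ sanitized=_ox_x___x"
```

Writing each entry to a file (or a proper logging framework) is left out here; the point is only that the projected value gives us the raw data worth logging.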

When to use Wrapped Values

We could have implemented the behavior of our Tic Tac Toe game board in many ways with the traditional methods and properties of a class or struct. We could have made the game board a type or composed of a set of types.

Classes and structs (types) are excellent for managing complexity. As a codebase evolves over time with many maintainers the power of types helps keep the code organized, documented, and testable. But managing complexity requires its own level of complexity. Sometimes the effort involved in maintaining and extending custom types overwhelms the code itself so that we spend more time refactoring old code than we do writing new functionality.

Property wrappers, which form the basis of powerful abstractions like @State, @Binding, @ObservedObject, and @Published, are used to weave together existing types without inheritance, extension, or composition. @TicTacToeBoard is not a new type; it’s a String that has been decorated with additional code. Property wrappers are a great way of keeping abstractions separate and independent.

Looking back on my example, I’m not sure it is a good one for the property wrapper abstraction. @State is good as a property wrapper because it makes any property observable. @TicTacToeBoard is not an abstraction that many other properties are going to need to worry about.

Going forward I would use a property wrapper for memory management, state management, storage management, logging, and DB connections. These are all behaviors that Apple should just provide, as they have done with state management via @State. But until Apple does, if they do, you’ll want to have a library of management code injectable through property wrappers for all your properties.
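As one hedged sketch of what an entry in such a library might look like, here is a hypothetical @Logged wrapper that records every write to a property; the name and behavior are my own, not an Apple API:

```swift
// Hypothetical reusable wrapper: logs every write to the wrapped property.
@propertyWrapper
struct Logged<Value> {
    private var value: Value
    private let label: String

    var wrappedValue: Value {
        get { value }
        set {
            // Log the transition before storing the new value
            print("[\(label)] \(value) -> \(newValue)")
            value = newValue
        }
    }

    init(wrappedValue: Value, label: String) {
        self.value = wrappedValue
        self.label = label
    }
}

struct Settings {
    @Logged(label: "volume") var volume = 5
}

var settings = Settings()
settings.volume = 7   // prints: [volume] 5 -> 7
```

Because Logged is generic over Value, the same few lines of logging code can decorate an Int, a String, or any other property without touching the enclosing type.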


Learning by Doing

The Wrong Way

When I learned to code, I thought I was learning the wrong way. The 1980s were the Bronze Age of the personal computer with the Apple II, Commodore 64, Atari 800, and the TRS-80 competing for mind and market share. I had gotten it into my head that every home, school, and office would soon have a PC dutifully managing info and automating tasks like the faithful robots that Isaac Asimov had written about. I felt desperate in my need to understand the inner workings of these heralds of the digital age.

Since I was a cash-strapped college student, all I could afford was the deeply discounted and distinctly weird TI-99/4A. I bought the Texas Instruments home computer because its specs were impressive (16-bit CPU) and it cost about 100 bucks. The TI did not set the world on fire. There was very little commercial software. If I wanted to do more than play a few cartridge-based games I would need to write my own programs.

There was a very vibrant computer hobbyist magazine market in the 80s. My favorite magazine was COMPUTE! Every issue was chock full of news, speculation, and tutorials for each of the major PC systems. I read COMPUTE! from cover to cover and typed in every TI BASIC program printed each month. Since the TI was far from popular there were few articles devoted to its flavor of BASIC. A few days after I devoured a new issue of COMPUTE! I ran out of things to do with my TI.

Trial and Error

Out of desperation and desire I did something unthinkable! I attempted to adapt the Apple, Commodore, Atari, and TRS programs for the lowly TI. At first my efforts simply failed. The BASIC programming language that all these systems shared was broadly similar, but the display, sound, and file subsystems were very different. I learned, the hard way, that what we might call backend programs today were far more portable than frontend programs. The calculation code of a financial calculator could easily be written in a kind of “universal BASIC” while the charting code had to be redesigned entirely. (We have this problem today! It’s why iOS and Android apps have to be different codebases.)

I was learning, through trial and error, some of the basic principles of software engineering. Insights that I use in the current Golden Age of cloud computing and internet connected devices. I don’t believe I would have retained, or even truly understood, these principles if I had not attempted to “just do it.” An important part of my learning experience is that I had no idea if I would succeed. It was pure R&D.

Learning by Doing Principles

Here’s what I’ve found to be the important elements of learning by doing (from someone who is learning by doing it).

  1. Limited access to expertise: If we have a lifeline, we’re going to use it. Don’t use it. Don’t Google that error message or look for the answer on Stack Overflow. Just bang your head against the wall until something shakes loose.
  2. Unlimited access to examples: This is what is truly great about open source. We have so many examples of working code. These examples might not directly solve the problem at hand, but they provide hints and inspiration.
  3. Unlimited time: For work and play it’s best to timebox all our activities lest we miss deadlines or overindulge. But for learning it’s best to take all the time we need. Learning by doing is exploratory and self-driven. If we rush it, we risk completing the project but not the learning.
  4. Simple systems: The TI-99/4A had a single CPU that executed code slowly, in 16KB of RAM, and plotted graphics on a 256×192 bitmap. There wasn’t much there to distract the learner from her self-appointed task. Lucky for us, every UNIX-based computer has a terminal app that transforms your modern supercomputer into something like a 1980s home computer. We can even install BASIC! (Well, don’t go that far; Python or JavaScript are fine for learning-to-code projects.)

The Right Way

In hindsight I realize that my mix of limited resources and unlimited time was a big part of my learning success. I had to improvise. I had to stretch and grow. I could not Google or Stack Overflow the answers.

Don’t take my word for it! Learning by doing (project-based learning) is a pedagogically approved alternative to teacher-led instruction. It also looks a lot like a good Agile development process! At Apple we used to say, “you don’t really know how to write code until after you have written it.” This is why the 2nd and 3rd generations of every software system are big improvements. It’s not the new features, it’s the lessons learned by doing that lead to safer, faster, and more reliable software.

The Only Way

The core lesson I’ve learned by writing code for 30 years is that we’re still figuring it out. While we have amassed a considerable amount of knowledge, the pace of technological change continuously challenges that knowledge. When it comes to coding, learning by doing isn’t the wrong way; it’s the only way.


Learning from First Principles

A Troublesome Student

I was a poor student in elementary school. I was unable to focus for more than a few minutes at a time. Bored and restless I treated the classroom as a kind of one-man show to entertain myself and earn the admiration of my peers. I remember bringing our family’s fancy silverware to school one morning in 4th grade. During math class I distributed the cutlery to each student at each desk as if we were about to enjoy a meal. But there was no meal and my mother and teachers were not amused.

In this day and age, I would have been diagnosed with a “learning problem.” Most likely Attention Deficit Disorder, the infamous ADD, and perhaps medicated. Luckily, I grew up in the dark ages of the late 1960s. The diagnosis was “troublemaker” and the solution was a desk in the hallway outside of the classroom. I was given only math worksheets and a pencil for entertainment. The educational system had given up on me.

Sitting alone and isolated was one of the best things that ever happened to me as a child.

With no audience and no windows in which to daydream I opened the workbook to page one and started doing math problems. There was nothing else to do and no instructor to rebel against. 

It was slow going at first but by the end of the month I was in love with math and several chapters ahead of the class. I moved up and out from 4th grade into 5th grade math. I was rehabilitated into the inside of the classroom. Eventually, I even started to pay attention to the math teachers. 

I had learned 4th grade math from first principles.

I should note that I’m not a great mathematician. Because of my loner learning style, I don’t do math like classroom-trained people do. For example, I never bothered to memorize my times tables. Instead I write out a table of three number lines: natural, even, and odd. I can then derive any integer product, dividend, or remainder by lookup and simple arithmetic. I had created a tool to avoid rote memorization.

More importantly this was my rough introduction into learning how to learn.

Learning without Guidance

Learning from first principles means learning without a guide and building up from what you already know. The most famous example of first principles is Euclid’s Elements. I’m sure you’ve heard of this ancient Greek geometer. He started with a small set of fundamental observations about the behavior of points and lines on a plane and derived hundreds of conclusions that formed the basis of geometry as we practice it today.

During the early part of my career we had to be a bit like Euclid to figure out how to develop software, deliver applications, and build teams from first principles. Back in the 1980s and 1990s we didn’t have best practices and opinionated programming languages. We had to solve problems with code without guidance.

At Apple we did object-oriented programming before there were object-oriented programming languages. We learned from first principles that function pointers and pointers to pointers (handles) could be used to encapsulate properties and methods into objects. We didn’t invent C++ or Objective-C but when these advances came around, they felt like old friends.

At DoubleClick in the 2000s we did cloud computing without containers or virtual machines. We learned from first principles how to configure hardware and software, so that every server was an exact clone. This enabled ad serving workloads to be automatically propagated through our network so that our QoS was easily maintained in the face of failures. We didn’t invent AWS but when Amazon started marketing their excess capacity, we had a similar business plan on the table. 

The power of learning from first principles is that it empowers you to figure out the future on your own with the tools you have at hand. The best practices and opinionated programming languages of today are the first-principles hacks of yesterday.

A Short Guide to Guideless Learning

So, how does one learn this way? How does one start down the path of first principles?

It starts with observation. Start like Euclid did with a set of simple, self-evident facts that a non-expert can understand. Euclid did this by observing how points and lines interact to create angles and shapes on a flat surface.

It helps, in my opinion, to be a non-expert in order to make these observations. I fight hard to ignore what I “know” so I can see what is really going on.

Restrict your observations to the abstract. Euclid only recorded observations that are universal regardless of the environment. He focused on how points and lines interact on an ideal surface. Euclid didn’t get caught up in specific details.

Abstraction is a superpower all of us share. Abstraction is the ability to discount the non-essential and to focus on the properties that are shared by seemingly unique entities. It’s been helpful to me to think about effects — not causes — when I’m thinking abstractly.

Build up a foundation of true definitions. Euclid recorded his observations as a handful of definitions upon which he built the empire of two-dimensional geometry. These definitions were Euclid’s building blocks in the same way that the definitions of legal terms build up a contract or the line items of expenses and credits build up a budget.

I find it helpful to record each observation about the system I’m studying on an index card. These cards become my working dictionary for the problem space. As a bonus, index cards are cheap, easy to order and reorder, and they restrict my verbosity.

Apply simple logic to the definitions to make predictions. Euclid used his definitions to build up hundreds of propositions. These propositions were not simple and not obvious, and yet he derived them simply and obviously. When Euclid was done, he could predict the number of degrees in an angle, the length of lines, and the diameter of circles.

It helps to express predictions in diagrams and to show how the logic works with arrows and annotations. I used to start with the diagram. But if the underlying observations, abstractions, and definitions are not rigorous, then the diagram is worthless. I’ve seen these worthless diagrams on whiteboards and in business docs during my entire career.

Be a Lazy 9-Year-Old

The lazy 9-year-old version of me accidentally discovered learning from first principles out of a fascination with patterns and a lack of fascination with listening to others. I’ve matured and now deeply value what others, even elementary school math teachers, have to say.

A great place to discover learning from first principles is in the classroom. The student has to be receptive. The teacher has to be empowered. Parents have to be patient. Society has to be supportive.

I still use learning from first principles to this day. It’s how I figure out how to deliver more value with less budget or scale software development with hundreds of engineers. In full disclosure, I do read and value all the books and blog posts on best practices and prior experiences. But I view this prior art as theory and not law. There is still a lazy 9-year-old deep inside me who isn’t content with being told. I need to find out how things work for myself.


Success Means Learning

Much of my success in life I attribute not to the fortunes or misfortunes of my birth (genetics, socioeconomics) but to my ability to learn and act on what I’ve learned.

I could be totally wrong. As an experiment of one, all my success could be due to luck and happenstance. I have learned, though, that all-or-nothing explanations tend to be wrong. It’s unlikely that any answer to any complex question is due to a single factor. There is almost always a collaborator lurking in the background!

In the case of learning I have found that the ability to continuously learn is much more important to living in the 20th and 21st centuries than other highly desirable abilities. A good general learner is better able to handle change, surprise, and ambiguity than the expert or the specialist. This is my personal experience of personal growth and of managing hundreds of software engineers and managers over the last 30 years.

Here is a trivial example:

  • JP is an okay coder. Not brilliant, but able to reliably get the job done with help from co-workers and Stack Overflow.
  • PJ is an excellent coder. Brilliant but only really into one programming paradigm. 
  • JP has no strong opinions and thus their work is all over the place in terms of coding style and techniques.
  • PJ has many very strong opinions on “best practices” and thus their code is clear, concise, and predictable.
  • It is interesting to note that a JP story point tends to be a larger but fixed value compared to PJ’s. JP takes about the same amount of time and effort to get anything done, whereas PJ tends to be more variable in their output.
  • A scrum master can sleep on the job with JP but really needs to pay attention to PJ as one never quite knows when or what PJ is going to deliver.

I bet you’ve already caught on to my attempt at fooling you! Over my 30 years of coding I’ve been both JP and PJ. I started out as a PJ and gradually I’ve learned to become a JP. The time and effort it takes to become an expert and to formulate strong opinions has led to diminishing returns. Our world just continues to spin faster with more variability than purists have time to master.

Who would you rather have on your team, JP or PJ? Who would you rather manage? Who would you rather be?

We need both JPs and PJs. I’ve worked at startups and large enterprises where we have tried to hire only one type and that has led to disaster. JPs make few “clever hacks” while PJs hit few milestones. A team of mostly JPs with some PJs seems to be ideal.

The main difference between JP and PJ is how they learn and how they use what they learn.

In the following series of six blog posts I’ll look at how I’ve optimized my ability to learn based on the themes of learning from first principles, learning by doing, thinking with external representation, learning through community, learning through imitation, and the realization that ultimately all learning is self-guided. 

In full disclosure, I have invented none of these concepts and not everyone agrees that these ideas completely model the learning experience. I’m just standing on the shoulders of giants and organizing useful knowledge, in a general way, as best as I can.

I’d love to hear about your experiences with learning so that I can better refine my own ideas (which, BTW, is an example of community learning). 


Hidden Bluetooth Menus in macOS Big Sur

Last night my Magic Keyboard developed a bad case of typing lag. As I was coding in Xcode I observed a huge delay (in seconds!) between pressing a key and its corresponding character appearing on the screen. 😖

IT Skills Activate

To diagnose and narrow down the problem (Xcode keyboard processing? A rogue process running on my Mac? A Bluetooth bug in Big Sur? The keyboard itself?) I did the usual: Googled the symptoms and tested for typing latency with various apps and keyboards. I isolated the problem and consistently reproduced it with the Magic Keyboard. With a wired connection to the Mac there was no lag, but over a Bluetooth connection the keyboard was stuttering!

This blog post by Scott Wezey (some guy on the Internet generous enough to share his experiences) seemed to help: https://scottswezey.com/2020/04/12/mac-bluetooth-lag/

Well, in full disclosure, the problem went away all by itself before I got to try any of Scott’s suggestions. I hate when that happens! There is a kind of uncertainty principle of hardware system debugging where just closely observing the problem makes it dematerialize. I’ve found that patiently disconnecting, reconnecting, and then, if that doesn’t work, rebooting (DR&B) makes 99% of all hardware problems run and hide. I suspect some cache was cleared or some quantum flux was flexed. Whatever it was, the problem is now solved or hibernating, and I’m happily typing this blog post with normal latency.

Option-Click Treasure

But Scott did remind me that macOS is a treasure trove of hidden menus! Holding option while clicking on the Bluetooth icon in the Mac menu bar yields additional options! These options are generally features for hardware admins and power users. For example, clicking on the Bluetooth menu yields a list of connected devices. Command-, option-, and shift-clicking icons (in different key combinations) reveals different sets of features.

Clicking (with the primary mouse button) shows a short list of previously and currently connected devices (with battery level), the ability to toggle Bluetooth on and off, and a link to Bluetooth system preferences.

macOS Big Sur Bluetooth icon menu.

Option-clicking reveals the list with much more diagnostic info: MAC address, firmware version, Role (client/server), and RSSI (signal strength). With this info a good hardware admin can resolve problems with distance and configuration. Well, except that modern Bluetooth devices automatically configure themselves, so really all you can do is DR&B.

macOS Big Sur Bluetooth icon option-click menu.

Option-shift clicking reveals even more: three utility commands that manually do the work of configuration: Reset, Factory reset, and Remove all. Reset is basically DR&B. Factory reset returns the device to its just-out-of-the-box state. Remove all disconnects all connected Bluetooth devices. This last option is a great way to sweep away the cruft of poorly connected Bluetooth devices that might be interfering with each other (or spying on you).

macOS Big Sur Bluetooth icon option-shift-click menu.

DR&B FTW

The moral of this tale is that when you’re experiencing Bluetooth issues do the option-shift click on the menubar icon if DR&B doesn’t work. You might find that a keyboard and mouse are in conflict or a ghost connection is haunting your Mac!

Oddly, the Bluetooth system preferences pane doesn’t have the admin tools that option-shift clicking reveals. Maybe all this is documented in an Apple support manual. I can’t seem to find it!

I’ve started a GitHub repo to collect these hidden gems, not just for Bluetooth, but for everything that macOS provides. Please contribute with a pull request!

Categories
Nerd Fun Tech Trends

Mac Pro

Search for “Mac Pro” and you’ll get this article, You probably won’t be buying a Mac Pro this year, this video, Do I Regret buying the Mac Pro? 3 Weeks later.., and this Quora question, Is the New Mac Pro worth the price?

The conventional wisdom is that Mac Pro is expensive, for professionals only, over powered, and there are better options from Apple for consumers and business users.

I don’t agree. Don’t get me wrong, if you need a computer for today’s challenges, then these helpful explainers on the Internet have good points.

  • The Mac Pro is pricey if all you’re doing is web browsing, emailing, and game playing.
  • The Mac Pro was definitely designed and built for professional video producers and all the other professionals who need multiple CPU cores and GPUs to get their jobs done.
  • The Mac Pro is hard to push to its limits. Its hardware and software are so well designed and integrated that most of the time professionals see only a small percentage of their CPU and GPUs utilized.
  • There are better options for consumers, business people, and even software developers (like me). MacBook Pro, iMac, and even Mac Mini are powerful and well suited to the typical computation required by word processors, spreadsheets, image editors, and developer tools.

But I have a problem with all of the above. When I bought a Mac Pro, I didn’t buy it just for my today problems. I bought it for my tomorrow problems as well.

Because the Mac Pro is a workstation-grade computer that runs cool, it’s going to last a long, long time. Heat and the buildup of dust are the enemies of computer durability. Computation creates a lot of heat, and that heat warps computer components. Heat also attracts dust particles that stick to these components. I don’t know about you, but my personal computer runs 24/7 (like I do). I don’t ever want to turn it off because I’m always in the middle of two or three mission-critical projects.

Because the Mac Pro is modular and designed by Apple to be easy to upgrade, it can be a computer for many different types of users. I’m not the kind of professional who is going to chew through 28 CPU cores and 1.5 terabytes of data (ordinarily). This is why I bought the entry-level Mac Pro with 8 CPU cores, one GPU, and a quarter of a terabyte of storage. Today, I’m a lightweight. Once in a while I edit a video or render a 3D model. Usually I write words, draw diagrams, present slides, and compile code. Tomorrow is another story. Maybe I’ll get into crypto or machine learning; maybe I’ll get into AR or VR. I don’t like limits. I don’t like to buy computers with built-in limitations.

It is true that I am not pushing the Mac Pro very hard at the moment. But the Mac Pro is much faster than the Mac Mini I replaced. Geekbench says that a far less expensive Mac Mini is faster for single-core work than an entry-level Mac Pro. I’m sure those benchmarks are true. But software doesn’t work with just a single core any more. Almost all modern software uses multiple threads of execution to save time and boost performance. Your web browser does this when loading a page and rendering images or playing video. Your word processor does this. Your developer tools do this. Everything I do with my Mac Pro happens faster than it did with my Mac Mini. I’m getting more done and spending less time waiting for files to load, images to render, and code to compile. Maybe it’s only 10% faster, but over time that time saving adds up.

It is true that I don’t use the Mac Pro for every task. Sometimes I’m on the road (although not recently, because of this virus situation) and a MacBook Pro is the only option. Sometimes an iPhone, Apple Watch, or iPad Pro is the better option. But when the task requires me to sit for hours in the same position in the same room, the Mac Pro is the best option. Now that I have a Mac Pro I realize I was misusing my other computers. iPhones are not great for writing 70-page documents. You can do it, but it’s not great.

Most of my life I felt I had to go with the budget option. But I’ve always found the budget option to be barely worth it over the long run. If I keep this Mac Pro for five to ten years, it will become the budget option. Otherwise, the budget option is to buy a cheap computer every 2-3 years. Over time the costs of those cheap computers add up to serious money.

Yes, it’s a risk to bet that the Mac Pro will last, and still be relevant, for five to ten years. Won’t we have quantum computers with graphene nanobots by then?

Maybe, but I (most likely) will still be using the same von Neumann type of computer in ten years that I was using ten years ago. I think most of us will continue to use personal computers for work and play, just as we will still need to type with our fingers and see images on a screen with our eyes.

Based on my analysis (see below) a Mac Pro gets less expensive over time as its upgrade components fall in price and the cost of a total replacement is avoided.

[Figure: Mac Pro cost projection over 10 years vs. a custom-built PC and a Dell]

In the past I’ve found I’ve needed a new computer every two years. Why? The applications I use get more sophisticated, the components become outdated, and there are security flaws that need to be addressed that the OS alone can’t fix. And sometimes the computer just freezes up or fizzles out. With the Mac Pro I’m betting that instead of replacing it every two years I’ll be able to update it, as needed, and that Apple’s and the industry’s storage, memory, CPU, and GPU prices will continue to fall (Moore’s Law).

In 1987 I bought a Macintosh II for almost the same price that I paid for the Mac Pro in 2020. Like the Mac Pro, that Mac II was an expandable powerhouse. It helped launch my career in software development. It didn’t last me 10 years (it was not as upgradable and modular as the Mac Pro), but I got a good five years out of it. It was a huge expense for me at the time, but as time wore on it was completely worth it. Those were five years when I had a computer that could do anything I asked of it and take me, computationally speaking, anywhere I needed to go.

Categories
Nerd Fun

RAM Disk

Slow Processing

I’m writing a book. A “user guide” for a side project. This book is ballooning to 50+ pages. You would think that today’s modern word processors could handle 50+ pages with all the CPU cores, RAM, and SSD space at a modern desktop computer’s beck and call. That is what I thought. I was mistaken.

I started writing this book with Google Docs. After about 20 pages, responsiveness became less than snappy. After about 30 pages, the text insertion point (you might call it a cursor) became misaligned with the text at the end of the document.

This is not Google’s fault. Google Docs is a tour de force of HTML5 and JavaScript code that plugs into a web browser’s DOM. It works amazingly well for the short documents you would create in a homework or business setting. But my book is a tough cookie for Google Docs. I had subscripts and superscripts, monospaced and variable-spaced fonts. I had figures, tables, page breaks, and keep-with-next styling. In today’s WYSIWYG Unicode word processing world, it’s tough to calculate line lengths and insertion point positions the deeper into the document one goes.

So naturally I reached for my trusty copy of Microsoft Word. This is MS Word for Mac 16.35. I have been a proud owner of MS Word since the 1990s, when I knew members of the Mac Word engineering team personally.

Word handled the typography of my now 60-page document without any WYSIWYG errors. But it was sweating under the heavy load of scrolling between sections, search and replace, and my crazy non-linear editing style. Word was accurate but not snappy.

I read that many writers prefer to use ancient DOS or UNIX-based computers to write their novels. Now I know why. I want the whole document loaded into memory at once. I need to fly through my document without speed bumps or pauses as its chunks are loaded and unloaded from disk into RAM. But I also want typography turned on and accurate. I’m not writing a novel with only words painting the pictures in the reader’s mind. I’m writing a technical book about algorithms, and I need to illustrate concepts that quickly become jargon salad without visual representation.

Fooling the Apps

Then a solution out of the DOS and UNIX past hit me! I needed a RAM disk to accelerate Word. A RAM disk is a hard disk made not of a spinning platter or even solid-state flash but of pure volatile RAM!

There are several types of memory available to your operating system, classified by how fast and reliable they are. Your CPU can access caches for popular instructions. Your apps can access physical and virtual memory for popular chunks of documents. Your operating system can access local and remote storage to load and save files. In modern computer systems, tricks are used to fool apps and the operating system into thinking that one kind of memory or storage is some other kind.

This is what a RAM disk is. It’s a kind of trick where the operating system mounts a volume as a normal hard disk, but that volume is a temporary illusion. When you turn off your computer, a RAM disk disappears like a rainbow when the air dries up.

RAM disks are risky, because your computer could lose power at any moment, but they speed up applications like Word. Word was originally written in the days when memory was limited and typography was simple. Large documents could not fit into the RAM available. Word evolved to page parts of a document that were not being used in and out of memory behind the scenes, to make room for the part of the document being edited. This clever scheme made it possible to work on documents hundreds of pages long while displaying their contents with multiple styles and dynamic features like ligatures and spelling markup.

But why do I need to fool Word into thinking the disk it is running on is one kind of medium when it is another?

App Traditions

It’s been more than a decade since RAM got cheap and Unicode became standard. But most computer operating systems and applications are still written in the old paradigm of scarce memory and plentiful storage.

Most word processing mavens will tell you that hard disks are super fast these days and most computers have more RAM than they really need. And this is true, mostly. But try to get your apps to take advantage of those super fast disks and plentiful RAM! It’s not easy!

As a test, I tried to use all 32 GB of RAM in my Mac Mini. I loaded every app and game on my drive. I loaded every large document and image. I loaded all the Microsoft, Apple, and Adobe apps. The closest I could get was 22 GB. There was this unapproachable 10 GB of RAM that I could not use. The operating system and all these apps were being good collaborative citizens! They respectfully loaded and unloaded data to ensure that 10 GB was available in case of a memory emergency. I had no way to tell these apps it was OK to be rude and pig out on RAM.

I had to fool them!

App Acceleration

To create a RAM disk in macOS you need to be familiar with UNIX and the Terminal. You don’t need to be an expert, but this is not for the faint of heart. This GitHub Gist explains what you need to do. I created a 10 GB RAM disk with that unapproachable 10 GB squirreled away in my Mac Mini using the following command line:

diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nobrowse -nomount ram://20971520`
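Where does the magic number 20971520 come from? As I understand it, hdiutil’s ram:// URL takes a size in 512-byte sectors, so a 10 GB disk works out to 10 × 1024³ ÷ 512 sectors. A quick sanity check of the arithmetic:

```shell
# Sanity check: ram:// sizes are given in 512-byte sectors
# (my reading of hdiutil's behavior), so a 10 GB RAM disk needs:
SIZE_GB=10
SECTORS=$((SIZE_GB * 1024 * 1024 * 1024 / 512))
echo "$SECTORS"    # prints 20971520
```

To size the disk differently, change SIZE_GB and substitute the resulting sector count into the ram:// URL above.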

10 GB is enough to run most apps and their docs but not for the big AAA games or Xcode. 10 GB was more than fine for Word and my 60-page document.

[Screenshot: 10.75 GB RAM disk with MS Word and two docs]

The results have been amazing. Word rides my document like a Tesla Roadster as I jump around editing bits and bytes in my non-linear, unpredictable fashion.

After each editing session I just drag my documents to a safe location on my hard disk. I almost never need to reboot or turn off my Mac Mini. macOS Catalina has been rock solid for me. I’ve not lost any work and the RAM disk just hangs around on my desktop like a regular disk.

When I get around to it, I will write a script to create and load up the RAM disk and save the work with a shortcut. This setup has been so stable that I’m not in any hurry.

Now I want to test a Mac with hundreds of GB of RAM. An iMac can be loaded up with 128 GB! A Mac Pro can handle up to 1.5 TB! A RAM disk might be a much bigger performance improvement than an SSD drive or a fast CPU with a dozen cores. And GPUs are not much help in processing text or numbers or even slides!

Categories
Uncategorized

Virus and Science

Illustration by Henrique Alvim Corrêa, from the 1906 Belgian (French-language) edition of H.G. Wells’ “The War of the Worlds”. A scan from the book.

Like many, my life has been disrupted by this virus. Honestly, I don’t want to even acknowledge this virus. The only virtue of the Coronavirus is that it should now be widely apparent that we, humanity, are all in the same boat, and that boat is fragile.

In The War of the Worlds, published in 1898, H.G. Wells wrote about a technologically advanced species invading the Earth and destroying its native inhabitants. No forces the earthlings could muster could stop the aliens and their machines. In the final hour, when all hope for the Earth was lost, the “Martians-dead!-slain by the putrefactive and disease bacteria against which their systems were unprepared; slain as the red weed was being slain; slain, after all man’s devices had failed, by the humblest things that God, in his wisdom, has put upon this earth.”

I just want to note that in the world of today we are the Martians. We are technologically advanced, bent on remaking the world, and yet somehow unprepared for the task.

I believe we are unprepared because our political, business, and cultural systems have not kept up with the advances of technical change. I do not believe we should go back to living like hunter-gatherers or the Amish (even the Amish get vaccinated these days). I do believe we should take a breath and catch up with our creations.

The Coronavirus was not created by technology (in spite of the conspiracy theories). Mother Nature is just doing what she always does, evolving her children and looking for opportunities for genetic code to succeed. This is evolution in action, and we see it in antibiotic-resistant bacteria and the rise of insulin resistance in modern humans. One is caused by how quickly microorganisms evolve and the other by how slowly macro-organisms evolve.

We have the science and technology to handle pandemics as well as antibiotic resistance and all the rest, but we have to listen to scientists and doctors. I know that sometimes science and medicine seem to go against common sense, contradict long and deeply held personal beliefs, and have a habit of changing as new data comes in. This makes science and medicine vulnerable to ridicule, misuse, and misunderstanding.

If we start listening to scientists and doctors, instead of second-guessing and villainizing them, species-level problems like pandemics, antibiotic resistance, and global warming will not go away, but we will be able to flatten their curves. If we don’t stop acting like science is just one of many sources of truth, then even though we are mighty Martians, we will be felled under the weight of our own ignorance.

In The Age of Louis XIV Will and Ariel Durant wrote about the rise of science from 1648 to 1715, “Slowly the mood of Europe, for better or worse, was changing from supernaturalism to secularism, from the hopes of heaven and fears of hell to plans for the enlargement of knowledge and the improvement of human life.”

Are we stuck in the 17th century or can we move on and accept that we’re living in the 21st?

Categories
Management & Leadership

No Modes

Larry Tesler died this week. He was one of my idols at Apple Computer in the 1990s. A brilliant thought leader and champion of the idea that modes are a bad user experience.

A mode is a context for getting work (or play) done. In the early days of computers, before graphical user interfaces, applications were broken into “operational modes” such as edit, navigate, and WYSIWYG. Key commands would perform different actions in different modes. To be a great computer user, you had to memorize all the modes and all the corresponding key sequences. Modality made software easier to write but made computers harder to learn and use.

Larry Tesler was a visionary who focused on making the software do the hard work, not the user. The Apple Lisa, Macintosh, and Newton were great examples of modeless computing, as was Microsoft Windows.

Some folks, developers like me, will tell you that modal software is better. Once you get over the hurdle of memorizing the modes and commands, your fingers never have to leave the keyboard. And with modal software, as they will enthusiastically explain, you can easily perform power-user operations like repeating commands and global pattern matching. I think this is true for software developers and maybe true as well for lawyers or novelists. Modal tools like Emacs and Vim make big file tasks fast and simple.

The alternative to modal software for large document management is something like MS Word. Many users think MS Word is bloated and slow. Given all that MS Word does modelessly, it is a speedy racer! Most of us don’t need the power of MS Word (or Emacs or Vim) every day.

You can thank Larry Tesler for championing the idea that modes are not required for most users most of the time. Thus you can grab your phone and just start typing out a message and get a call without saving your message. After the call is complete you can go back to your typing. If you want you can multitask typing and talking at the same time (hopefully you are not driving).

Behind the scenes, your phone is doing an intricate dance to enable this apparent modelessness. The message app is suspended and the message is saved, just in case the app crashes. The call app comes to the front and takes over the screen. During the call you can return to the message app while the call is running in the background. Other apps suspend to make room for the message app and call app to operate at the same time. Before Larry Tesler, it was not uncommon for the user to have to do all this coordination manually.

To enable modeless software, applications have to share resources and the operating system has to help apps know when and what to do. In the old days this was called “event driven multitasking”. Now it’s just called software development.

How did Larry accomplish all this? Well, he wasn’t alone. But he worked hard, advocating for the user at Apple even when modeless software drove up costs. He even had a few minutes to spend with a junior employee like me. He wanted to make sure I understood the value of a great user experience. And it worked! I supported OpenDoc, the ultimate modeless user experience, and I made sure we had a version of ClarisWorks based on it. But alas, the Macintosh (or PC) computers of the mid-1990s just could not handle the complexity of OpenDoc, and it never shipped.

Still, to this day, I am grateful to Larry and the whole Apple Computer experience. It is the ground upon which I stand.

Categories
Uncategorized

XML and Immortal Documents

I just read Jeff Huang’s A Manifesto for Preserving Content on the Web. He makes some good suggestions (seven of them) to help keep web content available as technical progress works hard to erase everything digital that has gone before.

I don’t know if everything published to the web deserves to be saved, but much of it does, and it’s a shame that we don’t have some industry-standard way to preserve old websites. Jeff notes that the Wayback Machine and Archive.org preserve some content but are subject to the same dilemma as the rest of the web: eventually every tech dies of its native form of link rot.

For longer than I care to admit (11 years!), I’ve been posting my own thoughts to my own WordPress instance. But one day WordPress or I will depart this node of existence. I’m considering migrating to a hosted solution and something like Jekyll. That may well postpone the problem but not solve it. I could archive my words on a CD of some sort. But will my descendants be able to parse WordPress or Jekyll or any contemporary file format?

While I like the idea of printing PDFs to stone tablets from a perversity standpoint, what is really needed is a good articulation of the problem and a crowd-sourced, open-source solution.

Jeff’s first suggestion is pretty good: “return to vanilla HTML/CSS.” But which version of HTML/CSS is vanilla? The original version? The current version? Tomorrow’s version? That is the problem with living tech! It keeps evolving!

I would like to suggest XML 1.1. It’s not perfect, but it’s stable (i.e., pretty dead, unlikely to change), most web documents can be translated into it, and, most importantly, we have it already.

I know that XML is complex and wordy. I would not recommend XML for your web app’s config file format or build system’s make file. But as an archiving format I think XML would be pretty good.

If all our dev tools, from IDEs to blog editors, dumped an archive version of our output as XML, future archaeologists could easily figure out how to resurrect our digital expressions.
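As a sketch of what such an archive entry might look like (the element names and namespace here are my own invention for illustration, not any existing standard):

```xml
<?xml version="1.1" encoding="UTF-8"?>
<!-- Hypothetical archive entry; element names are invented, not a real schema -->
<archiveEntry xmlns="urn:example:web-archive">
  <source>https://example.com/blog/ram-disk</source>
  <archivedOn>2020-04-01</archivedOn>
  <title>RAM Disk</title>
  <body>
    <p>I'm writing a book. A "user guide" for a side project...</p>
  </body>
</archiveEntry>
```

The point is not this particular shape, but that any future archaeologist with an XML parser (or a text editor) could recover the words without needing WordPress, Jekyll, or a 2020-era web browser.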

As an added bonus, an archive standard based on XML would help services like Wayback Machine and archive.org do their jobs more easily.

Even better, it would be cool if we all chipped in to create a global XML digital archive. An Esperanto for the divergent digital world! We could keep diverging our tech with a clear conscience, and this archive would be the place for web browsers and search engines to hunt for the ghosts of dead links.

Now there are all sorts of problems with this idea. Problems of veracity and fidelity. Problems of spam and abuse. We would have to make the archive uninteresting to opportunists and accept some limitations. A good way to solve these types of problems is to limit the archive to text only, written in some dead language, like Latin, where it would be too much effort to abuse it (or that abuse would rise to the level of fine art).

What about the visual and audio? Well, it could be described. Just like we (are supposed to) do for accessibility. The descriptions could be generated by machine learning (or people, I’m not prejudiced against humans). It just has to be done on the fly without human initiation or intervention.

Perfect! Now, every time I release an app, blog post, or video clip, an annotated text description written in Latin and structured in XML is automagically archived in the permanent collection of human output.