Trolls Are USA

It’s clear that Americans are more divided than ever. Our self-segregating tendencies have been reinforced by the adoption of Internet technologies and algorithms that personalize our newsfeeds to the point that we walk side-by-side down the same streets in different mental worlds.

Before the web, before iPhone, Netflix, and Facebook, the physical limits of radio, television, and print technology meant that we had to share. We had to share the airwaves and primetime and the headlines because they were limited resources.

In the pre-Internet world, print was the cheapest communication medium to scale and thus the most variable in quality. Anyone with a few hundred bucks could print a newsletter, but these self-published efforts were clearly inferior to the major newspapers. You could tell yellow journalism from Pulitzer winners just by the look of the typography and the feel of the paper in your hands. This was true of books and magazines as well. Quality of information was for the most part synonymous with quality of production.

To put on a radio or TV show you had to be licensed, and you needed equipment and technical skills from unionized labor. Broadcast was more resource-intensive than print, and thus more controlled, and thus more trusted. In 1938 The War of the Worlds radio drama fooled otherwise skeptical Americans into believing they were under attack by Martian invaders. The audience was fooled because the show was presented not as a radio play but as a series of news bulletins breaking into otherwise regularly scheduled programming.

The broadcast technologies of the pre-social media world coerced us into consensus. We had to share them because they were mass media, one-to-many communications where the line between audience and broadcaster was clear and seldom crossed.

Then came the public Internet and the World Wide Web of decentralized distribution. Then came supercomputers in our pockets with fully equipped media studios in our hands. Then came user-generated content, blogging, and tweeting, such that there were as many authors as there were audience members. Here the troll was born.

Before the Internet the closest we got to trolling was the prank phone call. I used to get so many prank phone calls as a high schooler in the 1970s that I simply answered the phone with a prank: “FBI HQ, Agent Smith speaking, how may I direct your call?” Makes me crack up to this day!

If you want to blame some modern phenomenon for the results of the 2016 presidential election, and not the people who didn’t vote, or the flawed candidates, or the FBI shenanigans, then blame the trolls. You might think of the typical troll as a pimply-faced kid in his bedroom with the door locked and the window shades taped shut but those guys are angels compared to the real trolls: the general public. You and me.

Every time you share a link to a news article you didn’t read (which is something like 75% of the time), every time you like a post without critically thinking about it (which is almost always), and every time you rant in anger or in anxiety in your social media of choice you are the troll.

I can see that a few of my favorite journalists and Facebook friends want to blame our divided culture, the spread of misinformation, and the outcome of the election on Facebook. But that’s like blaming the laws of thermodynamics for a flood or the laws of motion for a car crash. Facebook, and social media in general, was the avenue of communication, not the cause. In technology terms, human society is a network of nodes (people) and Facebook, Google, and Twitter are applications that provide easy distribution of information from node to node. The agents that cause information to flow between the social network nodes are human beings, not algorithms.

It’s hard not to be an inadvertent troll. I don’t have the time to read and research every article that a friend has shared with me. I don’t have the expertise to fact-check and debunk claims outside of my area of expertise. Even when I do share an article about a topic I deeply understand, it’s usually to get a second opinion.

From a tech perspective, there are a few things Facebook, Google, and Twitter can do to keep us from trolling each other. Actually, Google is already doing most of these things with its PageRank algorithm and quality scores for search results. Google even hires human beings to test and verify those search results. Thus, it’s really hard for us to troll each other with phony web pages claiming to be about cats when dogs are the topic. Kudos to Google!

The following advice is for Facebook and Twitter from an admiring fan…

First, hire human editors. You’re a private company, not a public utility. You can’t be neutral, you are not neutral, so stop pretending to be neutral. I don’t care which side you pick, just pick a side, hire some college-educated, highly opinionated journalists, and edit our news feeds.

Second, give us a “dislike” button and along with it “true” and “false” buttons. “Like” or “retweet” are not the only legitimate responses that human beings have to news. I like the angry face and the wow face but those actions are feelings and thus difficult to interpret clearly in argumentation and discourse. Dislike, true, and false would create strong signals that could help drive me and my friends to true consensus through real conversations.

Third, give us a mix of news that you predict we would like and not like. Give us both sides or all sides. And use forensic algorithms to weed out obvious trash like fake news sites, hate groups with nice names, and teenagers pretending to be celebrities.

A/B test these three ideas, and better ones, and see what happens. My bet is social media will be a healthier place, but a smaller place, with less traffic driven by the need to abuse each other.

We’ll still try to troll the hell out of each other but it will be more time consuming. Trolling is part of human nature and so is being lazy. So just make it a little harder to troll.

Before social media our personal trolling was limited to the dinner table or the locker room. Now our trolling knows no bounds because physical limits don’t apply on the Internet. We need limits, like spending limits on credit cards, before we troll ourselves to death.

Notes on NSUserPreferences

You can set and get NSUserPreferences from any view controller and the app delegate, so they are a great way to pass data around the various parts of your iOS app.

Note: NSUserPreferences don’t cross the iOS/watchOS boundary. iOS and watchOS apps each have their own set of NSUserPreferences.

In the example below you have a class Bool property that you want to track between user sessions.
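Here’s a minimal sketch of what that looks like in practice. (The showAll property and the savedShowAll key are the ones described in the notes below; the view controller wrapper and the saveShowAll() method are just for illustration.)

    import UIKit

    class SettingsViewController: UIViewController {

        // Data model for the switch's value
        var showAll = false

        // Key for the stored value
        let savedShowAll = "savedShowAll"

        override func viewDidLoad() {
            super.viewDidLoad()
            // The stored value might not exist yet, so use the if let idiom
            if let storedShowAll = NSUserDefaults.standardUserDefaults().objectForKey(savedShowAll) as? Bool {
                showAll = storedShowAll
            }
        }

        // Call this whenever the switch changes
        func saveShowAll() {
            NSUserDefaults.standardUserDefaults().setObject(showAll, forKey: savedShowAll)
        }
    }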

In the code above…
– The var showAll is the data model for a switch object’s value
– The string savedShowAll is the key for the stored value
– Use NSUserDefaults.standardUserDefaults().objectForKey() to access a stored value
– Use the if let idiom as the stored value might not exist
– Use NSUserDefaults.standardUserDefaults().setObject() to save the value
– Apparently setObject() never fails! 😀

Faceless Phone

About twelve years ago I attended a management leadership training offsite and received a heavy glass souvenir. When I got home after the event I put that thingamabob, which officially is called a “tombstone”, up on a shelf above my desk. Little did I know that after more than a decade of inert inactivity that souvenir would launch me into the far future of the Internet of Things with an unexpected thud.

Last night before bed I set my iPhone 6 Plus down on my desk and plugged it in for charging. Then I reached up to the shelf above to get something for my son and BANG! The tombstone leapt off the shelf and landed on my desk. It promptly broke in half and smashed the screen of my iPhone. In retrospect I see now that storing heavy objects above one’s desk is baiting fate and every so often fate takes the bait.

I’ve seen many people running around the streets of Manhattan with cracked screens. My screen was not just cracked. It was, as the kids say, a crime scene. I knew that procrastination was not an option. This phone’s face was in ruins and I needed to get it fixed immediately.

No problem! There are several wonderful Apple Stores near me and I might even have the phone covered under Apple Care. Wait! There was a problem! I had several appointments in the morning and I wasn’t getting to any Apple Stores until late afternoon.

Why was this a big deal? Have you tried to navigate the modern world without your smart phone lately? No music, no maps, no text messages! Off the grid doesn’t begin to cover it! My faceless phone was about to subject me to hours of isolation, boredom, and disorientation!

Yes, I know, a definite first-world problem. Heck! I lived a good 20 years before smart phones became a thing. I could handle a few hours without podcasts, Facebook posts, and Pokemon Go.

In the morning I girded my loins, which is what one does when one’s iPhone is smashed. I strapped on my Apple Watch and sat down at my desk for a few hours of work-related phone calls, emails, and chat messages.

Much to my surprise, even though I could not directly access my phone, almost all of its features and services were available. While the phone sat on my desk with a busted screen its inner workings were working just fine. I could make calls and send text messages with my watch, with my iMac, and with voice commands. I didn’t have to touch my phone to use it! I could even play music via the watch and listen via Bluetooth headphones. I was not cut off from the world!

(Why do these smart phones have screens anyway?)

Around lunch time I had to drive to an appointment and I took the faceless phone with me. I don’t have Apple CarPlay but my iPhone synced up fine with my Toyota’s entertainment system. Since I don’t look at my phone while driving the cracked screen was not an issue. It just never dawned on me before today that I don’t have to touch the phone to use it.

I imagine that our next paradigm shift will be like faceless phones embedded everywhere. You’ll have CPUs and cloud access in your wrist watch, easy chair, eye glasses, and shoes. You’ll have CPUs and cloud access in your home, car, office, diner, and shopping mall. You’ll get text messages, snap pictures, reserve dinner tables, and check your calendar without looking at a screen.

Now, we’re not quite there yet. I couldn’t use all the apps on my phone without touching them. In fact I could only use a limited set of the built-in apps and operating system features that Apple provides. I had to do without listening to my audiobook on Audible and I couldn’t catch any Pokemon. Siri and Apple Watch can’t handle those third party app tasks yet.

But we’re close. This means the recent slowdown in smart phone sales isn’t the herald of hard tech times. It’s just the calm before the gathering storm of the next computer revolution. This time the computer in your pocket will move to the clouds. Apple will be a services company! (Google, Facebook, and Amazon too!) Tech giants will become jewelry, clothing, automobile, and housing companies.

Why will companies like Apple have to stop making phones and start making mundane consumer goods like cufflinks and television sets to shift us into the Internet of Things?

Because smooth, flawless integration will be the new UX. Today user experience is all about a well designed screen. In the IoT world, which I briefly and unexpectedly visited today, there won’t be any user interface to see. Instead the UX will be embedded in the objects we touch, use, and walk through.

There will still be some screens. Just as today we still have desktop computers for those jobs that voice control, eye rotations, and gestures can’t easily do. But the majority of consumers will use apps without icons, listen to playlists without apps, and watch videos without websites.

In the end I did get my iPhone fixed. But I’m going to keep visiting the IoT future now that I know how to find it.

On the Naming of Functions

A thoughtful coder once said that “it’s more important to have well organized code than any code at all.” Actually several leading coders have said this. So I’ll append my name to the end of that long linked list.

I’m trying to develop my own system for naming functions such that it’s relatively obvious what those functions do in a general sense. Apple, Google, Microsoft, and more all have conventions and rules for naming functions. Apple’s conventions are the ones I know best. For some reason Apple finds the word “get” unpleasing while “set” is unavoidable. So you’ll never see getTitle() as an Apple function name but you will see setTitle(). This feels a little odd to me, as title() could be read as either setting or getting a title, while getTitle() clearly does one job only. I know that title() without an argument can’t set anything, but I’m ok with the “get” all the same.

So far I’m testing out the following function naming conventions:

  • calcNoun(): dynamically calculates a noun based on the current state of internal properties
  • cleanNoun(): returns a junk-free normalized version of a noun
  • clearNoun(): removes any data from a noun and returns it to its original state
  • createNoun(): statically synthesizes a noun from nothing
  • updateNoun(): updates the data that a noun contains based on the current state of internal properties
  • getNoun(): dynamically gets a noun from an external source like a web server

As you can see I like verbs in front of my nouns. In my little world functions are actions while properties are nouns.

calcNoun(), createNoun(), and getNoun() are all means of generating an object, each with a semantic signal about the process of generation.

cleanNoun() returns a scrubbed version of an object as a value. This is really best for Strings and Numbers which tend to accumulate whitespace and other gunk from the Internet and user input.

clearNoun() and updateNoun() are both means of populating the data that an object contains, and each signals the end state of the process. (Maybe I should have one update function and pass in “clear” data, but many times clearing is substantially different from updating.)
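To make the conventions concrete, here’s a rough sketch of how they might look on a made-up Article type (the type, the properties, and the 200-words-per-minute guess are all just for illustration):

    import Foundation

    struct Article {
        var title = ""
        var body = ""
        var wordCount = 0

        // calcNoun(): dynamically calculates a value from the current state of internal properties
        func calcReadingTime() -> Int {
            return max(1, wordCount / 200)
        }

        // cleanNoun(): returns a junk-free, normalized version of a value
        func cleanTitle() -> String {
            return title.stringByTrimmingCharactersInSet(NSCharacterSet.whitespaceAndNewlineCharacterSet())
        }

        // clearNoun(): removes any data and returns the noun to its original state
        mutating func clearBody() {
            body = ""
            wordCount = 0
        }

        // createNoun(): statically synthesizes a noun from nothing
        static func createArticle() -> Article {
            return Article()
        }

        // updateNoun(): updates the data a noun contains based on the current state of internal properties
        mutating func updateWordCount() {
            let whitespace = NSCharacterSet.whitespaceAndNewlineCharacterSet()
            wordCount = body.componentsSeparatedByCharactersInSet(whitespace).filter { !$0.isEmpty }.count
        }
    }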

I hope this helps my code stay organized without wasting my time trying to map the purpose of a function to my verb-noun conventions!

Code Markup in Xcode


I’m working on a fairly large Swift project. Actually no, that’s not quite true. I’m working on a Swift project with a ViewController file that is getting disorganized and out of control. If this keeps up I might have a large project on my hands but right now it’s just a single file that is getting larger than I would like.

Apple provides some quick and dirty tools that make it easy to navigate a single file with specially formatted comments in your code. This functionality doesn’t provide automated documentation like Headerdoc. And that’s fine with me. I like how Headerdoc has become a mash up of Markdown and JavaDoc. My code is just not stable enough for documenting yet.

Happily Xcode’s built-in special comment parser is enough in the early stages of development to help me navigate a large file and remember where the bodies are buried.

Xcode supports the following out of the box:

  • MARK: (your text here)
  • MARK: - (section divider)
  • ???: Question
  • !!!: Warning
  • TODO: Task
  • FIXME: Bug

Xcode’s special comments mark up the function navigation pop-up menu so that you can find your questions, warnings, tasks, and bugs in your code without overtaxing the private neural network in your skull. Unfortunately you can’t add new special comments and they don’t show up in the Symbol Navigator.

(Using the MARK: comment you can simulate adding your own special comments. MARK: doesn’t add the word MARK: in front of navigation items in the way that the other special comments do (TODO, FIXME, etc.). So you can use MARK: NOTE to navigate to notes in your Swift code if that makes you happy.)
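Here’s roughly how those comments look in a Swift file (the class and function names are just placeholders):

    import UIKit

    class ViewController: UIViewController {

        // MARK: - View lifecycle

        override func viewDidLoad() {
            super.viewDidLoad()
            // TODO: Wire up the real data model
            // FIXME: Table scrolls past the last row on iPad
            loadSettings()
        }

        // MARK: NOTE: These helpers belong in their own file eventually

        func loadSettings() {
            // ???: Should this run on a background queue?
            // !!!: Do not call this before viewDidLoad
        }
    }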

I use the following additional special comments to keep my code organized and consistent. (Xcode will just ignore them unless I prefix each with MARK:)

  • NOTE: (when the function name is not enough)
  • HINT: (a non-obvious reminder about a bit of code)
  • DBUG: (end of line comment marking code that probably should be removed eventually)
  • DEMO: (example usage)

It would be nice if Apple allowed us to personalize code markup in Xcode. But only after search and ranking in the App Store are fixed and a thousand other higher priorities are done!

The Quiet Car

I ride a commuter train to and from work every day, and occasionally I accidentally, regrettably, end up sitting in the quiet car.

If you’re not a commuter you might be unacquainted with the idea of a quiet car. It is what it says it is: a train car where you are supposed to be quiet. No talking. No phone ringing. No music leaking out of your headphones. I call it the train car of silent tension.

A few years ago NJ Transit declared the first and last cars of all morning and evening commuter trains to be quiet cars. They had little signs printed up that read “Quiet Commute” with the “mute” in “commute” highlighted.

I don’t think NJ Transit invented the idea of the quiet car. But their conductors and passengers, well some of them, love to enforce it. Violate the rules in the quiet car and several self-appointed quiet car monitors will put you in your place with a tone of voice so sternly condescending that your Victorian great-grandmother would be right at home.

My problem with the quiet car is that somebody always breaks the rules and gets scolded. And I’m just not the sort of guy who enjoys the sight of one human being being a righteous jerk to another human being. The quiet car is the only place I’ve ever been where it’s ok for adults to act like conceited little kindergarteners.

I can’t concentrate or relax in the quiet car because I’m just waiting for some poor oblivious victim to innocently answer a call, make a comment to a friend, or forget to turn the volume down on their phone.

I think people ride the quiet car not for the quiet but for the chance to rebuke the guilty who transgress the sacred decree of the car of silence. “Thou hast made a peep and thou shalt be most vigorously censured!”

I only ride the quiet car when I have no choice, when the rest of the train is full, when I find myself in not so quiet desperation for a seat.

I’d like to observe that quiet cars were probably a great idea in the 1950s or 60s. But now we have inexpensive headphones. Instead of making everyone uncomfortable you can just pop a pair of headphones on your cranky, Victorian-minded, gray-haired noggin and listen to soothing national anthems or the sounds of suburban lawns growing. With the marvelous invention of headphones you can allow the rest of us to catch up with a friend, take an important call, or just take a nap without having to fear a sudden outburst of “Sir! Sir! Miss! Miss! This is the QUIET CAR! You can’t talk here! No Talking!”

By the way, I just want to point out that the quiet car is not only elitist but kind of classist and racist as well. Almost always the rule breaker is Italian or from a non-WASPy culture where talking is what you do when you are sitting next to a friend or family member. But in the quiet car the uptight, my-ancestors-are-better-than-your-ancestors people rule.

If we must have a quiet car, and it seems they are not going away, then I must insist that we have a shouting car. It’s only fair. In the shouting car people can let out all that tension built up from riding in the quiet car and even TYPE IN ALL CAPS while texting.

 

C Plus Minus

While consuming Handmade Hero and coding furiously to keep up with Casey Muratori I discovered the joy of programming in a language that I deeply understand. This is not one of those new trendy programming languages that tries to be type-safe without explicit types or functional without being confusing. And yet all the new hot/cool programming languages are based on this ur-language. Swift, TypeScript, Go, C++14, and Java 8 are all “C-like” languages, and the original “C-like” language is a lingo that we used to call C+- (C Plus Minus).

I probably like C because it was the first non-toy programming language that I used to program a real personal computer. In the late 1980s all the home computers came with BASIC (which is best SHOUTED in CAPS). But once I got a true personal computer, a Macintosh 512Ke, that could run real applications I had to buy a real programming language to write those real applications. For a couple of months that real language was Pascal… but C rapidly took over. By the time I got to Apple in the early 1990s C++ was about to push C out of the way as the hot new programmer’s tool.

We have this same problem today. There is always another more productive, safer, more readable programming language around the corner. If you code on the backend for a living you’re probably thinking about Go or Rust. If you code on the front end you’re ditching CoffeeScript for TypeScript or just sticking with JavaScript until the next version, ECMAScript 6, shows up in your minimum target browser.

But I’ve been traveling back in time and happily coding away with access to pointers and pointer arithmetic, pound defines, and user-defined types. It’s not plain vanilla C because, like Casey, I’m compiling my code with a modern C++ compiler. I’m just not using 90% of C++’s features. Back in the 1980s/90s we called this language C+-. Back then only some of the C++ standard had been implemented in our compilers. We had classes but not multiple inheritance. (Later we learned that multiple inheritance was bad, or at least in poor taste, so not having access to it was ok.) We only had public and private members. (Protected members aren’t actually useful unless you’re working on a big team or writing a framework. We were writing small apps in small teams.) We had to allocate memory on the heap and dispose of it. So we allocated most of what we needed up front and sub-allocated it. We didn’t have garbage collection, we didn’t even know about garbage collection, so we couldn’t feel bad. We felt powerful.

Now that I’ve been writing in C+- for a few weeks I feel like Superman. Or maybe Batman. Your pick. I have just a few tools in my tool belt but I know how to use them. In the modern world Swift 3.0 is thinking of getting rid of the ++ operator and the for(;;){} loop. I use those language features every day, usually together: for(i = 0; i < count; i++) {}. I am told these things are ugly. They seem like familiar old friends to me!

One thing I really like is that I can access a value and increment a pointer with one pretty little expression: *pointer++. I like thinking in bytes and bits and memory addresses. And I like how fast my little programs run and how small their file sizes are.

I know I should not like all these things. Raw access to memory is dangerous. &-ing and |-ing bits is probably dangerous too. My state is not safely closured and side-effects abound. But modern C++ compilers and tools like GCC and Clang do a pretty good job of catching memory access errors these days. It was much more dangerous back in 1986 when I first started.

Maybe I’m just nostalgic. But while you are learning Swift or TypeScript to write web and mobile apps, the operating system your computer runs (Mac OS X, Windows, Linux) was written in C+-. The web browser (Safari, Firefox, or Chrome) that renders your HTML, CSS, and JS was written in C+-. That awesome AAA game and Node.js were written in C+-. (Some parts C, some parts C++, and some parts Assembly as needed.)

C+- is the Fight Club of computer languages: Nobody talks about it, it doesn’t have official status, and groups of self organizing coders beat each other up with it every day.

Binge Watching Handmade Hero


For the last several weeks I’ve been obsessed with one TV show. It’s changed my viewing habits, my buying habits, and my computing habits. Technically it’s not even a “TV show” (if your definition of that term doesn’t include content created by non-professionals that is only available for free over the Internet).

But for me, a more or less typical Gen-Xer, Handmade Hero by game tool developer Casey Muratori has me totally enthralled as only must see TV can enthrall. I’m hooked and I simply must watch all 256+ episodes of Handmade Hero before I die (in about 1,406 Saturdays according to the How Many Saturdays app).

So first off let me explain a few things. Unless you are an aspiring retro game programmer or aging C/C++ programmer Handmade Hero will seem tedious at best and irrelevant at worst. There are much better and more modern ways to make a video game (like SpriteKit on iOS or Unity on any OS) but Casey promises to demonstrate live on Twitch.TV how to write a complete video game from scratch, without modern frameworks, that will run on almost anything with a CPU. He’s starting with Windows but promises Mac OS X, Linux, and Raspberry Pi.

This is a bold promise! When I first heard of Handmade Hero, almost 2 years ago, I ignored it. I didn’t know who Casey Muratori was and the Internet is littered with hundreds of these solo projects that tend to fizzle out like ignobly failed Kickstarter projects.

But a comment on Hacker News caught my eye about a month ago. Casey had delivered hundreds of hours of live coding with explanations of arcane C, Windows, and video programming techniques! It’s all archived on YouTube and he’s still streaming almost every night! Awesomesauce!

So I had to check it out. I started with Casey’s first video, Intro to C on Windows, and ate it up. I had to pound through the rest of that week’s archive. Because I have a family and a very demanding job and kids and cats I had to purchase a subscription to YouTube Red so that I could watch Casey’s videos on or offline. Google is getting $10 a month off me because of Casey!

My keyboarding fingers ached to follow along coding as Casey coded. I used to be a C/C++ programmer. I used to do pointer arithmetic and #DEFINEs and even Win32 development! Could I too write a video game from scratch with no frameworks? I had to buy a Windows laptop and find out! Thus Dell got me to buy a refurbished XPS 13 because of Casey!

Even Microsoft benefited. I subscribed to Office 365 for OneDrive so I could easily backup my files and use the Office apps since I’m keeping my MacBook Pro at the office these days. I have discovered that a Windows PC does almost everything a MacBook does because of Casey!

I usually have less than an hour a day to watch TV so I’ve had to optimize my entertainment and computing environment around Handmade Hero because at this rate I will never catch up to the live stream! But I’m having a blast and learning deep insights from a journeyman coder.

What could an old school game coder teach an old battle-scarred industry vet like me? More than I could have imagined.

First of all, Casey is an opinionated software developer with a narrow focus and an idiosyncratic coding style. He is not wasting his time following the endless trends of modern coding. He is not worried about which new JavaScript dialect he is going to master this month or which new isomorphic web framework he is going to wrestle with. He codes in C with some C++ extensions, he uses Emacs as his editor, he builds with batch files, and he debugs with Visual Studio. While these tools have changed over the years Casey has not. He is nothing if not focused.

Thus Casey is a master of extemporaneous coding while explaining, the kind that every software engineer fears during Google and Facebook interviews. This means Casey has his coding skills down cold. He is unflappable.

Casey doesn’t know everything and his technique for searching MSDN while writing code shows how fancy IDEs with auto-completion are actually bad for us developers. He uses the Internet (and Google search) not as a crutch to copy and paste code but as a tool to dig deep into how APIs and compilers actually work. There seems to be nothing Casey can’t code himself.

Casey makes mistakes and corrects himself. He writes // Notes and // TODOs in his code to follow up on, as if he is working with a team. Casey interacts with his audience at the end of every stream and is not shy about either dismissing their questions or embracing them. Casey is becoming a better, more knowledgeable programmer before our eyes and we’re helping him while he is helping us.

Casey is not cool or suave on camera. He swigs almond milk and walks away off screen to get stuff during the stream. But nothing about Handmade Hero would be substantially improved if Casey hired a professional video production team. In point of fact, any move away from his amateur production values would be met with suspicion from his audience. Any inorganic product placement would fail. Dell, Microsoft, and Google should support him but stay the heck away lest they burst the bubble of pure peer-to-peer show-and-tell that surrounds Casey.

I have 249 videos to go (and Casey has not stopped making videos)! I still don’t know if he delivers on his promise and creates an actual video game from scratch. (Please! No spoilers!) But I already know far more than I did about real-world game development, where the gritty reality of incompatible file systems and operating platform nuances make Object Oriented Programming and interpreted bytecode luxuries a working developer can’t afford.

 

Most Improved Award for Windows 10

If there were an award for most improved in the world of tech I would award it to Windows 10. While I am a daily Mac user, I am no stranger to Windows. Actually, let me correct myself. I live inside iOS, work in Mac OS, play around on Windows, and occasionally find need of an Android device. I think that makes me a good judge of where Windows 10 sits in comparison to all the major operating systems offered today. (Linux, yes I used to be into you, but Mac OS is more than enough UNIX for me.)

I’m old enough to remember when Macs were relegated to the less serious passions, graphics and science labs, while Windows machines were the sturdy beasts that bore our burdens during work. Ironically the situation seems to be reversed. If I have a job to do, that can’t be done on a phone, I need a Mac. If I want to fool around in virtual reality or inside an MMO at 60 FPS, I need a Windows PC. Windows 10 is Microsoft’s near miss at reclaiming the dull and boring world of the workhorse personal computer.

I had reason to buy a non-gaming PC laptop last week. I’m following along with Handmade Hero and since Casey Muratori is using a Windows machine to demo how to write a game from scratch I wanted to do the same. Via Amazon I bought a decent Dell XPS 13, refurbished, at a 50% discount. It’s a lot like a MacBook Air: light, a beautiful non-touch screen, and a well-constructed feel. The keyboard is a little loose compared to a MacBook’s. And like a MacBook Air the graphics card and CPU are underpowered, but it’s totally usable for software development and the processing of words, numbers, emails, and webpages. This blog post is being written on it.

Windows 10 is Microsoft’s response to Mac OS and iOS. And it’s pretty easy to see that Apple is watching closely what Microsoft is doing with Windows 10 and discovering new ways to improve Mac OS and iOS. However, Redmond has to do a better job of learning from Cupertino.

Windows 10 is innovative and interesting but has many odd holes, rough patches, and weird leftover bits from Windows of the past. It feels rushed and as if there is only a small band of engineers behind it. It’s a tad ugly as if the UX designers called out sick a few days before polishing the new look and feel. If I wasn’t a 30 year veteran of Windows and PCs I’d be lost and confused when it comes to navigating around and installing software. As it is, I’m “Binging” basic operations where on the Mac I’d just be able to wing it.

Let me give you a concrete example…

Windows 10 has a system-wide spell checking feature. While I was typing this blog post, in the sleek Edge web browser using the web-based WordPress text editor, I had to turn off Windows 10 spell checking. It was underlining entire paragraphs with red wavy lines! And yet I still have spell checking. So who is doing the spell checking if I turned it off? A mystery!

Another mystery is that at first I could not find the place to turn off spell checking in the Windows 10 Settings panel. I had to ask Cortana. She’s a nice lady and all but I pride myself on being able to find things in a computer operating system. I now know that spell checking is found under Settings->Devices->Typing. What threw me was “Devices” (that makes me think of something like a printer, a separate device) and the lack of the term “keyboard” anywhere in the UX.

It’s as if the person who designed the Windows 10 Settings panel is a young AI just figuring out object from subject and parts from wholes. I keep running into little stumbles like this as I use Windows. I’m sure there is a punch list at Microsoft with a thousand tiny little fixes that are not mission critical but would make a big difference in how the end-user’s experience of Windows 10 flows.

So, good job Microsoft. Better than I expected. Keep it up. I suggest hiring a really mean, obsessive, and uncompromising UX designer and putting her or him in charge of Windows 11.

Endangered Random Numbers

Like infinity, randomness is as easy to misunderstand as it is useful. As an added bonus infinity and randomness are interconnected. I don’t think you can have one without the other.

I’m not a mathematician but I like to think about numbers. Take a look at this series of integers: 31415926535

It might look pretty random if you’re not a number geek. It starts with “31”, the country code for the Netherlands. And the format for international phone numbers contains 11 digits. So it could be a phone number. But actually it’s one of the most famous numbers of all: Pi (3.1415926535…)

(Maybe it’s also a phone number for a mathematician in Europe. I have not tested that theory.)

Pi only looks like a random bunch of digits because we’re expressing the ratio of a circle’s circumference to its diameter in integers, and integers are bad at representing ratios. Some ratios are easily represented by integers (like 1/2, which evaluates to 0.5) but many important numbers (like Pi, e, and the square root of 2) are simply unworkable with integers.

Actually there is one number base where Pi can be easily represented by integers! Base-Pi! In base-Pi (where we are counting place values by powers of Pi) Pi is expressed as 10. But then the other numbers, like 4, end up with messy, never-ending representations. Yikes!
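If you want to play with that idea, here’s a little Swift sketch of greedy digit expansion in an arbitrary (even irrational) base. The function name and the five-digit cutoff are arbitrary, and floating point rounding makes the later digits approximate.

    import Foundation

    // Expand a positive number into digits of an arbitrary base, greedily, most significant place first.
    func digits(of value: Double, inBase base: Double, fractionDigits: Int) -> String {
        var result = ""
        var remainder = value

        // Integer part: find the highest power of the base that fits, then peel off digits.
        var power = 0
        while pow(base, Double(power + 1)) <= value {
            power += 1
        }
        while power >= 0 {
            let placeValue = pow(base, Double(power))
            let digit = Int(remainder / placeValue)
            result += String(digit)
            remainder -= Double(digit) * placeValue
            power -= 1
        }

        // Fractional part.
        result += "."
        for _ in 0..<fractionDigits {
            remainder *= base
            let digit = Int(remainder)
            result += String(digit)
            remainder -= Double(digit)
        }
        return result
    }

    print(digits(of: M_PI, inBase: M_PI, fractionDigits: 5)) // "10.00000": Pi is 10 in base-Pi
    print(digits(of: 4.0, inBase: M_PI, fractionDigits: 5))  // roughly "10.22012", and it never terminates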

Because of Pi and how hard it is to express (outside of a formula or the Greek symbol π) I have begun to doubt that any string of numbers is usefully random. If you run into 31415926535 you might say “Aha! That is the number Pi! I know what the next number is! It’s 8!”

If you can predict the next number in a series of numbers then the numbers are not random, they are well ordered and governed by some principle or function.

So what about 3958391848594819348593?

I just made up that number by typing as randomly as possible on my keyboard. Is it random?

To me 3958391848594819348593 is pretty random. But maybe it’s the ratio of an aardvark to a zebra? Or is it a prime? (Nope, it can be factored to 3 x 3 x 86441 x 508811.) Or maybe I can guess the probability of the next digit by looking at the frequency of the digits that I typed.

To make my number I only used 1, 3, 4, 5, 8, and 9. And most of the time after a “3” I typed a “9.” Given this small sample size I’d say there is a 2 in 3 chance that if I had typed another digit it would have been “9” and a 1 in 3 chance it would have been a “4.” It’s a good thing I don’t create my passwords by playing “kitty on the keyboard”.
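If you want to check that guess, a few lines of Swift will count what follows each “3” (the string is just the number typed above):

    let digits = Array("3958391848594819348593".characters)
    var followers: [Character: Int] = [:]
    for i in 0..<(digits.count - 1) where digits[i] == "3" {
        followers[digits[i + 1]] = (followers[digits[i + 1]] ?? 0) + 1
    }
    // Prints something like ["9": 2, "4": 1]: of the 3s with a following digit, two are followed by 9 and one by 4
    print(followers)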

If you use a computer algorithm as a random number generator you get “pseudo random numbers.” That is, you get numbers that look random, and are nearly random, but are produced by a non-random process, and if you know the details of that process you can generate the same numbers again. Generally the way pseudo random numbers are generated is by using a “seed” value. If you know the seed value and the formula you have the numbers. So it’s not great for passwords or anything else that needs to be unpredictable.
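Here’s a toy illustration of that seed-and-formula idea: a tiny linear congruential generator in Swift. (The constants are the well-known Numerical Recipes values; this is for illustration, not for anything that needs to be secure.)

    // A toy pseudo random number generator: the same seed produces the same "random" numbers every time.
    struct ToyRandom {
        var state: UInt32

        init(seed: UInt32) {
            state = seed
        }

        // Linear congruential generator: next = state * 1664525 + 1013904223 (mod 2^32)
        mutating func next() -> UInt32 {
            state = state &* 1664525 &+ 1013904223
            return state
        }
    }

    var a = ToyRandom(seed: 42)
    var b = ToyRandom(seed: 42)
    print(a.next(), a.next(), a.next()) // three "random" numbers
    print(b.next(), b.next(), b.next()) // the exact same three numbers, because the seed is the same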

To get a real random number from a computer you have to use some kind of noisy system, like random.org does (they use atmospheric noise). But that real random number could turn out to look non-random and be useless. For example, a true random number from random.org between “1960” and “2016” is “1990.” That is definitely a year and millions of people have it as the birth year of someone in their family. It’s probably over-used as an ATM or smart phone PIN and easily guessed.

You can’t use any number as a secure PIN that looks like a date, even if you generated it from atmospheric noise! Four digit PINs are terrible. There are only 10,000 of them (0000 to 9999) and hundreds of them look like non-random dates. 1492? 2001? 1066? All famous years to just about everyone.

In the end, to be really useful, a random number has to be generated in as random a manner as possible, it has to look and feel random, it has to be statistically random, it has to be unrelated to you personally, and it can’t be so long that it’s hard to remember or work with.

I have an intuition that the number of useful random numbers that fit the above criteria is, over time, approaching zero.