Swift Programming: Filtering vs For Loops

Swift's current version, 3.1, has come a long way from the Yet-Another-C-Based-Syntax of Swift 1.0.

One of the best features of Swift is how functional programming idioms are integrated into the core of the language. Like JavaScript, Swift lets you code in several methodologies, including procedural, declarative, object-oriented, and functional. I find it’s best to use all of them simultaneously! It’s easy to become a victim of the law of diminishing returns if you try to stick to one programming idiom. Swift is a very expressive language, and it’s economical to use different styles for different tasks in your program.

This might be hard for non-coders to understand, but coding style is critical for creating software that functions well, because a good coding style makes the source easy to read and easy to work with. Sometimes you have to write obscure code for optimization purposes, but most of the time you should err on the side of clarity.

Apple has made a few changes to Swift that help with readability in the long term but remove traditional C-based programming language syntax that old-time developers like me have become very attached to.

The most famous example was the increment operator:
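    var x = 0
    x++    // the old C-style post-increment, removed in Swift 3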

In modern Swift you have to write:
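    var x = 0
    x += 1    // add one to x, the modern Swift way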

As much as I loved to type ++ to increment the value of a variable, there was a big problem with x++! Most coders, including me, were using it the wrong way! The correct way for most use cases is:
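    ++x    // pre-increment: bump x first, then use the new value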

Most of the time the difference in side effects between ++x and x++ was immaterial, except when it wasn’t, and then it created hard-to-track-down bugs in code that looked perfectly OK.

So now I’m used to typing += to increment values even in programming languages where ++ is legal. (Also, C++ should rebrand itself as C+=1.)

Another big change for me was giving up for-loops for functional expressions like map, reduce, and filter. As a young man when I wanted to find a particular object in an array of objects I would loop through the array and test for a key I was interested in:
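The loop looked something like this (the objects array and the doSomething() helper are just stand-ins):

    for o in objects {
        if o.id == 12345 {
            // found the object I care about
            doSomething(with: o)
        }
    }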

Nothing is wrong with this code—it works. Well, actually there is a lot wrong with it:

  • It’s not very concise
  • I should probably have used a dictionary and not an array
  • What if I accidentally try to change o or objects inside this loop?
  • If objects is a lengthy array it might take some time to get to 12345
  • What if there is more than one o with the id of 12345?
  • This for-loop works, but like x++ it can be the source of subtle, hard-to-kill bugs while looking so innocent.

But I’ve learned a new trick! In Swift I let the filter expression do all this work for me!
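It looks something like this (again with a stand-in id property):

    let o = objects.filter { $0.id == 12345 }.first!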

In that single line of code o will be the first object that satisfies the test id == 12345. Pretty short and sweet!

At first, I found the functional idiom of Swift to be a little weird looking. By weird I mean it looks a lot like the Perl programming language to me! But I learned to stop being too idiomatic and to allow myself to express functional syntax as needed.

For you JavaScript or C programmers out there here is a cheat sheet to understanding how this functional filtering works:

  • let means o is a constant, not a mutable variable. Functional programming prefers constants because you can’t change them accidentally!
  • The { } represents a closure that contains a function, and Swift has special syntactic sugar that lets you omit a whole bunch of typing if the closure is the last or only parameter of the calling function. (Remember, in functional programming functions are first-class citizens and can be passed around like variables!)
  • $0 is a shortcut for the first parameter passed to your closure. So you don’t have to bother with throwaway names like temp or i, j, k, x, or y.
  • .first! is a neat way to get [0], the first element of an array. The ! means you know it can’t fail to find at least one element. (Don’t use the ! after .first unless you are 100% sure your array contains what you are looking for!)

I’m working on a new project, a game that I hope to share with you soon. The game itself won’t be very interesting. I find that I enjoy creating games more than I enjoy playing them so I’m not going to put too much effort into creating the next Candy Crush or Minecraft. But I will blog about it as I work through the problems I’ve set for myself.

The Rise and Fall of Autocorrect

I’ve turned off “auto correction” on my iPhone and it’s a godsend. I still get predictive suggestions and spelling correction. But I no longer have to fight with autocorrect and end up with wrong but similar words in my emails and texts.

When the iPhone first arrived eight years ago we needed autocorrect because we lost the keyboard. We were nervous about the loss of physical targets for our thumbs to hit. Many early iPhone reviewers complained about the perils of “typing on glass” and I still see email signatures asking my forgiveness for the author’s use of a phone without buttons.

After nearly a decade of glass typing my thumbs are well trained. I type almost as fast with two thumbs as I do with 10 fingers. Every once in a while I try one of these smart mini keyboard attachments and I’ve discovered I can’t type on a phone with real keys. Physical buttons sized to fit a modern smart phone form factor are just too cramped for my thumbs to fly like a virtuoso pianist.

Autocorrect has been slowing me down and embarrassing me for ages. It transforms “can’t” into “can” and non-western European names into insults. Autocorrect wants me to spell the name of my company, Viacom, in ALL-CAPS. I don’t understand that one. Maybe in the world where Apple’s autocorrect engine gets its data, VIACOM is the correct way to type it. But not in my world.

And that is the big issue with autocorrect. We each have our own style of spelling and grammar. These stylistic variations enrage the grammar police but give our text personality and nuance. Autocorrect enforces uniformity and hurts our ability to express our ideas in an idiomatic fashion that allows us to create personal and community languages.

Humans are born as language creation machines. We develop new words that express our POV on new ideas. We repurpose old words to express new concepts while referencing tradition. Autocorrect messes with our ability to say what we mean and mean what we say.

I’ve been communicating without the mediation of autocorrect for about a week now. I’m typing at about the same speed. I’m making fewer casual mistakes and “speaking” in my true voice. I’m not fighting with an annoyingly helpful AI trying to guess at what I mean. The only downside so far is that my “I” is no longer capitalized by magic. I had to relearn to tap the shift key.

— Typed without regrets on an iPhone with autocorrect disabled. All mistakes are my own. 

The Young President

 

I’m watching the Young Pope on HBO and I’ve been struck by the similarities to another man unexpectedly thrust into a position of power and world leadership. Yes, you got me right—our 70-year-old President Trump is acting like a young and inexperienced pope with a chip on his shoulder a mile long.

In the Young Pope, Jude Law plays Lenny, a delightfully terrifying iconoclast of the cloth. Early in the series the Cardinal Secretary of State discovers that Lenny, Pope Pius XIII, is a man of few carnal appetites. It won’t be easy to manipulate this young American with women or wine. This pope wants to exercise his power, clean up the priesthood, and bring God back into the center of the Church. Lenny isn’t warm and fuzzy. He’s not politically correct. He’s not here to entertain or comfort. This pope is on a mission.

Remind you of someone?

While not a perfect metaphor for the Trump Presidency, the Young Pope exposes the tragedy and cruelty of the reformer. Both Washington and the Vatican are riddled with special interest groups, corruption, and actors acting in their own interests and not in the interests of the people they are accountable to. Both President Trump and Pope Pius XIII didn’t expect to be elected and it shows. They aren’t ready, and at the same time they aren’t willing to wait until they are ready to make important decisions. It’s as if both men fear they are imposters and need to prove their worth before they are found out.

The actions of President Trump and Pope Pius XIII have created grievous emotional suffering for the most vulnerable members of their “flocks.” President Trump seems to have missed the whole idea that America is a nation of immigrants. Lenny seems to have missed the whole idea that the Church is a refuge from the secular world and the absence of God. Both men want only the purest of the “faithful” to reside in their house. Both want outsiders to work much harder to get in.

Of course, the Young Pope is a television show. Nobody has actually suffered under Pope Pius XIII. While the Trump Presidency might seem like a reality show I don’t have to remind you that it’s all too real.

A Trump supporter, perhaps a member of the Alt-Right or just an average Conservative who wants America to be great again, might be justified in asking, “Well then, what is a reformer supposed to do? Sugar coat his speech? Drag out the process? Compromise?”

In a word, yes.

There is already enough instability and suffering in the world that both President Trump and the Young Pope would be more effective by wrapping that iron fist in a velvet glove. Change is tough. Abrupt change is a recipe for anarchy. Both President Trump and the Young Pope are headed for a spiritual and earthly crisis. For Jude Law this might mean an Emmy Award. Unfortunately for President Trump, in the real world, this kind of drama is rewarded in a way that isn’t good for any of us.

I know it’s boring and undramatic, but slow, steady, and compassionate reform is the only kind of reform that ever works. By the way, I’m not in personal agreement with any of President Trump’s executive orders or his world view. I just don’t want the world to come crashing down around our ears while President Trump figures out he’s not on a television set.

Our trade agreements, health care, and tax laws probably all need a bit of tweaking. Or a lot of tweaking. Unfortunately, it’s impossible to predict the impact of any particular change. I’m sure the Young Pope would agree that the road to hell is paved with good intentions.

I know the people who voted for Trump. In the small town in NJ where I grew up they were religious, down to earth, and only wanted a good job and a good life. They didn’t ask for change; they are tired of waiting while they watch their world fall apart. Factories are closing, jobs are being distributed around the world, and a new wave of immigrants is setting up shop in strip malls.

But America is always changing. Two hundred years ago, slavery was in full force and the industrial revolution wiped out the craftsman. One hundred years ago, World War I was starting up and electricity and the radio were uniting the world into one global community. Nothing of what we today call “globalization” and “immigration” is new. It’s just all part of a trend in how humanity is organizing itself around technological progress.

I’m sure by the end of season one Pope Pius XIII will realize that he’s only made things worse. That instead of restoring the Catholic Church to glory, his hasty and not well-thought-out executive orders will have pushed it to the edge of ruination. This is not a spoiler. I’m still in the middle of the series. But I can see where the plot is going.

I hope President Trump has HBO.

 

North Star

Successful companies usually have a secret sauce. It could be an algorithm or an insight. But whatever that secret sauce is, it is used to create or disrupt a market.

Apple created the PC market when Steve and Steve figured out that affordable pre-built personal computers would be really useful for consumers. IBM disrupted the PC market that Apple built with the insight that a standard, expandable, business-oriented PC would be especially valuable to businesses. After a while Microsoft disrupted the disrupter with the key insight that PC resources (CPU speed, RAM size, and disk space) were essentially infinite according to Moore’s Law.

Yet secret sauce alone is not enough to create or disrupt a market for very long. You might have a brilliant algorithm or insight, but if you can’t focus on it and deliver it to your audience then you’ve got nothing.

Secret sauces are common and cheap. The ability to focus and deliver is rare and expensive!

Let’s take the case of Google. Larry and Sergey started Google with the idea of Page Rank. They turned that idea into a set of algorithms and code and turned it loose on the web. Suddenly Larry and Sergey had the best search engine on the market.

But Page Rank on its own didn’t create Google. This might be hard to believe today, but when Google started it was an underdog. Google was the epitome of a scrappy startup that hardly anyone paid any attention to.

Luckily Larry and Sergey had something else: A north star.

I don’t know if they called it a “north star”. That’s what we call it now. They probably didn’t call it anything. Looking back, I think Larry and Sergey, like Steve and Steve, and all successful market creators/disrupters had an intuitive sense of focus and delivery that was superhuman. They got everyone around them, investors, employees, and partners, to focus on search and to think hard about the best way to deliver search to the consumer. They followed their north star to the detriment of everything else, including sleep, civility, and revenue.

Obviously it paid off. Once the nascent search market was disrupted Google attained all the things they had sacrificed. They made money. They decided to be really nice. They got a good night’s sleep.

I see this pattern repeating throughout the boom and bust cycle of business. When a company is following its north star it eventually becomes successful. When a company is distracted or tries to follow too many stars it eventually fails.

When I worked at Apple in the 90s our north star was summed up in the question, “will it sell more Macintoshes?” If you could answer “yes” then you had tacit approval to do it. Don’t ask. Just do it. HyperCard, QuickTime, TrueType, Unicode, these are all examples of technologies that “sold more Macintoshes.”

At the time I was working on ClarisWorks for Kids. It was a bit like Microsoft Office for the K-12 market. Our theory was that productivity software tools for kids would sell more Macintoshes (to parents and schools) and so I was asked to go and do it. I didn’t fill out a product plan or forecast revenue. I just convinced a group of engineers that ClarisWorks for Kids was cool and off we went. I hired as many people as I needed. I figured out features and even helped design the box art. Since I had a north star, I didn’t have to be managed. My boss was more like my personal coach. I went to him for advice and not orders.

Since I had never shipped a product before I made a few mistakes. I didn’t get fired. As long as I was following Apple’s north star everyone had trust and confidence in what I was doing. And I wasn’t special. I was one of hundreds of Apple engineering managers leading projects in partnership with hundreds of engineers all following a single north star.

ClarisWorks for Kids turned out to be a big hit. We won some awards. More importantly, we sold a lot of Macintoshes. ClarisWorks for Kids was part of an educational bundle that filled up computer classrooms across the world with PowerPC-based Power Macs.

But then we turned away from our north star.

In the late 1990s Apple’s market share continued to slip. In spite of all our focus and smart insights we were not selling enough Macintoshes. RISC chips, CD-ROMs, and built-in digital signal processors were not cutting it with the consumer. Most people bought IBM-compatible PCs that ran Windows.

Instead of doubling down on our north star or discovering a new north star, we at Apple decided to pursue many different strategies. Sometimes we would follow multiple strategies at the same time, but usually it was a new strategy every month. Some of these new “stars” included “Mac is the best PC” and “Let’s find more ways to make money from our existing users” and “Apple is really a software company!” Ouch. None of these stars became north stars. They were more like fly-by-night comets that burn out by dawn.

Without a strong north star, I could no longer manage myself. I had to be told what to do. One day I was told to “port ClarisWorks for Kids to Windows.” I asked how this project would “sell more Macintoshes?” Apparently Apple wasn’t concerned about that old idea any more, and frankly I had not been asked for an opinion.

So we gritted our teeth and cracked open the Windows 3.1 disks and started porting. It was kind of fun and a huge technical challenge, as the Mac programming model was very different from Windows. So we dug into it. As an engineering manager there wasn’t as much for me to do, so I got into project plans and status reports. I don’t think anyone read them. At some point we were done. ClarisWorks for Kids could now run under Windows on IBM PCs.

This is the point where we were all laid off. Nothing personal. Business was bad, new management was in town (Steve was back), and Windows software was not needed. It didn’t “sell more Macintoshes” because it didn’t run on a Macintosh.

After we were gone Apple got back in the business of following its original and true north star. Mac computers became exciting again with bold design and a new UNIX-based operating system. (OK, an old UNIX-based OS, but it brought the goodness of UNIX to a mass market.)

ClarisWorks and ClarisWorks for Kids were gone but Apple replaced them with a suite of productivity tools. Pages, Keynote, and Numbers are the great-grandchildren of ClarisWorks. I don’t know if they “sell more Macintoshes” but they have some cool features. Besides, Apple’s north star now is probably “Does it sell more iPhones?” or something like that.

These days I work really hard to provide a north star to my teams and to advocate for a north star in my organization. A good north star is easy to understand and easy to remember. A great north star enables employees to manage themselves and renders budgets and project plans obsolete. An awesome north star fuels growth and turns businesses around.

 

Eternity versus Infinity

I just completed reading, at long last, Isaac Asimov’s The End of Eternity. Like many of his novels, EoE is a morality play, an explanation, a whodunit, and a bit of a prank. The hero, Andrew Harlan, is a repressed buffoon at the mercy of various sinister forces. Eventually Harlan finds his way to a truth he doesn’t want to accept. In EoE Asimov plays with time travel in terms of probabilities. This mathematical exploration of time travel resolves many of the cliché paradoxes that scifi usually twists itself into. Go back in time and prevent your mother from meeting your father and what you have done is not suicide. You have simply reduced the probability of your future existence.

In EoE Asimov considers two competing desires in human culture: The urge to keep things the same forever and the urge to expand and explore. Asimov distills these urges into the Eternals, who fight what they think of as dangerous change by altering time, and the Infinites, who sabotage the Eternals because they believe “Any system… which allows men to choose their own future, will end by choosing safety and mediocrity…”

In one masterful stroke Asimov explains why we haven’t invented time travel. If we did, we’d kill baby Hitler! But then we’d work on elimination of all risks! Eventually we’d trap ourselves on planet Earth and die out slowly and lonely when our single world gets hit by a comet or our Sun goes nova. In EoE, Asimov has a force of undercover Infinites working tirelessly to keep the probability of time travel to a near zero value. This way humanity continues to take risks, eventually discovers space flight, and avoids extinction by populating the galaxy.

You’re probably not going to read EoE. It’s a bit dry for the 21st century. There are no superheroes, dragons, or explicit sex. While there is a strong female character, she spends most of her time out of sight and playing dumb. EoE is a product of the 1950s. Yet for a book where a computer is called a “computaplex” and the people who use them are confusingly called “computers”, EoE’s underlying message and themes apply very closely to our current age.

In our time, we have the science and technology to move forward by leaps and bounds to an unimaginable infinite–and we’re rapidly doing so except when we elect leaders who promise to return us to the past and we follow creeds that preach intolerance to science. I’ve read blog posts and op-eds that claim we can’t roll back the future. But we seem to be working mightily to pause progress. Just like the Eternals in EoE many of us are concerned about protecting the present from the future. Teaching Creationism alongside Evolution, legislating Uber and AirBnB out of existence, and keeping Americans in low value manufacturing jobs are just a few examples of acting like Asimov’s Eternals and avoiding the risks of technological progress at all costs.

I get it! I know that technological advancement has many sharp edges and unexpected consequences. Improve agriculture with artificial ingredients and create an obesity epidemic. Improve communication with social media and create a fake news epidemic. People are suffering and will continue to suffer as software eats the world and robots sweep up the crumbs.

But what Asimov teaches us, in a book written more than 60 years ago, is that if we succeed in staying homogenous-cultured, English-speaking, tradition-bound, God-fearing, binary-gendered, unvaccinated, and non-GMO, we’re just getting ready to die out. When the next dinosaur-killer comet strikes, we will be stuck in our Garden of Eden as it goes up in flames. As Asimov admits, it might take thousands of years for humanity to die out in our self-imposed dark ages, but an expiration date means oblivion regardless of how far out it is.

Asimov shows us in EoE, and in the rest of his works as well, that there is a huge payoff for the pain of innovation and progress. We get to discover. We get to explore. We get to survive.

Let’s face it. We don’t need genetic code editors and virtual reality. We don’t need algorithms and the Internet of Things. Many of us will never be comfortable with these tools and changes. Many of us long for the days when men were men, women stayed out of the way, and jobs lasted for a lifetime. This is not a new phenomenon: The urge to return to an earlier golden age has been around since Socrates complained that writing words down would destroy the art of conversation.

At the moment, it feels like the ideals of the Eternals are trumping the ideals of the Infinites. While a slim minority of entrepreneurs tries to put the infinity of space travel and the technological singularity within our reach, a majority of populist politicians are using every trick in the mass communications book to prevent the future from happening. We have our own versions of Asimov’s Eternals and Infinites today. You know their names.

Like Asimov, I worry about the far future. We’re just a couple of over-reactions to a couple of technological advances away from scheduling the next dark ages. That’s not a good idea. The last dark ages nearly wiped Europe off the face of the earth when the Black Plague hit. Humanity might not survive the next world crisis if our collective hands are too fearful of high-tech to use it.

At the end of EoE Harlan figures out that, spoiler alert, taking big risks is a good idea. Harlan chooses the Infinites over the Eternals. I’d like us to consider following in Harlan’s footsteps. We can’t eliminate all technological risks! Heck, we can’t even eliminate most risks in general! But we can embrace technological progress and raise the probability of our survival as a species.

Telling Time as an Engineer

Time is the most precious resource. It’s in limited supply, once spent we can’t get it back, and we can’t trade it directly. This might sound a little radical, but most global, national, business, and personal problems seem to me to boil down to a problem of time and whose time is more important than yours.

Before we can decide how we should spend the time given us, we have to put some thought into how we analyze the tasks that take time. In software development a considerable amount of thinking has been applied to just this analysis. We usually call it “process management” and “time management”. Many methodologies have been created to solve the problem of time, and yet when the rubber hits the road the management of time, which includes task prioritization and effort estimation, is full of errors and random results.

A great example is the Agile development process, which has become the standard even as it has been declared dead by many of its original creators. Why is this?

Here is a simple example…

A high priority story is pulled from a general backlog and estimated, along with other stories, by an experienced engineering team. A product owner then weighs the cost of the stories based on the effort estimations and value to the business and feeds them into a sprint backlog. The engineering team then works on each story, in order of priority, and completes the required stories by the end of the sprint. The work completed is demoed to the stakeholders and everyone is happy as everyone’s time has been well spent.

Well, except this happy plan almost never happens.

Something like 80% of all features and products are delivered late or not at all. And often when a feature or product is delivered, it’s buggy enough that we regret delivering it at all. I’m sure Samsung engineers are less concerned about deadlines these days and more concerned about taking the time to do their tasks with more quality. Blizzard has made a billion-dollar business of never giving dates for games and missing them when they do. Facebook and Spotify just spring new features on their users without any warning and kill bad ones before they spread beyond a small segment.

In my opinion, successful tech companies don’t bother with time management and leave schedules and task estimates to unsuccessful tech companies. I’m not saying successful tech companies don’t do agile or create project plans. I’m saying these are more like historical accounts and data gathered for analysis than pseudo-predictive planning.

Why is task estimation so non-predictive?

The problem is that it’s impossible to know how long a task will take unless you have done exactly that task before. When I worked at Apple Computer (before it was just Apple) we said that in order to understand how long a project would take you had to build it and then write the schedule.

This is why an experienced engineering team is so important in effort estimation. If you get a group of engineers who are a bit long in the tooth, they can work together to pool estimates on work they have performed previously.

But much of the work of an experienced engineering team is work they have never done before. Experienced engineers tend to see everything through the lens of previous experience. The result is that effort estimates are inaccurate because they have mistaken a novel task for a nostalgic task. I can’t count the number of times I have said, “I thought the problem was X and it would take Y story points, but the problem is really Z and I’m still doing the research so your guess is as good as mine.”

The fact that for novel work your guess is as good as mine is why startups of inexperienced engineers succeed with problems that mature companies fail at. The boss says, “This problem will take too long to get to market. Let’s just not do it. It’s a waste of time.” The boss also says, “Hey brilliant engineers, you didn’t deliver this product on time! You suck! You’re fired.” Both judgements are typical of mature companies where value has to be delivered every quarter and experimental failures damage previous reputations.

In a typical tech startup, or any kind of new business, if you did the estimates you would have never started down the path. But startups don’t care! They are labors of vision and love usually staffed by people without experience who don’t know better. A good startup certainly doesn’t worry about effort estimates or punish engineers for not being able to tell time.

My advice to any engineering team that needs to worry about time is as follows:

One

You need a mix of experienced and inexperienced engineers on the team. This doesn’t mean old and young as much as people who have done it before and people who have not. Mix your insiders with your outsiders. For example, if you’re building a web app bring in a few mobile devs to the sprint planning. And some interns. And listen to the outsiders. Engage in a real discussion.

Two

If someone in charge wants to know how long a novel task will take from just the title of the task, without any true discussion, walk away. You’re going to give them a wrong answer. By the way good estimates are rarely rewarded–they are expected! But bad estimates are almost always punished. An honest “I don’t know” is always better than “2-3 weeks” or “2-3 story points”.

Three

Remember there is no value in hitting the deadline without quality, performance, or value to the user. In fact, I’m always a little suspicious of teams that never miss a deadline. Apps that crash, or scope cut to the point of no visible progress, are the hallmarks of teams that hit their deadlines. I’m not saying don’t work hard or try to hit your deadline, just be tough about the result at the end of the schedule: give it more time if needed!

Four

The problem with my advice is that everyone wants to know the schedule. So many other functions depend on the schedule. Releasing a product on time is critical to the business. So my final piece of advice, and you’re not going to like it, is let the business set the deadline. Instead of wasting everyone’s time upfront with a broken planning process to arrive at a deadline, get the deadline first and work backward with effort estimates. While time is limited, the amount of time we spend on a task is flexible. We work differently if we have 3 months, 6 months, or 12 months to accomplish a task. Ask any college kid how much time they put into their studies at the beginning of the semester when time seems unlimited vs the end of the semester when time is in short supply.

Time is always in short supply.

Trolls Are USA

It’s clear that Americans are more divided than ever. Our self-segregating tendencies have been reinforced by the adoption of Internet technologies and algorithms that personalize our newsfeeds to the point that we walk side-by-side down the same streets in different mental worlds.

Before the web, before iPhone, Netflix, and Facebook, the physical limits of radio, television, and print technology meant that we had to share. We had to share the airwaves and primetime and the headlines because they were limited resources.

In the pre-Internet world print was the cheapest communication to scale and thus the most variable. Anyone with a few hundred bucks could print a newsletter but these self-published efforts were clearly inferior to the major newspapers. You could tell yellow journalism from Pulitzer winners just by the look of the typography and feel of the paper in your hands. This was true with books and magazines as well. Quality of information was for the most part synonymous with quality of production.

To put on a radio or TV show you had to be licensed and you needed equipment and technical skills from unionized labor. Broadcast was more resource intensive and thus more controlled than print and thus more trusted. In 1938 The War of the Worlds radio drama fooled otherwise skeptical Americans into believing they were under attack by Martian invaders. The audience was fooled because the show was presented not as a radio play but as a series of news bulletins breaking into otherwise regularly scheduled programming.

The broadcast technologies of the pre-social-media world coerced us into consensus. We had to share them because they were mass media, one-to-many communications where the line between audience and broadcaster was clear and seldom crossed.

Then came the public Internet and the World Wide Web of decentralized distribution. Then came super computers in our pockets with fully equipped media studios in our hands. Then came user generated content, blogging, and tweeting such that there were as many authors as there were audience members. Here the troll was born.

Before the Internet the closest we got to trolling was the prank phone call. I used to get so many prank phone calls as a high schooler in the 1970s that I simply answered the phone with a prank: “FBI HQ, Agent Smith speaking, how may I direct your call?” Makes me crack up to this day!

If you want to blame some modern phenomenon for the results of the 2016 presidential election, and not the people who didn’t vote, or the flawed candidates, or the FBI shenanigans, then blame the trolls. You might think of the typical troll as a pimply-faced kid in his bedroom with the door locked and the window shades taped shut but those guys are angels compared to the real trolls: the general public. You and me.

Every time you share a link to a news article you didn’t read (which is something like 75% of the time), every time you like a post without critically thinking about it (which is almost always), and every time you rant in anger or in anxiety in your social media of choice you are the troll.

I can see that a few of my favorite journalists and Facebook friends want to blame our divided culture, the spread of misinformation, and the outcome of the election on Facebook. But that’s like blaming the laws of thermodynamics for a flood or the laws of motion for a car crash. Facebook, and social media in general, was the avenue of communication, not the cause. In technology terms, human society is a network of nodes (people) and Facebook, Google, and Twitter are applications that provide easy distribution of information from node to node. The agents that cause info to flow between the social network nodes are human beings, not algorithms.

It’s hard not to be an inadvertent troll. I don’t have the time to read and research every article that a friend has shared with me. I don’t have the expertise to fact-check and debunk claims outside of my area of expertise. Even when I do share an article about a topic I deeply understand, it’s usually to get a second opinion.

From a tech perspective, there are a few things Facebook, Google, and Twitter can do to keep us from trolling each other. Actually, Google is already doing most of these things with their Page Rank algorithms and quality scores for search results. Google even hires human beings to test and verify their search results. Thus, it’s really hard for us to troll each other with phony web pages claiming to be about cats when dogs are the topic. Kudos to Google!

The following advice is for Facebook and Twitter from an admiring fan…

First, hire human editors. You’re a private company not a public utility. You can’t be neutral, you are not neutral, so stop pretending to be neutral. I don’t care which side you pick, just pick a side, hire some college educated, highly opinionated journalists, and edit our news feeds.

Second, give us a “dislike” button and along with it “true” and “false” buttons. “Like” or “retweet” are not the only legitimate responses that human beings have to news. I like the angry face and the wow face but those actions are feelings and thus difficult to interpret clearly in argumentation and discourse. Dislike, true, and false would create strong signals that could help drive me and my friends to true consensus through real conversations.

Third, give us a mix of news that you predict we would like and not like. Give us both sides or all sides. And use forensic algorithms to weed out obvious trash like fake news sites, hate groups with nice names, and teenagers pretending to be celebrities.

A/B test these three ideas, and better ones, and see what happens. My bet is social media will be a healthier place but a smaller place, with less traffic driven by the need to abuse each other.

We’ll still try to troll the hell out of each other but it will be more time consuming. Trolling is part of human nature and so is being lazy. So just make it a little harder to troll.

Before social media our personal trolling was limited to the dinner table or the locker room. Now our trolling knows no bounds because physical limits don’t apply on the Internet. We need limits, like spending limits on credit cards, before we troll ourselves to death.

Notes on NSUserDefaults

You can set and get NSUserDefaults values from any view controller and the app delegate, so they are a great way to pass data around the various parts of your iOS app.

Note: NSUserDefaults don’t cross the iOS/watchOS boundary. iOS and watchOS apps each have their own set of NSUserDefaults.

In the example below you have a class Bool property that you want to track between user sessions.
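A minimal sketch of the idea might look like this (the view controller class is just a placeholder; savedShowAll is the key string):

    import UIKit

    class SettingsViewController: UIViewController {

        // The key for the stored value
        let savedShowAll = "savedShowAll"

        // The data model for a switch object's value, saved whenever it changes
        var showAll: Bool = true {
            didSet {
                NSUserDefaults.standardUserDefaults().setObject(showAll, forKey: savedShowAll)
            }
        }

        override func viewDidLoad() {
            super.viewDidLoad()
            // The stored value might not exist yet, so use the if let idiom
            if let storedShowAll = NSUserDefaults.standardUserDefaults().objectForKey(savedShowAll) as? Bool {
                showAll = storedShowAll
            }
        }
    }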

In the code above…
– The var showAll is the data model for a switch object value
– The string savedShowAll is the key for the stored value
– Use NSUserDefaults.standardUserDefaults().objectForKey() to access a stored value
– Use the if let idiom as the stored value might not exist
– Use NSUserDefaults.standardUserDefaults().setObject() to save the value
– Apparently setObject() never fails! 😀

Faceless Phone

About twelve years ago I attended a management leadership training offsite and received a heavy glass souvenir. When I got home after the event I put that thingamabob, which officially is called a “tombstone”, up on a shelf above my desk. Little did I know that after more than a decade of inert inactivity that souvenir would launch me into the far future of the Internet of Things with an unexpected thud.

Last night before bed I set my iPhone 6 Plus down on my desk and plugged it in for charging. Then I reached up to the shelf above to get something for my son and BANG! The tombstone leapt off the shelf and landed on my desk. It promptly broke in half and smashed the screen of my iPhone. In retrospect I see now that storing heavy objects above one’s desk is baiting fate, and every so often fate takes the bait.

I’ve seen many people running around the streets of Manhattan with cracked screens. My screen was not just cracked. It was, as the kids say, a crime scene. I knew that procrastination was not an option. This phone’s face was in ruins and I needed to get it fixed immediately.

No problem! There are several wonderful Apple Stores near me and I might even have the phone covered under Apple Care. Wait! There was a problem! I had several appointments in the morning and I wasn’t getting to any Apple Stores until late afternoon.

Why was this a big deal? Have you tried to navigate the modern world without your smart phone lately? No music, no maps, no text messages! Off the grid doesn’t begin to cover it! My faceless phone was about to subject me to hours of isolation, boredom, and disorientation!

Yes, I know, a definitive first world problem. Heck! I lived a good 20 years before smart phones became a thing. I could handle a few hours without podcasts, Facebook posts, and Pokemon Go.

In the morning I girded my loins, which is what one does when one’s iPhone is smashed. I strapped on my Apple Watch and sat down at my desk for a few hours of work-related phone calls, emails, and chat messages.

Much to my surprise, even though I could not directly access my phone, almost all of its features and services were available. While the phone sat on my desk with a busted screen its inner workings were working just fine. I could make calls and send text messages with my watch, with my iMac, and with voice commands. I didn’t have to touch my phone to use it! I could even play music via the watch and listen via Bluetooth headphones. I was not cut off from the world!

(Why do these smart phones have screens anyway?)

Around lunch time I had to drive to an appointment and I took the faceless phone with me. I don’t have Apple CarPlay, but my iPhone synced up fine with my Toyota’s entertainment system. Since I don’t look at my phone while driving, the cracked screen was not an issue. It just never dawned on me before today that I don’t have to touch the phone to use it.

I imagine that our next paradigm shift will be like faceless phones embedded everywhere. You’ll have CPUs and cloud access in your wrist watch, easy chair, eye glasses, and shoes. You’ll have CPUs and cloud access in your home, car, office, diner, and shopping mall. You’ll get text messages, snap pictures, reserve dinner tables, and check your calendar without looking at a screen.

Now, we’re not quite there yet. I couldn’t use all the apps on my phone without touching them. In fact I could only use a limited set of the built-in apps and operating system features that Apple provides. I had to do without listening to my audiobook on Audible and I couldn’t catch any Pokemon. Siri and Apple Watch can’t handle those third party app tasks yet.

But we’re close. This means the recent slowdown in smart phone sales isn’t the herald of hard tech times. It’s just the calm before the gathering storm of the next computer revolution. This time the computer in your pocket will move to the clouds. Apple will be a services company! (Google, Facebook, and Amazon too!) Tech giants will become jewelry, clothing, automobile, and housing companies.

Why will companies like Apple have to stop making phones and start making mundane consumer goods like cufflinks and television sets to shift us into the Internet of Things?

Because smooth, flawless integration will be the new UX. Today user experience is all about a well designed screen. In the IoT world, which I briefly and unexpectedly visited today, there won’t be any user interface to see. Instead the UX will be embedded in the objects we touch, use, and walk through.

There will still be some screens. Just as today we still have desktop computers for those jobs that voice control, eye rotations, and gestures can’t easily do. But the majority of consumers will use apps without icons, listen to playlists without apps, and watch videos without websites.

In the end I did get my iPhone fixed. But I’m going to keep visiting the IoT future now that I know how to find it.

On the Naming of Functions

A thoughtful coder once said that “it’s more important to have well organized code than any code at all.” Actually several leading coders have said this. So I’ll append my name to the end of that long linked list.

I’m trying to develop my own system for naming functions such that it’s relatively obvious what those functions do in a general sense. Apple, Google, Microsoft, and more all have conventions and rules for naming functions. Apple’s conventions are the ones I know the best. For some reason Apple finds the word “get” unpleasing while “set” is unavoidable. So you’ll never see getTitle() as an Apple function name but you will see setTitle(). This feels a little odd to me as title() could be used to set or get a title but getTitle clearly does one job only. I know that title() without an argument can’t set anything but I’m ok with the “get” all the same.

So far I’m testing out the following function naming conventions:

  • calcNoun(): dynamically calculates a noun based on the current state of internal properties
  • cleanNoun(): returns a junk-free normalized version of a noun
  • clearNoun(): removes any data from a noun and returns it to its original state
  • createNoun(): statically synthesizes a noun from nothing
  • updateNoun(): updates the data that a noun contains based on the current state of internal properties
  • getNoun(): dynamically gets a noun from an external source like a web server

As you can see I like verbs in front of my nouns. In my little world functions are actions while properties are nouns.

calcNoun(), createNoun(), and getNoun() are all means of generating an object, each with a semantic signal about the process of generation.

cleanNoun() returns a scrubbed version of an object as a value. This is really best for Strings and Numbers which tend to accumulate whitespace and other gunk from the Internet and user input.

clearNoun() and updateNoun() are both means of populating the data that an object contains, with a signal about the end state of the process. (Maybe I should have one update function and pass in “clear” data, but many times clearing is substantially different from updating.)
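To make this concrete, here’s a quick sketch of the conventions on a made-up Article type (the type, its properties, and the nouns are purely illustrative):

    import Foundation

    struct Article {
        var title = ""
        var body = ""

        // calcNoun(): dynamically calculates a noun from the current state of internal properties
        func calcWordCount() -> Int {
            return body.components(separatedBy: .whitespacesAndNewlines).filter { !$0.isEmpty }.count
        }

        // cleanNoun(): returns a junk-free, normalized version of a noun
        func cleanTitle() -> String {
            return title.trimmingCharacters(in: .whitespacesAndNewlines)
        }

        // clearNoun(): removes any data from a noun and returns it to its original state
        mutating func clearBody() {
            body = ""
        }

        // createNoun(): statically synthesizes a noun from nothing
        static func createDraft() -> Article {
            return Article(title: "Untitled", body: "")
        }
    }

getNoun() and updateNoun() would follow the same pattern once there’s a web server or fresh internal state to pull from.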

I hope this helps my code stay organized without wasting my time trying to map the purpose of a function to my verb-noun conventions!