Categories: Nerd Fun, Programming

C Plus Minus

While consuming Handmade Hero and coding furiously to keep up with Casey Muratori, I discovered the joy of programming in a language that I deeply understand. This is not one of those new trendy programming languages that tries to be type-safe without explicit types or functional without being confusing. And yet all the new hot/cool programming languages are based on this ur-language. Swift, TypeScript, Go, C++14, and Java 8 are all “c-like” languages, and the original “c-like” language is a lingo that we used to call C+- (C Plus Minus).

I probably like C because it was the first non-toy programming language that I used to program a real personal computer. In the late 1980s all the home computers came with BASIC (which is best SHOUTED in CAPS). But once I got a true personal computer, a Macintosh 512Ke, that could run real applications I had to buy a real programming language to write those real applications. For a couple of months that real language was Pascal… but C rapidly took over. By the time I got to Apple in the early 1990s C++ was about to push C out of the way as the hot new programmer’s tool.

We have this same problem today. There is always another more productive, safer, more readable programming language around the corner. If you code on the backend for a living you’re probably thinking about Go or Rust. If you code on the front end you’re ditching CoffeeScript for TypeScript or just sticking with JavaScript until the next version, ECMAScript 6, shows up in your minimum target browser.

But I’ve been traveling back in time and happily coding away with access to pointers and pointer arithmetic, pound defines, and user-defined types. It’s not plain vanilla C because, like Casey, I’m compiling my code with a modern C++ compiler. I’m just not using 90% of C++’s features. Back in the 1980s/90s we called this language C+-. Back then only some of the C++ standard had been implemented in our compilers. We had classes but not multiple inheritance. (Later we learned that multiple inheritance was bad, or at least poor taste, so not having access to it was ok.) We only had public and private members. (Protected members aren’t actually useful unless you’re working on a big team or writing a framework. We were writing small apps in small teams.) We had to allocate memory on the heap and dispose of it. So we allocated most of what we needed up front and sub-allocated it. We didn’t have garbage collection, we didn’t even know about garbage collection, so we couldn’t feel bad. We felt powerful.

Now that I’ve been writing in C+- for a few weeks I feel like Superman. Or maybe Batman. Your pick. I have just a few tools in my tool belt but I know how to use them. In the modern world Swift 3.0 is thinking of getting rid of the ++ operator and the for(;;){} loop. I use those language features every day, usually together: for(i = 0; i < count; i++) {}. I am told these things are ugly. They seem like familiar old friends to me!

One thing I really like is that I can access a value and increment a pointer with one pretty little expression: *pointer++. I like thinking in bytes and bits and memory addresses. And I like how fast my little programs run and how small their file sizes are.

I know I should not like all these things. Raw access to memory is dangerous. &-ing and |-ing bits is probably dangerous too. My state is not safely closured and side effects abound. But modern C++ compilers and tools like GCC and Clang do a pretty good job of catching memory access errors these days. It was much more dangerous back in 1986 when I first started.

Maybe I’m just nostalgic. But while you are learning Swift or TypeScript to write web and mobile apps, the operating system your computer runs (Mac OS X, Windows, Linux) was written in C+-. The web browser (Safari, Firefox, or Chrome) that renders your HTML, CSS, and JS was written in C+-. That awesome AAA game and Node.js were written in C+-. (Some parts C, some parts C++, and some parts Assembly as needed.)

C+- is the Fight Club of computer languages: Nobody talks about it, it doesn’t have official status, and groups of self organizing coders beat each other up with it every day.

Categories: Nerd Fun, Programming, Self Improvement

Binge Watching Handmade Hero

For the last several weeks I’ve been obsessed with one TV show. It’s changed my viewing habits, my buying habits, and my computing habits. Technically it’s not even a “TV show” (if your definition of that term doesn’t include content created by non-professionals that is only available for free over the Internet).

But for me, a more or less typical Gen-Xer, Handmade Hero by game tool developer Casey Muratori has me totally enthralled as only must-see TV can enthrall. I’m hooked and I simply must watch all 256+ episodes of Handmade Hero before I die (in about 1,406 Saturdays according to the How Many Saturdays app).

So first off let me explain a few things. Unless you are an aspiring retro game programmer or an aging C/C++ programmer, Handmade Hero will seem tedious at best and irrelevant at worst. There are much better and more modern ways to make a video game (like SpriteKit on iOS or Unity on any OS) but Casey promises to demonstrate live on Twitch.TV how to write a complete video game from scratch, without modern frameworks, that will run on almost anything with a CPU. He’s starting with Windows but promises Mac OS X, Linux, and Raspberry Pi.

This is a bold promise! When I first heard of Handmade Hero, almost two years ago, I ignored it. I didn’t know who Casey Muratori was, and the Internet is littered with hundreds of these solo projects that tend to fizzle out like ignobly failed Kickstarter projects.

But a comment on Hacker News caught my eye about a month ago. Casey had delivered hundreds of hours of live coding with explanations of arcane C, Windows, and video programming techniques! It’s all archived on YouTube and he’s still streaming almost every night! Awesomesauce!

So I had to check it out. I started with Casey’s first video, Intro to C on Windows, and ate it up. I had to pound through the rest of that week’s archive. Because I have a family and a very demanding job and kids and cats, I had to purchase a subscription to YouTube Red so that I could watch Casey’s videos on or offline. Google is getting 10 bucks a month out of me because of Casey!

My keyboarding fingers ached to follow along coding as Casey coded. I used to be a C/C++ programmer. I used to do pointer arithmetic and #DEFINEs and even Win32 development! Could I too write a video game from scratch with no frameworks? I had to buy a Windows laptop and find out! Thus Dell got me to buy a refurbished XPS 13 because of Casey!

Even Microsoft benefited. I subscribed to Office 365 for OneDrive so I could easily backup my files and use the Office apps since I’m keeping my MacBook Pro at the office these days. I have discovered that a Windows PC does almost everything a MacBook does because of Casey!

I usually have less than an hour a day to watch TV so I’ve had to optimize my entertainment and computing environment around Handmade Hero because at this rate I will never catch up to the live stream! But I’m having a blast and learning deep insights from a journeyman coder.

What could an old school game coder teach an old battle-scarred industry vet like me? More than I could have imagined.

First of all, Casey is an opinionated software developer with a narrow focus and an idiosyncratic coding style. He is not wasting his time following the endless trends of modern coding. He is not worried about which new JavaScript dialect he is going to master this month or which new isomorphic web framework he is going to wrestle with. He codes in C with some C++ extensions, he uses Emacs as his editor, he builds with batch files, and he debugs with Visual Studio. While these tools have changed over the years Casey has not. He is nothing if not focused.

Thus Casey is a master of extemporaneous coding while explaining–the kind that every software engineer fears during Google and Facebook interviews. This means Casey has his coding skills down cold. He is unflappable.

Casey doesn’t know everything and his technique for searching MSDN while writing code shows how fancy IDEs with auto-completion are actually bad for us developers. He uses the Internet (and Google search) not as a crutch to copy and paste code but as a tool to dig deep into how APIs and compilers actually work. There seems to be nothing Casey can’t code himself.

Casey makes mistakes and corrects himself. He writes // Notes and // TODOs in his code to follow up on, as if he is working with a team. Casey interacts with his audience at the end of every stream and is not shy about either dismissing their questions or embracing them. Casey is becoming a better, more knowledgeable programmer before our eyes and we’re helping him while he is helping us.

Casey is not cool or suave on camera. He swigs almond milk and walks away off screen to get stuff during the stream. But nothing about Handmade Hero would be substantially improved if Casey hired a professional video production team. In point of fact, any move away from his amateur production values would be met with suspicion from his audience. Any inorganic product placement would fail. Dell, Microsoft, and Google should support him but stay the heck away lest they burst the bubble of pure peer-to-peer show-and-tell that surrounds Casey.

I have 249 videos to go (and Casey has not stopped making videos)! I still don’t know if he delivers on his promise and creates an actual video game from scratch. (Please! No spoilers!) But I already know far more than I did about real-world game development, where the gritty reality of incompatible file systems and operating platform nuances makes Object Oriented Programming and interpreted bytecode luxuries a working developer can’t afford.

 

Categories: Nerd Fun, Programming

In Defense of Bubble Sort

Bubble sort is an algorithm with a very bad reputation. Robert Sedgewick, in Algorithms in C, notes that bubble sort is “an elementary sorting method” and “it does much more work” than it needs to. Donald Knuth is much harsher when he states “the bubble sort seems to have nothing to recommend it, except a catchy name…” in The Art of Computer Programming.

Like an actor valued only for his good looks, bubble sort is an algorithm an enterprising coder should probably not admire.

Why is bubble sort so bad? Why does decent computer society advise young and impressionable software engineers to avoid it? Why am I devoting a whole blog post to the ugliest of sorting algorithms?

TIL: You can learn more from the flawed than you can from the flawless.

I’m not going to tell you why bubble sort is so bad. But before you google it, why not try to figure it out on your own?

The truth is that modern programmers don’t usually implement any standard sorting routines on the job—even the good ones! Generally there exists a collection class or library that is well written and well tested. If you have a mentor she will tell you not to bother with your own interpretations of the standard body of algorithms. That problem has been solved.

However, I think you’re missing out. Knowing how to implement well-known algorithms can help you understand when to use them, what their strengths and weaknesses are, and how to talk about them. Like in a job interview.

Bubble sort is a great place to start. In order to understand why it’s so bad you have to understand big O notation, how algorithms are classified, and how the ordering of input data impacts the performance of an algorithm.

It’s my opinion that bubble sort is only the most terribad sorting method in the platonic world of absolutely random, unordered data. Bubble sort is actually very appropriate for error detection or for data that is already mostly sorted. You know, the kind of data that you are likely to run into in real life.

// Bubble sort

func bubble(inputList:[Int]) -> [Int] {
    var l = inputList
    guard l.count > 1 else { return l }
    var swapped = false

    repeat {
        swapped = false
        // Walk the list and swap any adjacent pair that is out of order.
        for i in 1..<l.count {
            if l[i-1] > l[i] {
                let t = l[i-1]
                l[i-1] = l[i]
                l[i] = t
                swapped = true
            }
        }
        // A full pass with no swaps means the list is sorted.
    } while swapped

    return l
}

var results: [Int]

// Random order: bubble sort at its worst
var randomList = [5,2,7,8,1,9,0,3,4,6]
results = bubble(randomList)

// Already sorted: one pass and done
var orderedList = [0,1,2,3,4,5,6,7,8,9]
results = bubble(orderedList)

// Mostly sorted: only a few passes needed
var semiorderList = [0,1,2,3,4,9,6,8,5,7]
results = bubble(semiorderList)
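If you want to see the effect of input order for yourself, here is a quick sketch (my addition, not part of the original listing) that counts comparisons and swaps. An already-sorted list gets a single pass and zero swaps, while a random list does closer to n² work.

// Same bubble sort, instrumented with counters so you can watch how much
// work each kind of input causes. The counters are just for illustration.
func bubbleCounted(inputList:[Int]) -> (sorted: [Int], comparisons: Int, swaps: Int) {
    var l = inputList
    var comparisons = 0
    var swaps = 0
    guard l.count > 1 else { return (l, comparisons, swaps) }
    var swapped = false
    repeat {
        swapped = false
        for i in 1..<l.count {
            comparisons += 1
            if l[i-1] > l[i] {
                let t = l[i-1]
                l[i-1] = l[i]
                l[i] = t
                swaps += 1
                swapped = true
            }
        }
    } while swapped
    return (l, comparisons, swaps)
}

print(bubbleCounted([5,2,7,8,1,9,0,3,4,6]))  // random: many passes, many swaps
print(bubbleCounted([0,1,2,3,4,5,6,7,8,9]))  // sorted: one pass, zero swaps
print(bubbleCounted([0,1,2,3,4,9,6,8,5,7]))  // mostly sorted: somewhere in between
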
Categories: Programming

Identity used to sign executable no longer valid

The last thing I wanted to do on a Sunday morning was write a blog post about an Xcode executable problem. What I had planned to do was test my most recent Swift 2.0 SpriteKit game on my iPad and iPhone. Last night I got an “Identity used to sign the executable is no longer valid” error when attempting to run my code on a real device. Since it was around midnight I took the message as a notification that bedtime had arrived. Besides, a quick search on Stack Overflow would surely solve the problem, and if I got on SO now I would be up all night nosing around.

This morning I got the same message and found a post on SO that started four years ago with two pages of answers: The identity used to sign the executable is no longer valid. It’s been viewed 66K times and covers many ancient versions of Xcode. The top answer simply said to restart Xcode. Indeed, restarting, rebooting, or re-installing is always a great answer! So I tried the first two (restarting Xcode and rebooting all my devices) but no joy. And it’s a cheap answer. 99% of computer problems are temporarily solved by powering down and up the server or device, but the root cause sits like a malignant elf in the machine, biding its time, ready to strike again.

So I figured it out. My problem, in any case.

Last week I was giving a talk at SUNY Buffalo (shout out to Prof. Hartloff). It’s just far enough away from NYC that I had to stay overnight. I took a MacBook Pro that I don’t ordinarily use for development. When I was working on my new game and testing it on my iPad and iPhone (to get actual frame rates and the feel of touching the screen), Xcode discovered that I didn’t have an iOS development certificate on that MacBook and asked me if I wanted to revoke my current cert or copy it over from another machine. Since I didn’t have it I said revoke. Xcode did whatever it does and created a new iOS dev cert for me, associated with that particular MacBook Pro.

Note to Apple: There has to be a better way for Apple certified developers to manage their certificates in this age of clouds and connectivity. Can’t these certs reside on an Infinite Loop server?

Enough backstory!

If you get the dreaded “Identity used to sign executable no longer valid” error and restarting your Xcode doesn’t work, here are the steps that should fix it for good.

Go to your Apple developer account certificate overview and read carefully and completely about how to manually manage certs and provision devices. Once you understand what you need to do it’s relatively simple.

  1. Revoke and delete all the old certs and profiles of devices you no longer own that have built up over the years. Clean it all up.
  2. Then, following the instructions from Apple, recreate your iOS development and distribution certificates.
  3. Re-provision your iOS devices.
  4. Download your certs and provisioning files and reinstall them into your Mac’s keychain.
  5. Clean and build your app.
  6. Now your app should run nicely on the iOS devices you’ve provisioned.

Note to You: Xcode is no longer managing your certs and profiles. But that’s OK. It was doing a bad job anyway.

Post Script

Why didn’t I post this info to Stack Overflow? Because this is a pretty radical solution, not without risk. SO, for better or worse, has become the place for copy-and-paste solutions that have not aged gracefully over time. Don’t get me wrong–I love Stack Overflow, recommend it, and use it all the time. But sometimes it’s not safe to post an answer to a problem that requires reading comprehension.

Lucky for you and me, my unpopular blog post will probably be the last item in your search for solutions to Apple certification problems.

Categories: Nerd Fun, Programming

Four Tips for Xcode Storyboard Users


Apple’s Xcode Storyboard is both your best friend and your worst enemy when it comes to developing state-of-the-art iOS, Mac OS X, tvOS, and watchOS apps. Sometimes, what would be really hard, like associating a function with a gesture, is quick and easy. Sometimes, what should be easy, like toggling a property, requires hunting down a checkbox in an inspector that only shows up with the proper object selected.

Below are four common problems with Xcode Storyboards and what works for me to resolve them. Xcode Storyboard evolves with every release: these tips work for Xcode 7.1.1.

1. Is your Storyboard rendering as XML source code and not graphics?

Somehow, someway Xcode magically switches the view of a storyboard from Interface Builder – Storyboard to Source Code. No matter, just secondary-click on the name of the storyboard in the Project Navigator and select Open As -> Interface Builder – Storyboard.

What is an Interface Builder? Back when Mac OS X was the NeXTSTEP OS, Interface Builder was a developer tool for creating views. This ancient app lives on in the deep sub-basement of Xcode and sometimes unexpectedly appears.

2. Is your app blank in the simulator?

It might be that you need to select your main view controller in Main.storyboard and set the “Is Initial View Controller” checkbox in the Attributes Inspector.

This usually happens when you have deleted the default view controller on a storyboard. You know, when you want to start over.

If the default storyboard is still around you can drag the Storyboard Entry Point arrow from the original view controller to point to your main view controller.

3. Having a hard time control-dragging between UI objects and the Document Outline?

You’re not alone! You can use the Connections Inspector to drag-create connections without holding down the control key.

In the inspector just drag from the circle to the controller that you want your UI object connected with.

Make sure you have the correct UI object on the storyboard and/or in the Document Outline selected so you connect the right things together.

4. Auto layout constraint values driving you batty?

Setting up constraints is one of the most unintuitive parts of Xcode’s storyboards. Part of the problem is there are several ways to do it and Xcode doesn’t always seem to do what you ask it to do. Don’t worry! You can use the Document Outline to select each individual constraint and adjust its values in the Size Inspector.

Generally I use the Align or Pin menus to initially set the auto layout constraints for a UI object.

Then I use the Resolve Auto Layout Issues menu to make the UI object conform to the initial constraint values with Update Frames.

Finally, since the UI object always looks weird, I select each constraint in the Document Outline, adjust it in the Size Inspector, or delete it and start over.

There you go Xcoders! If I think of anything else I’ll update this post!

Categories: Nerd Fun, Programming

Fun with Core Graphics and Swift Part 2

Hey, you have 10 minutes, don’t you?

Then you can add pinch and rotate gestures to our fake-genigraphics app. I didn’t realize it would be this easy. But sometimes Apple’s developer tools engineering team does something amazing–and gesture recognizers are super amazing.

A good way to start is to read the UIGestureRecognizer Tutorial on Ray Wenderlich’s website. I’ve said good things about Ray before. I don’t know him. However, Ray and his team of technical authors just have a way of spelling things out 100 times more clearly than Apple’s developer documentation.

After you’ve read the tutorial open up ViewController.swift in the fake-genigraphics project. Delete the boilerplate code in the ViewController class and replace it with the following code (from the tutorial):

    @IBAction func handlePinch(recognizer : UIPinchGestureRecognizer) {
        if let view = recognizer.view {
            view.transform = CGAffineTransformScale(view.transform,
                recognizer.scale, recognizer.scale)
            recognizer.scale = 1
        }
    }
    
    @IBAction func handleRotate(recognizer : UIRotationGestureRecognizer) {
        if let view = recognizer.view {
            view.transform = CGAffineTransformRotate(view.transform, recognizer.rotation)
            recognizer.rotation = 0
        }
    }

Each of these functions is decorated with the @IBAction tag. That means we’re going to hook them up to events from our project’s Main.storyboard. This part still vexes me. I want to do everything in code and not use drag and drop to wire up events with functions (and properties with variables). Especially since I am left-handed and, to do a key part of the drag and drop, I have to use the control key, which only exists on the left side of my nifty Apple Magic Keyboard.
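For the record, you can skip the storyboard wiring entirely. Here is a rough sketch of what that looks like, done in viewDidLoad; I’m assuming the big view is exposed to the view controller as a backgroundView property, which isn’t in the original project.

    // Sketch: create the gesture recognizers in code and point them at the
    // same handler functions, instead of dragging in the storyboard.
    // Assumes a backgroundView property on this view controller.
    override func viewDidLoad() {
        super.viewDidLoad()

        let pinch = UIPinchGestureRecognizer(target: self, action: "handlePinch:")
        backgroundView.addGestureRecognizer(pinch)

        let rotate = UIRotationGestureRecognizer(target: self, action: "handleRotate:")
        backgroundView.addGestureRecognizer(rotate)
    }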

Enough whining. Time to drag and drop!

Open up your Main.storyboard in the fake-genigraphics project. In the Object Library on the right find the section near the bottom with all the gesture recognizers. Drag a Pinch Gesture Recognizer out from the library and drop it on the BackgroundView in the storyboard. Now for the hard part: In the Document Outline on the left, control-drag the Pinch Gesture Recognizer entry up and drop it on the ViewController entry. On the pop-up menu that suddenly appears, choose handlePinch under Sent Actions. (So weird.)

Do these same steps again with the Rotation Gesture Recognizer. What you are doing is connecting events from the gesture recognizers to the functions in the ViewController marked with @IBAction.

Guess what? You’re done! Run your app in the simulator and on your iPhone or iPad. You can now zoom in and out and rotate the Background View like a pro. Notice how responsive and smooth the animation is. Alas, there seem to be limits on how far you can zoom out. And you can’t rotate and zoom at the same time. I have trouble with rotating with my left hand. I’m convinced that nobody at Apple is left-handed!

Categories: Nerd Fun, Programming, Uncategorized

The Secret to Swift is Enums

I’ve found the CS193P (Developing iOS 8 Apps with Swift) iTunes U class really helpful in wrapping my old Objective-C head around Apple’s new Swift programming language. Yes, I know we’re at iOS 9, but the fundamentals of the class are still relevant and coaxing the code to compile in iOS 9/Swift 2.0 is a fun little task in and of itself.

The big revelation for me is how core Swift’s enums are to understanding and using the language properly. When Apple decided to improve its development environment it could have done so in a million different directions. Apple could have just patched up Objective-C. Apple could have moved over to Java or Scala or even JavaScript. (I was hoping for JavaScript, as Node.js and ECMAScript 6 have proved that JavaScript is an industrial strength language with a real future.)

So when I learned that Apple created a new language for iOS and Mac OS X (and now watchOS and tvOS) application development I was intrigued and disappointed and hopeful all at the same time. Many parts of Swift seemed easy and cool but some parts were downright intellectually challenging: Classes and Structs? Super fancy Enums? Optionals? Strings without a simple function to get their length? What the heck is going on here?

Swift seemed promising. But it was also changing rapidly over the last year. I fooled around with it but when I hit optionals and weak references I realized the language was not going to sugarcoat the problems of null pointers and memory management–it was trying to make them more tractable.

Optionals were especially vexing. var amount: Double? was not a Double. It was a potential Double. amount! had better be a Double or it would crash my program. If it wasn’t for Xcode’s automated static code analysis I never would have been able to put the ? and ! in the right places after the right variable names. And I didn’t like that feeling.

But then I found CS193P and Professor Paul Hegarty’s remarkable ability to explain things simply.

Enums in Swift are amazing little machines. They have little in common with C enums other than you can use them to define collections of related constants. Enums are types with values that evaluate to themselves. But you can associate a “raw value”, like a number or a string, or a more advanced value, like a tuple, an array, or a dictionary. And with advanced types come functions and closures.

Yikes. These are not your father’s enums. As Professor Hegarty says, Swift enums are types used for when you want to switch between values. And, in point of fact, a Swift enum is defined like a switch structure, with case statements. Each of the cases, unless you’re using raw values, can carry an associated value of a different type.

And boom! That explains Optionals. When you have a value that could be nonexistent or of some type, an Optional is just an enum that wraps it up safely so you can pass it around without crashing. ? and ! are just syntactic shortcuts for accessing the different states an Optional variable might be in. Imagining that Optionals are enums gave me the handle my mind needed to process the whole concept.
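Here is a toy version of the idea you can paste into a playground (the real Optional in the standard library is generic and has a pile of compiler support behind it, but this is the shape of it):

// A pretend Optional: a plain enum whose cases either hold nothing or hold
// a Double as an associated value. This is only a sketch of the concept.
enum MaybeDouble {
    case None
    case Some(Double)
}

let amount = MaybeDouble.Some(9.99)

switch amount {
case .None:
    print("no value here")
case .Some(let value):
    print("the value is \(value)")
}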

In CS193P Professor Paul Hegarty uses an Enum to power his “Calculator Brain”. In any other programming language an Enum would be used to make the code more readable. But in Swift you can use an Enum as the glue to associate states with types with functions. I have a feeling that this Enum-driven mechanism is the cleanest way to write a Swift state machine. It’s still a little rough around the edges and verbose. But it’s a super reusable pattern good not only for the mathematical operations of a calculator but for managing the states of a web server, business application, or game.

//: Playground - noun: a place where people can play

import UIKit
import Foundation

class CalculatorBrain {
    
    // An Op is either an operand (a number) or a named binary operation;
    // each case carries its own associated data.
    enum Op: CustomStringConvertible {
        case Operand(Double)
        case BinaryOperation(String, (Double, Double) -> Double)
        
        var description: String {
            get {
                switch self {
                case .Operand(let operand):
                    return "\(operand)"
                case .BinaryOperation(let symbol, _):
                    return symbol
                }
            }
        }
    }
    
    var opStack = [Op]()
    var knownOps = [String:Op]()
    
    init() {
        func learnOp (op: Op) {
            knownOps[op.description] = op
        }
        
        learnOp(Op.BinaryOperation("*", *))
        learnOp(Op.BinaryOperation("+", +))
        learnOp(Op.BinaryOperation("-", {$1 - $0}))
        learnOp(Op.BinaryOperation("/", {$1 / $0}))
    }
    
    func evaluate() -> Double? {
        let (result, _) = evaluate(opStack)
        return result
    }
    
    // Recursively consume Ops from the end of the stack until a result
    // (or nothing) falls out, returning whatever is left of the stack.
    func evaluate(ops: [Op]) -> (result: Double?, remainingOps: [Op]) {
        if !ops.isEmpty {
            var remainingOps = ops
            let op = remainingOps.removeLast()
            
            switch op {
            case .Operand(let operand):
                return (operand, remainingOps)
            case .BinaryOperation(_, let operation):
                let op1Evaluation = evaluate(remainingOps)
                if let operand1 = op1Evaluation.result {
                    let op2Evaluation = evaluate(op1Evaluation.remainingOps)
                    if let operand2 = op2Evaluation.result {
                        return (operation(operand1, operand2), op2Evaluation.remainingOps)
                    }
                }
            }
        }
        return (nil, ops)
    }
    
    func pushOperand(operand: Double) -> Double? {
        opStack.append(Op.Operand(operand))
        return evaluate()
    }
    
    func popOperand() -> Double? {
        if !opStack.isEmpty {
            opStack.removeLast()
        }
        return evaluate()
    }
    
    func performOperation(symbol: String) -> Double? {
        if let operation = knownOps[symbol] {
            opStack.append(operation);
        }
        return evaluate()
    }
    
    func showStack() -> String? {
        return opStack.map{ "\($0)" }.joinWithSeparator(" ")
    }
}

var cb = CalculatorBrain()

print(cb.knownOps)

cb.showStack()
cb.pushOperand(1)
cb.pushOperand(2)
cb.performOperation("+")
cb.pushOperand(3)
cb.pushOperand(4)
cb.performOperation("/")
cb.pushOperand(5)
cb.pushOperand(6)
cb.performOperation("*")
cb.pushOperand(7)
cb.pushOperand(8)
cb.performOperation("-")
cb.showStack()

You should watch all the videos in CS193P to get the full picture. But in the meantime I’ve created a simplified version of Professor Hegarty’s CalculatorBrain suitable for a Swift Playground (above). As you fool around with the code keep in mind the following points:

  • CalculatorBrain is a class, and when it’s instantiated its initializer creates a dictionary of the operations it knows about. You can add more known operations by writing a little code in one place. In the sample code the operations for addition, subtraction, division, and multiplication are written very concisely using Swift’s shortcut syntax for closures with default parameter names. (This reminds me of the best features of Perl.)
  • The custom type Op is an Enum that is self documenting and has values associated with either a Double (for operands) or a name and a function (for operations). So simple lookups via a switch statement are all that is needed to process input and output a result.
  • The two Evaluate functions are recursive, calling each other until every operand or operation needed in the Ops stack is consumed. Evaluate basically does this:
    • Until the Ops stack is empty pull an item off the end of the stack.
    • If that item is an Operand return it and the remainder of the Ops stack.
    • If that item is an operation dig into the Ops stack until you can get two operands and then execute the function associated with the operation using the operands.
    • Then return the result with the remaining Ops stack.
  • If I had to write this code without using recursion and without Swift’s muscular Enums, well, there would be a lot more code and it would be more “hard coded”.

The only improvement I can think of would be to declare the known operations as a literal Dictionary. But the Swift compiler isn’t ready for that and complains: “Expression was too complex to be solved in reasonable time”. I guess if Swift can’t do something swiftly it doesn’t want to do it at all!
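For reference, the literal-Dictionary version I have in mind looks something like the sketch below (declared inside CalculatorBrain in place of the learnOp calls). Depending on your compiler version, the type checker may or may not choke on it, so treat it as an illustration rather than a drop-in replacement.

// Sketch: the dictionary-literal version of knownOps. Inline operator
// functions and closures in one big literal are what can push the Swift 2
// type checker over its complexity limit.
var knownOps: [String: Op] = [
    "*": Op.BinaryOperation("*", *),
    "+": Op.BinaryOperation("+", +),
    "-": Op.BinaryOperation("-", { $1 - $0 }),
    "/": Op.BinaryOperation("/", { $1 / $0 })
]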

 

Categories: Programming

Nothing Changes More Swiftly than Apple’s Swift Syntax

Look at that beautiful Apple Swift code on the right with closures and shorthand argument names! Makes me smile every time!

I’m enjoying the Stanford University class CS193P: Developing iOS 8 Applications with Swift. It’s free on iTunes U and the instructor Paul Hegarty knows his stuff. He’s a great explainer. I like how first he writes some code in a naive way and then fools around with it, getting it to work and then reworking it like a sculptor with clay. That’s how real programmers work. We write a rough draft of the code, make sure we understand it and then “refactor” it until we’re no longer embarrassed to share it on GitHub.

But even though the class was recorded in 2014 and updated in June of 2015 it’s aging rapidly. We’re already working with Xcode 7 and iOS 9. And all the little minor changes and improvements are adding up.

One problem you might run into is with the second video: More Xcode and Swift, MVC. In this lesson Professor Hegarty wants to show how we can use a form of polymorphism to write clean code. Specifically, Swift supports something that Objective-C did not! In Swift we can have class functions with the same name but different signatures. In the example in the lesson he shows how we can write two versions of performOperation–one that takes two Doubles as arguments and another that takes only a single Double as an argument. Very cool…

Unfortunately this code no longer compiles.

There is an explanation on Stack Overflow (if you can find it). But it’s not really clear as there are a couple of solutions. Perhaps the best solution, like so much on the human side of computer programming, is a matter of taste.

Here is the code and the fix:

    // these functions won't compile with Swift 1.2 and above
    func performOperationWith(operation: (Double, Double) -> Double) {
        if operandStack.count >= 2 {
            displayValue = operation(operandStack.removeLast(), operandStack.removeLast())
            enter()
        }
    }
    
    func performOperationWith(operation: Double -> Double) {
        if operandStack.count >= 1 {
            displayValue = operation(operandStack.removeLast())
            enter()
        }
    }
    // these functions will happily compile
    func performOperationWithTwoOperands(operation: (Double, Double) -> Double) {
        if operandStack.count >= 2 {
            displayValue = operation(operandStack.removeLast(), operandStack.removeLast())
            enter()
        }
    }
    
    func performOperationWithOneOperand(operation: Double -> Double) {
        if operandStack.count >= 1 {
            displayValue = operation(operandStack.removeLast())
            enter()
        }
    }

Because Swift is smart about classes written in Objective-C that you are subclassing from, it forces those subclasses, in this case UIViewController, to obey Objective-C’s limitations. One of those limitations is not allowing polymorphism for methods with the same name but different arguments: both overloads would end up with the same Objective-C selector.

The code is a little uglier, because there is slightly more of it, but it was smart of Apple to be as compatible with Objective-C as possible. At least for a few more years 🙂
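The other solution you’ll see suggested is to keep the overloaded names and hide one of them from Objective-C with the @nonobjc attribute, so the two methods stop fighting over the same selector. A sketch (assuming you only ever call these methods from Swift):

    // Alternative fix (sketch): keep the overloaded name and mark one of the
    // overloads @nonobjc so it never gets an Objective-C selector.
    @nonobjc
    func performOperation(operation: (Double, Double) -> Double) {
        if operandStack.count >= 2 {
            displayValue = operation(operandStack.removeLast(), operandStack.removeLast())
            enter()
        }
    }

    func performOperation(operation: Double -> Double) {
        if operandStack.count >= 1 {
            displayValue = operation(operandStack.removeLast())
            enter()
        }
    }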

Categories: Nerd Fun, Programming

Lisp Cons Cell and List Cheat Sheet

 

A diagram of a nested linked-list and how it is structured in Lisp with Cons Cells, Symbols, Pointers, and our old friends CAR and CDR.


This work is licensed under a Creative Commons Attribution 4.0 International License.

Categories: Programming

Grunt unable to locate JSDoc (wrong installation/environment)

Recently I set up Himins (a server-side Node project) to properly build using Grunt. I got every tool working awesomely in my gruntfile.js except for JSDoc! Grunt gave me this clue:

Running "jsdoc:dist" (jsdoc) task
>> Unable to locate jsdoc
Warning: Wrong installation/environnement Use --force to continue.
Aborted due to warnings.

I had previously installed JSDoc with NPM globally and tested it on the command line. Googling around didn’t turn up much at first. I was not going to move JSDoc to a non-standard location just for Grunt!

This blog post, The Best Document Generator for Node, gave me the clues I needed to resolve the problem!

I just had to add

jsdoc: './node_modules/.bin/jsdoc'

to my gruntfile.js!

Why Grunt doesn’t look for JSDoc there in the first place is weird to me. All its friends are there. The sample code for using the JSDoc Grunt plug-in doesn’t mention this issue.

You can find my full gruntfile.js here.