Gabriel L. Helman

Pascal

Niklaus Wirth has passed away! A true giant of the field, he’s already missed. I don’t have much to add to the other obituaries and remembrances out there, but: perhaps an anecdote.

Like a lot of Gen-X programmers, I learned Pascal as my first “real” programming language in school. (I’m probably one of the youngest people out there to have gotten all three “classic” educational programming languages in school: Pascal, Logo, and BASIC.) This was at the tail end of Pascal’s reign as a learning language, and past the point where it was considered a legitimate language for real work. (I missed the heyday of Turbo Pascal and the Mac Classic.) Mostly, we looked forward to learning C.

Flash forward a bit. Early in my career, I got a job as a Delphi programmer. Delphi was the Pokémon-style final evolution of Borland’s Turbo Pascal, incorporating the Object Pascal Wirth worked out with Apple for the early Mac, along with Borland’s UI design tools. But under the object-oriented extensions and forms, it was “just” Pascal.

The job in question was actually half-and-half Delphi and C. The product had two parts: a Windows desktop app, and an attached device that ran the C programs on an embedded microcontroller.

For years, whenever this job came up in interviews or what have you, I always focused on the C part of the work. Mostly this was practical—focus on the language that’s still a going concern, move past the dead one with a joke without dwelling on it. But also, I had a couple of solid interview stories based on the wacky behavior of C with semi-custom compilers on highly-constrained embedded hardware.

I don’t have any good stories about the Delphi part of the job. Because it was easy.

I probably wrote as much code in Delphi as C in that job, and in all honesty, the Delphi code was doing harder, more complex stuff. But the C parts were hard work; I probably spent three times as long to write the same number of lines of code.

Delphi, on the other hand, was probably the nicest programming environment I’ve ever used. It was a joy to use every day, and no matter how goofy a feature I was trying to add, the language always made it possible. Some of that was Borland’s special sauce, but mostly that was Wirth’s Pascal, getting the job done.

These days, I’m not sure you could convince me to go back to C, but I’d go back to Delphi in a hot second. Thanks, Professor Wirth. You made younger me’s life much easier, and I appreciate it.

Gabriel L. Helman

Favorite Programming Language Features: Swift’s Exception handling with Optionals

I spent the early 20-teens writing mostly Java (or at least JVM-ecosystem) backend code, and then spent the back half of the teens writing mostly Swift on iOS. (Before that? .NET, mostly.) I seem to be on a cycle of changing ecosystems every half decade; I don’t write a tremendous amount of code these days, but when I do, I’m mostly back writing Java. It’s been a strange experience being back. While Java has moved forward somewhat in recent times, it’s still fundamentally the same language it’s been since the late 90s, and so going from Swift back to Java has meant suddenly losing two decades of programming language development.

Let me give you an example of what I’m talking about, and at the same time, let me tell you about maybe my favorite language design feature of all time.

Let’s talk about Exceptions.

For everyone playing the home game, Exceptions are a programming language feature whereby if a section of code—a function or method or whatnot—gets into trouble, it can throw an exception, which then travels up the call stack, until an Exception Handler catches it and deals with the problem.

The idea is that instead of having to return error codes, or magic numbers, or take a pointer to an error construct, or something along those lines, you throw an exception, and then you can put all your error handling code in one place where it makes sense, without having to tangle up the main flow of the program with error condition checking.

They’re pretty great! I remember a lot of grumbling from around the turn of the century about whether exceptions were just GoTos wearing Groucho Marx glasses, but in practice they turned out to be a deeply useful construct.

Like many great ideas, exceptions came out of the LISP world, and then knocked around the fringes of the computer science world for a few decades. I remember them being talked about in C++ in the mid-90s, but I don’t ever recall actually seeing one in live code.

The first mainstream place I saw them was in Java, when that burst onto the scene in the late 90s.

(I could do that exact same setup for garbage collection, too, come to think of it. I’m deeply jealous of the folks who got to actually use LISPs or Smalltalk or the like instead of grinding through fixing another null pointer error in bare C code.)

And Java introduced a whole new concept: Checked Exceptions. Checked exceptions were part of Java’s overall very strong typing attitude: if a method was going to throw an exception, it had to declare that as part of the method signature, and then any place that method was called had to either explicitly catch that exception or explicitly pass it on up the stack.

Like a lot of things in early Java, this sounded great on paper, but in practice was hugely annoying. Because the problem is, a lot of the time, you just didn’t know what to do with the error! So you had to either catch it and hope you dealt with it right, or clog up your whole stack with references to possible errors from something way down lower, which kinda defeated the whole purpose? This got extra hairy if things threw more than one kind of exception, because you had to deal with each one separately, so you ended up with a lot of copy-and-pasted handlers, and the scopes around the try-catch system were always in the way. And, even if you did carefully check and catch each possible exception, Java also includes RuntimeExceptions, which are not checked, so any method in a library you depend on can throw one without you knowing about it.

So in practice, a lot of programmers ended up just using Runtime Exceptions, and that led to a lot of other programmers handling exceptions “Pokémon Style” (“Gotta catch ‘em all!”) and just catching everything without much in the way of handling.

It’s a perfect example of a safety feature being annoying enough that it actually makes the whole thing less safe, because of the work people do to avoid it instead of using it.

So when Microsoft hired “the Delphi Guy” to do a legally distinct do-over of Java in the form of C#, the result was a language with only un-checked exceptions. You could catch them if you wanted to and knew what to do; otherwise they would run on up the call stack and end up in some global error logger. This is the model most other languages from the era used.

And this also kinda sucked, because even if you didn’t really care what the error was, you still wanted to know something happened! So you ended up writing a lot of code where, in testing, you discovered some exception you didn’t even know could happen being thrown from some library and messing up the whole stack.

Because here’s the thing—most of the time you don’t care what the details of the error were, you just want to know whether the whatever-it-was worked. Call to a web service, number format conversion, whatever: we don’t necessarily care why it failed, but we’d sure like to know about it if it did.

And so we come to Swift. Swift is one of those languages that was seemingly built by looking at how every other language handled something and then combining all the best answers. (Personally, I enjoyed tremendously that the people making decisions clearly had all the same taste I did.)

This caused quite a stir when it happened: Swift reintroduced checked exceptions, but with a twist. No longer did you have to say which exceptions you were throwing; a method either declared it threw exceptions or it didn’t. A method that did not declare that it threw couldn’t throw any sort of exception, runtime or otherwise.

Swift has a lot of features that are designed to make the code easier to read and think about, but not necessarily easier to write; not syntactic sugar, syntactic taco sauce, maybe? One of these is that you have to type the keyword try in front of any method call that says it can throw an exception. This really doesn’t have any purpose other than reminding the programmer “hey, you have to do something here.”
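To make that concrete, here’s a minimal sketch of what the declaration and the call site look like; the function name, error type, and feed names are all invented for the example, not anything from a real API:

```swift
// A hypothetical error type, invented for this example.
enum FeedError: Error {
    case unreachable
    case malformed
}

// The signature only says that this can throw; it doesn't say which errors.
func loadFeed(named name: String) throws -> String {
    guard name == "news" else { throw FeedError.unreachable }
    return "headline: hello"
}

// Every call site has to acknowledge the possibility with `try`.
do {
    let feed = try loadFeed(named: "news")
    print(feed)
} catch {
    print("loading failed: \(error)")
}
```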

And this is great, because you get some very cool options. Since individual exceptions are not checked, you can opt in to handling individual exception types or just the fact that there was an exception at all. This dovetails great with the fact that in Swift, exceptions are enumerated types instead of classes, which is a whole article on its own about why that’s also brilliant, but for our current purposes it makes it very simple to go from handling “an error” to handling the specific error type.
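Continuing that invented sketch (FeedError and loadFeed are the made-up names from the earlier example), opting in to one specific case while letting a general catch pick up everything else looks roughly like this:

```swift
do {
    let feed = try loadFeed(named: "sports")
    print(feed)
} catch FeedError.unreachable {
    // Handle the one case we know how to recover from...
    print("feed unreachable, will retry later")
} catch {
    // ...and fall back to a generic "something went wrong" for everything else.
    print("loading failed: \(error)")
}
```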

But! There’s an even better option for most cases, because Swift also has excellent handling for Optionals. Optionals are “just” a reference that can be null, with some excellent extra syntactic support. Now, in Java, any reference can be null, but there really isn’t much in the way of specific support for dealing with null values, so Java code gets filled with line after line of checking to see if something has a value or not.

Swift does a couple of great things here. For starters, any reference that isn’t explicitly defined as an Optional can never be null, so you don’t have to worry about it. But there’s also a bunch of really easy syntax to look at an Optional, and either get the “real” value and move on, or deal with the null case. My favorite detail here is that the syntax hangs a “?” off the type or the name (a maybe-there Double is a Double?, and reaching through one reads as deltaPercent?), so even glancing over a screen of code it reads like “maybe this is here?” Building on that, Swift has a guard construct that you can use to check an incoming Optional, handle the null case and exit the method, or get the real value and move on. So it’s a pretty common idiom in Swift code to see a pile of null handling at the start of a method, and then the “normal” flow after that.
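A rough sketch of that idiom, with type and field names made up for the example:

```swift
// Names invented for the example.
struct Report {
    var deltaPercent: Double?   // may or may not have a value
}

func describe(_ report: Report?) -> String {
    // Deal with the "maybe it isn't here" cases up front and bail out early...
    guard let report = report, let delta = report.deltaPercent else {
        return "no change recorded"
    }
    // ...then the "normal" flow works with real, non-optional values.
    return "changed by \(delta)%"
}

print(describe(Report(deltaPercent: 3.5)))  // changed by 3.5%
print(describe(nil))                        // no change recorded
```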

Combining Optionals and the new approach to exceptions, Swift provides a syntax where you can call a method that throws and, instead of explicitly handling any exceptions, just get the potential result back as an Optional. If an exception happens, you don’t need to do anything; the optional comes back null. Which is fantastic, because like we said earlier, most of the time all we really care about is “did it work?” So the optional becomes a way to signal that, and you can use the robust Optional handling system to handle the “didn’t work” case without needing to catch an exception at all.
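In code, that’s the `try?` form; here’s a small sketch reusing the hypothetical loadFeed from the earlier example:

```swift
// (loadFeed is the invented throwing function from the earlier sketch.)

// `try?` turns "an exception was thrown" into "the optional is nil".
if let feed = try? loadFeed(named: "news") {
    print(feed)
} else {
    print("didn't work; moving on")
}

// Or, combined with guard, when "didn't work" means "stop here".
func latestHeadline() -> String? {
    guard let feed = try? loadFeed(named: "news") else { return nil }
    return feed
}
```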

This also encourages what I think is a very solid design approach, which is to treat a method as having two possible kinds of return values—either the successful value or an error; you have your choice of receiving the error as a specific exception type, or just as a null value. (And it subtly underscores that a method with no return value but that can throw an exception is probably a mistake.)

Brilliant!

It’s so great, because you can swap out all the error-case handling for the no-value case, and just get on with it. Systems that enforce strict error handling do it with an almost moral tone, like you’re a bad person if you don’t explicitly handle all possible cases. It’s so nice to use a system that understands there are only so many hours in the day, and you have things to get done before the kids get home.

Exceptions-to-Optional really might be my favorite language feature. I’m missing it a lot these days.

Gabriel L. Helman

Premature Quote Sourcing

“Premature Optimization is the Root of all Evil.” — Donald Knuth

That’s a famous phrase! You’ve probably seen that quoted a whole bunch of times. I’ve said it a whole bunch myself. I went down a rabbit hole recently when I started noticing constructions like “usually attributed to Donald Knuth” instead of crediting the professor directly. And, what? I mean, the man said it, right? He’s still alive, this isn’t some bon mot from centuries ago. So I started digging around, and found a whole bunch of places where it was instead attributed to C. A. R. Hoare! (Hoare, of course, is famous for many things but mostly for inventing the null pointer.) What’s the deal?

Digging further into the interwebs, that’s one of those quotes that’s taken on a life of its own, and just kinda floats around as a free radical. The kind of line that shows up on inspirational quote lists or gets tacked onto the start of documents, divorced from its context, like that time Abraham Lincoln said “The problem with internet quotes is that you cannot always depend on their accuracy.”

But, this seems very knowable! Again, we’re talking about literally living history. Digging even further, if people give a source it’s usually Knuth’s 1974 paper “Structured Programming with go to Statements.”

“Structured Programming with go to Statements” is one of those papers that gets referenced a lot but that not a lot of people have read, which is too bad, because it’s a great piece of work. It’s shaped like an academic paper, but, as the kids say today, it’s really an extended shitpost, taking the piss out of both the then-new approach of “Structured Programming”, specifically as discussed in Dijkstra’s “Go to Statement Considered Harmful”, and the traditionalist spaghetti-code enthusiasts. It’s several thousand words worth of “everyone needs to calm down and realize you can write good or bad code in any technique” and it’s glorious.

Knuth is fastidious about citations, sometimes to the point of parody, so it seems like we can just check that paper and see if he cites a source?

Fortunately for us, I have a copy! It’s the second entry in Knuth’s essay collection about Literate Programming, which is apparently the sort of thing I keep lying around.

In my copy, the magic phrase appears on page 28. There isn’t a citation anywhere near the line, and considering that chapter has 103 total references that take up 8 pages of endnotes, we can assume he didn’t think he was quoting anyone.

Looking at the line in context makes it clear that it’s an original line. I’ll quote the whole paragraph and the following, with apologies to Professor Knuth:

There is no doubt that the “grail” of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

Yet we should not pass up our opportunities in that critical 3%. Good programmers will not be lulled into complacency by such reasoning, they will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.

Clearly, it’s a zinger tossed off in the middle of his thought about putting effort in the right place, and it seems obviously original to the paper in question. So why the confused attributions? The answer turns out to be pretty simple.

With some more digging, it seems that Hoare liked to quote the line a lot, and at some point Knuth forgot it was his own line and attributed it to Hoare himself. I can’t find a specific case of that on the web, so it may have been in a talk, but that seems to be the root cause of the tangled sourcing.

That’s kind of delightful; imagine having tossed off so many lines like that that you don’t even remember which ones were yours!

This sort of feels like the kind of story where you can wrap it up by linking to Lie Bot saying “The end! No moral.”

Except. As a final thought, that warning takes on a very different tone when shown in context. It seems like it gets trotted out a lot as an excuse to not do any optimization “right now”; an appeal to authority to follow the “make it work, then make it fast” approach. I’ve used it that way myself, I must admit; it’s always easy to argue that it’s still “premature”. But that’s not the meaning. Knuth is saying to focus your efforts, not to waste time with clever hacks that don’t do anything but make maintenance harder, and to measure and really know where the problems are before you go fixing them.

Gabriel L. Helman

Why didn’t you just use…

It’s an embarrassment of riches in big open world video games this year. I’m still fully immersed in building bizarre monster trucks in Zelda Tears of the Kingdom, but Bethesda’s “Skyrim in spaaaace”-em-up Starfield is out.

I’ve not played it yet, so I’ve no opinion on the game itself. But I am very amused to see that, as always with a large game release, the armchair architects are wondering why Bethesda has continued to use their in-house engine instead of something “off the shelf,” like Unreal.

This phenomenon isn’t restricted to games, either! I don’t have a ton of game dev experience specifically, but I do have a lot of experience with complex multi-year software projects, and every time one of those wraps up, there’s always someone who looks at what got built and asks “well, why didn’t you just use this other thing?”

And reader, every time, every single time, over the last two decades, the answer was always “because that didn’t exist yet when we started.”

Something that’s very hard to appreciate from the outside is how long these projects actually take. No matter how long you think something took, there was a document, or a powerpoint deck, or a whiteboard diagram, that had all the major decisions written down years before you thought they started.

Not only that, but time and success have a way of obscuring the risk profile from the start of a project. Any large software project, whatever the domain or genre, is a risky proposition, and the way to get it off the ground is to de-risk it as much as possible. Moving to new 3rd party technology is about as risky a choice as you can make, and you do that as carefully and rarely as possible.

I don’t have any insight into either Unreal or Bethesda’s engine, but look. You’re starting a project that’s going to effectively be the company’s only game for years. Do you a) use your in-house system that everyone already knows and that you know for a fact will be able to do what you need, or b) roll the dice on a stack of 3rd party technology? I mean, there are no sure things in life, but from a risk reduction perspective, that’s as close to a no-brainer as it gets.

At this point, it’s worth publishing my old guideline for when to take after-the-fact questions seriously:

  • “Why didn’t you use technology X?”—serious person, has thought about the tradeoffs and is curious to know what led you to make the choices you did.
  • “Why didn’t you JUST use technology X?”—fundamentally unserious person, has no concept of effort, tradeoffs, design.

Like, buddy, if I could “just” do that, I’d have done it. Maybe there were some considerations you aren’t aware of, and that probably aren’t any of your business?

Thus what I part-jokingly call Helman’s Third Law: “no question that contains the word ‘just’ deserves consideration.”

Gabriel L. Helman

Break; considered confusing

Currently filled with joy and a deep sense of fellowship about this toot from James Thomson:

To be clear: Thomson is a long-time indie Mac developer, author of PCalc and the much-missed DragThing. He is, without a doubt, a Good Programmer.

I love this toot because it’s such a great example of how we all actually learn things in this craft—we aren’t taught so much as we accrete bits of lore over time. Everything I took an actual class in was obsolete by the turn of the century, so instead I have a head full of bits of techniques, cool facts, “things that worked that one time”—lore. We can’t always remember where we picked this stuff up, and often it’s half-remembered, context-free. It’s not funny that he was wrong, it’s amusing that he knew something that didn’t exist. How many cool tricks do I know that don’t exist, I wonder?

Mainly this caught my eye, though, because `break` is a statement I try to avoid as much as possible. Not that break isn’t valid—it is!—but I’ve learned the hard way that if I find myself saying “and now I’ll break out of the loop” (or, lord help me, continue), I am absolutely about to write a horrible bug. I actually made a bad decision about five decisions back, my flow control is all messed up, and instead of breaking I need to take a deep breath and go for a walk while I think about what the right way to approach the problem was.

This is the flip side to lore—I think we all have areas where we haven’t collected enough lore, and for whatever reason we instinctively avoid them, so we don’t get ourselves into trouble.
