For those of you who don’t know, I’m a software engineer by trade. I make my living serving as a mediator between man and machine, translating human thought and intent into little ones and zeroes. I’ve been doing it for over a decade now (gah, now I feel old); I’ve seen a lot of source code and used a number of different technologies. There are plenty of people out there who are more experienced than I am and have a wider breadth of knowledge, sure, but I’ve been around the block enough times to recognize some of the patterns and practices that separate good code from cryptic, rotting spaghetti. Describing even just the ones I recognize could fill whole books, though, and is far more than can be meaningfully discussed in a single blog post. But even without delving into too much detail, there are still several high-level concepts that, in my opinion, one must embrace to write truly effective, reliable, and maintainable software.
One of the biggest hurdles that new software engineers have to overcome is learning how to think like a machine. As human beings, we have the fantastic ability to overlook and ignore detail. We don’t have to think about how to make that peanut butter and jelly sandwich we’re craving; we just get the stuff and make it. Computers don’t have that luxury. A computer has to be told every tiny little step, from finding the fridge to identifying the desired ingredients to laying out the workspace, and on and on and on. If you’re lucky, you’ve at least managed to find a third-party library that will handle the actual physical movements involved; otherwise, you’re stuck trying to teach the computer how to walk, too.
The devil is in the details, and software is all details. Assumptions lead to mistakes, and those mistakes can wait a long time before they rear their heads. Hell, just today I fell victim to the assumptions of the coder who came before me. He thought that it would be a good idea to fetch a customer record directly from the ID entered on his form, not realizing that the field can take either an ID or an email address. Finding and fixing that error cost me almost an hour. Even if you can get your program to work, making assumptions when writing code just makes it that much harder for the next person who comes along to figure out what you were doing in order to fix that one-in-a-million bug. If you’re really unlucky, that person will be you in five years.
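To make the fix concrete, here’s a minimal sketch of how that lookup could have handled both cases. All of the names and the in-memory “store” here are invented for illustration; the real code presumably queries a database.

```python
# Hypothetical in-memory customer store, purely for illustration.
CUSTOMERS = {
    "1001": {"id": "1001", "email": "ada@example.com", "name": "Ada"},
}
CUSTOMERS_BY_EMAIL = {c["email"]: c for c in CUSTOMERS.values()}

def fetch_customer(identifier: str) -> dict:
    """Look up a customer by ID *or* email -- the form field allows both."""
    # The original bug: assuming the field always held an ID.
    # Spelling the two cases out makes the assumption impossible to miss.
    if "@" in identifier:
        return CUSTOMERS_BY_EMAIL[identifier]
    return CUSTOMERS[identifier]
```

The point isn’t the particular check; it’s that the code now states what the form actually allows, instead of quietly assuming.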
Respect Your Interfaces
Any piece of software of non-trivial complexity gets big. It follows from the previous point: having to spell out all those tiny little details takes a lot of bits. The only real way to handle this is to break your code into smaller chunks, the most common being the function, which you then snap together like so many Legos. While this is great for simplifying your code, by virtue of breaking large tasks into smaller and more manageable ones, it can turn your code into a bunch of inscrutable black boxes if your functions don’t define strict, detailed interfaces.
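To borrow the sandwich from earlier, a sketch of that decomposition might look something like this (every name here is made up for the sake of the example):

```python
def gather_ingredients() -> list[str]:
    # One small, manageable chunk of the big task.
    return ["bread", "peanut butter", "jelly"]

def spread(ingredient: str, slice_of_bread: str) -> str:
    # Another small chunk, with a clear input and output.
    return f"{slice_of_bread} with {ingredient}"

def make_sandwich() -> str:
    # The big task is just the small pieces snapped together.
    bread, pb, jelly = gather_ingredients()
    return f"{spread(pb, bread)} + {spread(jelly, bread)}"
```

Each piece is trivial on its own; the complexity lives in how they’re composed, which is exactly why the interfaces between them matter so much.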
Interfaces are very much like contracts; in fact, many programmers, myself included, call them exactly that. When you write a function, you’re making a promise. You’re promising everyone who will ever call it that your function will take certain inputs and yield certain outputs. But like real legal contracts, unless you nail down every last little detail, you’re going to end up with behavior you didn’t expect. Some languages let you write better interfaces than others. A well-defined contract specifies exactly what types of data are coming and going, ideally custom data types that represent exactly what you mean (e.g. a geographic longitude) rather than simpler primitive types (e.g. an integer); this lets you know exactly what you’re dealing with, what it means, and what you can do with it. Worse languages just give you the name of the variable but don’t bother to detail anything regarding its type, assuming that you just know. While such languages can be easier and faster to work with in the short term, those kinds of assumptions are a breeding ground for errors and will come back to bite you in the ass, repeatedly. The really bad languages? You don’t even get a name.
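Here’s a sketch of the longitude example as a custom type, written in Python with type hints; the same idea applies in any language that lets you define your own types. The names are mine, not from any particular library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Longitude:
    """Not just 'some number' -- degrees east of the prime meridian."""
    degrees: float

    def __post_init__(self) -> None:
        # The contract is enforced here, not left as an assumption.
        if not -180.0 <= self.degrees <= 180.0:
            raise ValueError(f"longitude out of range: {self.degrees}")

def westernmost(points: list[Longitude]) -> Longitude:
    # Callers know exactly what goes in and what comes out: longitudes,
    # not bare floats that might be latitudes, radians, or anything else.
    return min(points, key=lambda p: p.degrees)
```

A bare integer would have compiled just as happily, but it tells the next person nothing about what the value means or what range it’s allowed to take. The custom type makes the promise explicit and enforces it.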
Next Time, Gadget…
Well, that ended up being a lot more words than I was expecting. There are a few other principles that I think are critical to developing good software, but I think I’m going to leave those for another day. It can be a real series! How about the rest of you code monkeys out there? Share some of your own words of wisdom in the comments.