### The Skills Poor Programmers Need

Wed, Aug 14, 2019 5-minute read

This entire post is a retort to a blog post that can be found here: https://justinmeiners.github.io/the-skills-programmers-lack/

It's a ranty retort to a blog post that I actually agree with, more or less.

## Self-balancing Trees of Self-balancing Trees

Does the above subtitle sound crazy? Don't do it. Unless you have to. But you should never have to. Until you have to. And when you have to, then you should do nothing except exactly that.

You will notice that the most common theme in the workplace (and perhaps in life in general) is multi-objective optimization. Not the answer you were expecting? Neither was I. You will more commonly hear people use the buzzword "tradeoffs". But go ahead and do a little reading on the topic and you will see the clear mathematical impossibilities, improbabilities, and/or sheer infinite complexities that are our reality.
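To make "tradeoffs" concrete, here is a minimal Python sketch of Pareto optimality, the textbook formalization of multi-objective optimization. The design names and numbers are entirely made up for illustration; the point is that several options can each be "best" with no single winner.

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every objective and
    strictly better on at least one (lower is better for both here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(options):
    """Keep only the options that no other option dominates."""
    return [o for o in options if not any(dominates(p, o) for p in options if p != o)]

# (dev_time, runtime_cost) for three hypothetical designs -- all names
# and numbers invented for illustration.
designs = {"quick_hack": (1, 9), "balanced": (4, 4), "overengineered": (9, 1)}
front = pareto_front(list(designs.values()))
# All three survive: no design beats another on both objectives at once.
```

No option on the front beats another on both objectives simultaneously, which is exactly why the "true optimum" is really a question of which objective you decide to care about.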

In your work projects, the objectives won't align, and they don't have to, ever. The true optimum could be complete garbage for any individual objective, and something a developer should never actually choose to aim for.

So why not instead make it a pattern to willingly and purposefully opt for the outcome in which a subset of the objectives is selected, in order to maximize a most subjective, irrational, yet pleasurable outcome?

For example, use some game theory to make choices that rig the system in your favor for things like getting a raise, getting a promotion, or getting your best friend hired. Why not? Everyone else is doing it, and so should you.

In other words, sometimes it's the better move to have that self-balancing tree of self-balancing trees because there are other things to optimize for.

## Memorize your data structures, assembly code, and hardware architectures

Everyone knows that linked lists have better performance than arrays for certain things, because time complexity. Unless, of course, you are working with a real-world CPU, where a little thing called memory locality becomes critical to a program's performance.
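Here is a tiny, hedged sketch of the comparison in Python (where per-object overhead muddies the cache effect that a C benchmark would show far more dramatically). Both traversals are O(n); the difference is memory traffic: the list's references sit in one contiguous block, while each linked-list node is a separate heap allocation reached by pointer chasing.

```python
class Node:
    """A classic singly linked list node -- one heap allocation per element."""
    __slots__ = ("value", "next")

    def __init__(self, value, next=None):
        self.value = value
        self.next = next

N = 100_000
arr = list(range(N))  # contiguous array of references

# Build the equivalent linked list (in reverse so head holds 0).
head = None
for v in reversed(arr):
    head = Node(v, head)

def sum_list(node):
    """Walk the list node by node, chasing a pointer per element."""
    total = 0
    while node is not None:
        total += node.value
        node = node.next
    return total

# Same answer, very different memory access patterns.
assert sum(arr) == sum_list(head)
```

Same asymptotic complexity, same result; on real hardware the contiguous version wins because the prefetcher can see where you're going next, and the pointer-chasing version makes it guess.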

Memorizing stuff (like data structures, algorithms, architectures, the numbers programmers should memorize, etc.) is useful. In fact, it would be really useful to have a minimal knowledge of every single thing that has ever existed, currently exists, and will ever exist. Or better yet, to just have it all memorized.

That was an unnecessarily exaggerated statement (as I am sure you have already proudly noted to yourself), BUT its purpose was to highlight the missing logic in assuming guaranteed perfect brain-memory and real-time recollection of everything that was learned prior to performing a software engineering task. That assumption is perhaps even more lacking in logic than the prior statement was bathing in exaggeration.

So let's burn that assumption with fire ...

For example, why is using a CLI faster than a GUI? Is it because electricity travels faster when using a CLI than when using a GUI? No? Correct. It is because the CLI is only as fast as your brain's speed at remembering and typing the appropriate command (i.e. using the CLI can be nearly as fast as neural signals traveling across synapses). HOWEVER, a GUI eliminates the brain's potential "cache miss" (i.e. not having memorized, or not quickly remembering, a command) by guaranteeing a way to visually perform a desired action, at the cost of having to use both a mouse (gasp) and a keyboard in sequence. The GUI is always slower, except when compared to those gosh dern CLI-brain cache misses. In those cases it is just as fast or faster (ignoring a much poorer GUI vs CLI implementation).

Do you even remember the point of that excessively long example? Neither do I. And that's the point. The purpose of the entire above paragraph is to illustrate that you won't remember everything or be able to recall everything on cue (or even correctly, but we won't even touch on that).

## Understand every line of code

Yes, do this. But don't actually do it. The literal implementation of this piece of advice would be NP-Complete.

Imagine it: N is the Number of lines of code you have written, and P is the number of lines of code a Particular line of code depends on. Now repeat that for each line of code and each line of code that those lines of code depend on, recurse until there are no more dependencies, and do not stop until you can recite every Intel and Nvidia doc from heart. THEN you have to walk that DAG back up to your original code and put it ALL together in your head until you have the COMPLETE picture. See? NP-Complete.
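The complexity-class abuse is a joke, but the walk itself is real: it's a transitive-closure traversal of a dependency DAG. A toy Python sketch, with an entirely made-up dependency graph, shows how fast one innocent function drags in the whole world:

```python
def units_to_understand(unit, deps, seen=None):
    """Collect every unit reachable from `unit` through `deps` (DFS).
    `deps` maps a unit to the units it directly depends on."""
    if seen is None:
        seen = set()
    if unit in seen:
        return seen
    seen.add(unit)
    for d in deps.get(unit, []):
        units_to_understand(d, deps, seen)
    return seen

# A fictional dependency graph -- every name here is invented.
deps = {
    "my_function": ["stdlib_call", "framework_helper"],
    "stdlib_call": ["libc", "syscall"],
    "framework_helper": ["stdlib_call", "runtime"],
    "syscall": ["kernel"],
    "kernel": ["cpu_microcode"],
}
closure = units_to_understand("my_function", deps)
```

One function with two direct dependencies transitively "requires understanding" eight units in this toy graph, and a real graph bottoms out somewhere around microcode.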

You have just done the work that no serious software developer ever does (during work hours, at least). There is a reason why we have design patterns, best practices, testing, and code reviews. No one knows every single thing that a piece of code does. These practices are almost never perfect, but they do not have to be. The objective we are primarily optimizing for is making the shareholders of the company richer. A sad and pathetic reality, but it is reality nonetheless. It's easier, simpler, and all-around better to live in reality and work with it than to attempt to live in a fantasy while constantly performing the impossible task of maintaining said fantasy.

Diligence in understanding why best practices and testing frameworks are the way they are can be very useful and insightful. But this is not mathematics. It's barely a science, and it's sort of an art. No software practice is rigorously proven, most things are anecdotal, and most practices are like children: much more beautiful in the eyes of their parents.