Modern Computing is NOT Good at Executing Programs

Sat, Apr 3, 2021 4-minute read

The von Neumann architecture is designed for single-program execution. Turing machines are, conceptually, single-program machines. So the foundations of computer science were not built for thinking in terms of multiple processes that are entirely independent, complementary, or even cooperative. Everything in modern computing is also premised on "logic".

Despite this seeming simplicity, there are still no guarantees for even the simplest of systems: print(1+1) can fail outright, have a cosmic ray flip a bit halfway through execution and return an incorrect result, or be part of a catastrophic apocalyptic scenario in which the computer it is running on is wiped out of existence mid-execution. There are likely infinitely many other scenarios in which it fails, so why do we feel like simple programs have more guarantees than much more complicated systems? Bayesian thinking. Simple programs have rarely failed on us for some arbitrary reason thus far, so we assume that they never do (and thus form a subtle belief that they can't -- at least in our perceptions and actions). In fact, this is such an unspoken truth that this follow-up point is even more unspoken: distributed systems are not more frail and prone to failure than simple programs; they are actually much more robust and far less likely to fail (if designed well). The one and only key here is the choice of distributed architecture design.
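To make the reliability argument concrete, here is a minimal sketch (my own toy illustration, not something from this post; flaky_add and replicated_add are hypothetical names): an unreliable computation is run on several independent "machines" and the results are majority-voted. The ensemble is wrong only when most replicas fail simultaneously, so its failure rate drops sharply as replicas are added -- the guarantee comes from the architecture, not from any individual copy being trustworthy.

```python
import random
from collections import Counter

def flaky_add(a, b, failure_rate=0.01):
    """Stand-in for print(1+1) on unreliable hardware:
    with some small probability the result gets a flipped bit."""
    result = a + b
    if random.random() < failure_rate:
        result ^= 1 << random.randrange(8)  # simulate a cosmic-ray bit flip
    return result

def replicated_add(a, b, replicas=5):
    """Run the same computation on several 'machines' and majority-vote.
    The ensemble is wrong only if a majority of replicas fail at once."""
    votes = Counter(flaky_add(a, b) for _ in range(replicas))
    return votes.most_common(1)[0][0]

# Rough comparison of observed failure rates.
trials = 100_000
single_errors = sum(flaky_add(1, 1) != 2 for _ in range(trials))
voted_errors = sum(replicated_add(1, 1) != 2 for _ in range(trials))
print(f"single copy errors: {single_errors / trials:.4%}")
print(f"5-way vote errors:  {voted_errors / trials:.4%}")
```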

There is one thing that modern computing is really bad at doing effectively: running multiple copies of a single program. Or, more generally, running multiple copies of multiple programs that are all interacting and productive (where "productive" is allowed to be entirely subjective, but requires at least one subject to observe productivity from their perspective). In distributed architectures we do this by spinning up entirely new machinery just to run a few extra, mostly independent, single programs. Sometimes we get clever and partition resources via containerization or the like, but it's still just that: a simple partitioning or managed sharing of resources for singular programs. Using simple computational resource-sharing algorithms on simple single-resource architectures means that we are not really executing multiple programs as effectively as possible. And no, I am not referring to the efficiency of a scheduling algorithm. I am referring to the fundamentals of the "single-CPU, single-GPU, single-RAM, single-bus, single-anything, and then alternate which program is using it" computer/system architectures of the PAST 50+ YEARS.
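To see what "a simple partitioning or managed sharing of resources" looks like in practice, here is a minimal sketch (busy_work and run_copies are hypothetical names made up for illustration): it launches several copies of the same CPU-bound program and times them. Once there are more copies than cores, the copies just alternate on the same hardware, so wall-clock time grows roughly linearly with the number of copies -- resource sharing, not a new kind of execution.

```python
import multiprocessing as mp
import os
import time

def busy_work(n=5_000_000):
    """A stand-in 'single program': a pure CPU-bound loop."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_copies(copies):
    """Run several copies of the same program at once and time them."""
    start = time.perf_counter()
    with mp.Pool(processes=copies) as pool:
        pool.map(busy_work, [5_000_000] * copies)
    return time.perf_counter() - start

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    for copies in (1, cores, 4 * cores):
        elapsed = run_copies(copies)
        # Beyond one copy per core, the scheduler simply time-slices the
        # same fixed pool of hardware among the copies.
        print(f"{copies:3d} copies on {cores} cores: {elapsed:.2f}s")
```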

Well, maybe we just need to add MORE cores? MORE GPUs? Rewrite Kubernetes in assembly?!?! ... Eh.

The point is: if distributed systems have more actual guarantees than a single program, then there are clear reasons why one would want giant and highly complex systems like we see in biology -> they produce and guarantee behaviors that you can neither generate nor simulate with existing "singular program" architectures. Even simple protein-protein interactions within the cell (literally just molecules bouncing off of each other almost arbitrarily) go beyond the scope of anything a (super-)computer can do in anywhere close to the same amount of time.
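For a sense of scale, here is a deliberately naive sketch (naive_interaction_step is a hypothetical toy, nothing like real molecular dynamics): checking every pair of particles for a possible collision costs O(n^2) work per time step, and a cell contains vastly more interacting molecules than this, stepped at extremely fine time resolution. Even this toy version, with a few thousand particles and a single step, keeps a serial CPU busy.

```python
import random

def naive_interaction_step(positions, radius=1.0):
    """One brute-force step: check every pair of particles for proximity.
    Real molecular dynamics is far more involved; this only shows the
    O(n^2) pairwise cost that dominates a serial simulation."""
    n = len(positions)
    contacts = 0
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(i + 1, n):
            xj, yj, zj = positions[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2 < radius ** 2:
                contacts += 1
    return contacts

# 2,000 particles already means ~2 million pair checks per step; a real
# cell has orders of magnitude more molecules and needs countless steps.
particles = [
    (random.uniform(0, 100), random.uniform(0, 100), random.uniform(0, 100))
    for _ in range(2_000)
]
print(naive_interaction_step(particles))
```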

The key here is architecture. We have built CPUs that process binary logic (specific binary logical operations, btw) faster than any other entity in the entire known Universe ... But that is a singular, highly optimized architecture. Biology seems to generate architecture variations almost by default. I would not be surprised if a tiny "biological CPU" and "viral GPU" species exists somewhere in the Universe. Tiny little cells/proteins forming tiny little interconnected logic gates? Come on, that one is easy compared to the other stuff we see in biology, like tiny shrimp capable of generating plasma, etc.

We really want to get better at building computational architectures that can execute many different (kinds of) programs, in both isolated and cooperative fashion, in highly performant and robust ways.

The moment we give up on everything having to be only "logical" is the moment we free ourselves into the next infinity.


If you are left asking something like "ok, so what?", the answer is: artificial life. That is what.