Can we improve the resiliency of computer systems by mimicking more squishy and carbon-based systems?
Computer systems are fragile and inherently flaky. Many things can quickly take down a web server: hardware failures, software bugs, infrastructure misconfiguration, and unexpected spikes in user traffic. For this reason, enormous engineering effort has been invested in making complex computer systems more durable, resilient, and adaptable to changes in their environment. We take for granted that these systems just seem to work until they don’t.
In contrast, our human bodies and other multicellular organisms are relatively resilient to environmental stressors. While not invulnerable, human cells can adapt to significant changes in their external and internal environments. Bugs in our genetic code? We can thank our mismatch repair proteins for catching mistakes in DNA replication and fixing them. Our immune system continuously runs “health checks” on our somatic cells and takes cells “offline” if they start misbehaving. These processes can be imperfect, but failures of these homeostatic and repair pathways are considered abnormalities, not the default. As large, multicellular organisms, we wouldn’t have made it this far if every time any one of our cells died, we did as well! Homeostasis - the ability of the human body to maintain equilibrium despite external challenges - is a remarkably intricate process powered by complex centralized and decentralized messaging pathways among cells.
On mimicking biological systems
One interesting aspect of modern distributed computer systems is how they have evolved to mimic biological systems in order to improve their resiliency. For example, the use of containerization and container orchestrators loosely resembles the architecture of cellular organisms, where individual containers (or “cells”) communicate with each other through messaging. This allows for a more flexible and scalable system, where any specific container can be safely removed and replaced without impacting the overall functioning of the system.
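As a loose sketch of that “replaceable cells” idea, here’s a toy supervisor that swaps out unhealthy workers without stopping the system. All the names here (`Container`, `Supervisor`, `reconcile`) are hypothetical illustrations, not a real orchestrator’s API:

```python
import random

class Container:
    """A toy 'cell': a worker that can be healthy or not."""
    def __init__(self, container_id: int):
        self.container_id = container_id
        self.healthy = True

    def handle(self, request: str) -> str:
        return f"container {self.container_id} served {request}"

class Supervisor:
    """A toy orchestrator that keeps a fixed number of healthy replicas."""
    def __init__(self, replicas: int):
        self.next_id = 0
        self.pool = [self._spawn() for _ in range(replicas)]

    def _spawn(self) -> Container:
        self.next_id += 1
        return Container(self.next_id)

    def reconcile(self) -> None:
        # Replace any container that failed its "health check" with a fresh one.
        self.pool = [c if c.healthy else self._spawn() for c in self.pool]

    def serve(self, request: str) -> str:
        self.reconcile()
        return random.choice(self.pool).handle(request)

sup = Supervisor(replicas=3)
sup.pool[0].healthy = False   # simulate one "cell" dying
print(sup.serve("GET /"))     # requests still get served; the dead cell was replaced
```

The key property is that callers never depend on a *specific* container existing, only on the pool as a whole, which is roughly how an organism survives individual cell death.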
This focus on messaging between systems isn’t new in computer science. Alan Kay, a computer scientist with a background in molecular biology, first introduced the concept of object-oriented programming (OOP) based on the idea of message passing between distinct objects, similar to how living cells communicate with each other using hormones and other signaling molecules. While most programmers now associate OOP with class inheritance, Kay’s original intent was to explore the potential of this kind of cell-like message passing in computer systems.
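A minimal sketch of that message-passing style, with hypothetical class names: objects don’t reach into each other’s internals, they respond to named messages, and unknown “signals” are simply ignored, the way a cell without the right receptor ignores a hormone:

```python
class Cell:
    """Dispatches incoming messages to handlers; unknown messages are ignored."""
    def receive(self, message: str, payload=None):
        handler = getattr(self, "on_" + message, None)
        if handler is None:
            return f"ignored {message}"  # no receptor for this signal
        return handler(payload)

class MuscleCell(Cell):
    def on_adrenaline(self, payload):
        return "contracting harder"

cell = MuscleCell()
print(cell.receive("adrenaline"))  # -> contracting harder
print(cell.receive("insulin"))     # -> ignored insulin
```

This is only a gesture at Kay’s idea (Smalltalk takes it much further), but it shows the core inversion: the receiver decides how, and whether, to respond.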
The downside of copying biology
If we were to keep pushing this metaphor, we would eventually stumble upon the question: can modeling our computer systems against cellular models have negative consequences?
For example, in multicellular organisms, a well-known failure mode of cell replication and proliferation is cancer - where the excessive proliferation of one cell line, usually due to damage to the cell’s DNA or to a DNA repair pathway, allows for uncontrolled growth that drains resources from the rest of the organism. One manifestation of this in humans is the Warburg effect - cancer cells consume an abnormally large amount of glucose compared to their healthy counterparts.
Does a computational equivalent of cancer exist in computer systems?
There has been at least one case where a self-referential loop bug in some serverless code caused runaway cloud costs, although this doesn’t seem to be a common occurrence. This might be because changes to a computer program’s “DNA” (code) are intentionally designed, unlike the random mutations that can happen to a cell’s genome.
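As a toy illustration of that runaway-loop failure mode (not any real serverless API), here is a handler that re-invokes itself, plus the kind of depth guard that bounds the damage. The `handler` function and `depth` field are hypothetical:

```python
invocations = 0  # stand-in for the cloud bill

def handler(event: dict) -> None:
    """A self-invoking 'function' with a recursion guard."""
    global invocations
    invocations += 1
    depth = event.get("depth", 0)
    if depth >= 5:  # safeguard: refuse to recurse past a fixed depth
        return
    # Without the guard above, this self-invocation would loop until
    # something external (a budget alarm, a human) stopped it.
    handler({"depth": depth + 1})

handler({})
print(invocations)  # -> 6
```

The biological analogue of the guard is a tumor suppressor: a check that halts proliferation regardless of what the “code” upstream is asking for.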
If we are copying cellular-based systems anyways, can we also mimic some more of their repair pathways?
On the other hand, can computer systems tap into the self-healing aspects of biological multicellular organisms? Health checks for containers in container orchestrators are a good start at replicating this functionality. But part of the reason we as humans don’t develop cancer every time we acquire a mutation (despite collecting many mutations throughout our lives) is that we have multiple layers of repair processes and immune system checks that catch issues before they can become problematic.
I’m not sure what this would look like in a computer system, but maybe we’ll see something develop in the next few years.
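One loose sketch of what layered defenses might look like, with entirely hypothetical check names: each layer is an independent test, and a fault only does harm if every layer misses it, roughly analogous to DNA repair plus immune surveillance:

```python
def checksum_ok(data: bytes) -> bool:
    # Layer 1: detect corruption at the storage/transport level (toy check).
    return len(data) > 0

def schema_ok(record: dict) -> bool:
    # Layer 2: validate structure before the record is used.
    return "id" in record and "value" in record

def anomaly_ok(record: dict) -> bool:
    # Layer 3: flag behavior that is well-formed but suspicious.
    return record["value"] < 1_000_000

def admit(data: bytes, record: dict) -> bool:
    """A record is admitted only if every layer passes."""
    return checksum_ok(data) and schema_ok(record) and anomaly_ok(record)

print(admit(b"payload", {"id": 1, "value": 42}))      # -> True
print(admit(b"payload", {"id": 2, "value": 10**9}))   # -> False (caught by layer 3)
```

The interesting part isn’t any single check - it’s that the layers fail independently, so the system’s overall error rate is much lower than any one layer’s.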
Obviously, there are places where this metaphor falls flat - I’ve made some simplifications in my discussion above for the sake of comparison. I’ve always wondered whether the similarity between these biological and computer systems is a case of convergent evolution or direct inspiration. It wouldn’t be the first time biology has inspired computational equivalents (neural networks, for example). Overall, the idea of using biological systems as a model for improving the resiliency of computer systems is intriguing, and I hope to see more developments in this space!