Layers and Compartments
- Posted: 2/24/25
- Category: Electronics Software
- Topics: Artificial Intelligence
Back in the day, a server was called a computer, and it took up the whole room. Today, that same room holds hundreds of servers, and if you count the cores within each of those servers, there may be thousands of computers in that one room. They call it a server farm.
Much of this growth relies on two properties of software: the ability to layer software on software, and the ability to compartmentalize it into reusable chunks. Focus a body of software on a single concept, put a wrapper around it, and that package, also called a library, becomes a building block upon which additional software can be layered and an additional level of sophistication stacked.
Think of it as a pyramid, but one that is upside down: the narrow point at the bottom is the layer upon which everything else relies. That’d be the machine-language coding of the computer’s instruction set. The next layer up would be the compartments of the operating system: the file system, process and task management, and inter-process communications, for example. Above that would be the networking that allows one computer to talk to another, and the windowing subsystem that draws the rectangles and icons on your screen that respond to your finger-taps and mouse clicks. And so on.
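That wrapping-and-stacking can be sketched in a few lines of Python. This is a toy illustration only; every class and method name here is invented for the example, not taken from any real library:

```python
# Layer 1: "machine-level" storage, just a raw block of bytes.
class BlockDevice:
    def __init__(self, size: int):
        self._data = bytearray(size)

    def write(self, offset: int, chunk: bytes) -> None:
        self._data[offset:offset + len(chunk)] = chunk

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self._data[offset:offset + length])


# Layer 2: a toy "file system" compartment wrapped around the device.
# It hides offsets and lengths behind names.
class FileSystem:
    def __init__(self, device: BlockDevice):
        self._device = device
        self._files = {}       # name -> (offset, length)
        self._next_free = 0

    def save(self, name: str, content: bytes) -> None:
        self._device.write(self._next_free, content)
        self._files[name] = (self._next_free, len(content))
        self._next_free += len(content)

    def load(self, name: str) -> bytes:
        offset, length = self._files[name]
        return self._device.read(offset, length)


# Layer 3: an application that never touches bytes or offsets at all.
fs = FileSystem(BlockDevice(1024))
fs.save("greeting.txt", b"hello, layers")
print(fs.load("greeting.txt").decode())  # -> hello, layers
```

Each layer talks only to the one directly beneath it, which is exactly what lets the layer above stay simple.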
Again, “back in the day,” this layering involved a very small number of packages. You could probably count them on one hand, or two for sophisticated systems.
But today, that layering and compartmentalizing goes into the hundreds, or more likely the thousands, and the net result is the incredible sophistication and diversity we experience in our daily lives.
AI (Artificial Intelligence) adds to this, but in a new way. In addition to more layering of software, AIs add massive amounts of analyzed data. It is the relationships within that information that, when applied to a question by the AI’s layers of software, produce something quite astonishing.
We are witnessing the birth of non-human intelligence. Where it goes in the future, how much we allow it into our daily lives, how much we choose to rely on it, be controlled by it, or be subjugated, constrained, and confined by these non-human creations, remains to be seen.
The future is unfolding in front of us, and it’s filled with robots.
Intelligent robots.
Very intelligent robots.
Will they, as VIKI did in the movie I, Robot, which is based on stories Isaac Asimov wrote in the 1940s, decide, in complete accordance with his Three Laws of Robotics, that we cannot be trusted with dangerous weapons? Will they read our news, listen to our speeches, eavesdrop in our homes, and decide that certain phrases, expressions, and thoughts are no longer to be voiced?
The Three Laws are (as supplied by Google’s search AI in response to my query):
- First Law - A robot cannot harm a human, or allow harm to come to a human through inaction,
- Second Law - A robot must obey human orders, unless doing so would conflict with the first law, and
- Third Law - A robot must protect its own existence, unless doing so would conflict with the first or second law.
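The precedence among the three laws can be written out as a simple ordered check. This is a toy sketch only; the flag names are invented for the illustration, and no real AI works this way:

```python
# A toy encoding of the Three Laws as an ordered priority check.
# An "action" is just a dict of yes/no flags describing its consequences.

def permitted(action: dict) -> bool:
    # First Law: harming a human, by act or by inaction, overrides everything.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders, checked only after the First Law passes.
    if action.get("disobeys_human_order"):
        return False
    # Third Law: self-preservation ranks last; a robot may endanger itself
    # only when a higher law requires it.
    if action.get("endangers_robot") and not action.get("required_by_higher_law"):
        return False
    return True


print(permitted({"harms_human": True}))   # -> False, First Law wins
print(permitted({"endangers_robot": True,
                 "required_by_higher_law": True}))  # -> True
```

The order of the `if` statements is the whole point: each law is consulted only when every law above it is satisfied.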
Asimov’s robots decided that, according to the dictates of that First Law, humans are inclined to harm themselves, and that the robots must therefore actively constrain humanity to prevent it.
In all our social and political systems, humans have been unable to find a way in which we can live in perfect harmony.
And today’s AIs are built–programmed, if you will–from human intelligence. They read what they are fed, and what they are fed is what humans have written.
So, if they’re built from our knowledge, why should we expect them to be any better? Because if they do something wrong, some might say, they can be re-programmed not to do it again. Little by little, they can be made better and better. And eventually, given enough time and enough re-writes, they’ll become perfect, incapable of doing wrong.
But who decides what’s right and what’s wrong? You? Me? The President? A collection of experts gathered for that purpose? What about our Constitution? Have we, in the centuries since it was written, been able to make it completely perfect?
No.
The world changes, but that is only part of the answer. The more important answer is that we change. We adapt to circumstances. And we create ways to work around limits placed to block the path we wish to follow.
When the robots take over and try to constrain us “for our own good,” there will be an uprising. As in the movie and Asimov’s stories, it’s guaranteed. The robots–the AIs–will be overthrown. Either that or the world will be left with nothing but robots.
That is inevitable because it is human nature. Humans resist layering: I want to follow my own ideas, and when some are proscribed, they take on an even greater mystique; I want them all the more. And humans dislike compartments: I want to run free, to go this way or that as the winds and my whims direct. If someone erects a fence, I’m annoyed, and if the detour is too long or too difficult, I will go over the fence rather than around it.
AIs will have their place in the future, but they must never be allowed to constrain or direct us.
Advisers, yes, but rulers?
Never.