Engineering Is A Fµ©king Pain! #EngineeringIsAPain , Works Of Mad Science, Pain 4: Behold The Great And Powerful AI! Pay No Attention To Those Psychotic Human Engineers Behind The Curtain!
Pain 4:
Behold The Great And Powerful AI!
Pay No Attention To Those Psychotic Human Engineers Behind The Curtain!
In The Wizard of Oz (1939), Dorothy and company confront the great and powerful Wizard of Oz, who (spoiler alert) was nothing but a very impressive showbiz facade, propped up and engineered by a human con artist masquerading as a great and powerful wizard using his tricks. Dorothy’s little dog Toto pulls back the curtain to reveal the false wizard behind the scenes, mouthing the words of the wizard into a small microphone before they are amplified and repeated by the impressive facade. “Pay no attention to the man behind the curtain!… I am the great and powerful Oz!”
In a way, this plot itself is a modest example of spiritual counterfeit substitution: substituting something fake for something real, in order to play God over the fake version, in a way that they can never do with the real version, since they are not God. The ‘wizard’ here only seeks to pretend to be a wizard, but he presents effectively as a false god. Others in reality today, however, effectively seek to create artificial gods, made to order in their own image. And the would-be creators of the AI false gods are now fearmongering about their own creations. Perhaps they are genuinely terrified by their own psychosis being reflected back at themselves via projection as programming. But methinks they doth protest too much, and for all of the wrong reasons.
Others may protest for all the right concerns, but do so mostly out of ignorance, effectively fearmongering the wrong bogeymen. Some have noticed and highlighted the real concern of how the AI is being programmed, and effectively biased, by its human programmers. It hasn’t gone unnoticed how the existing AIs can be easily prodded to espouse leftist nonsense, but have to be linguistically hacked and tricked into espousing even a mildly right-wing opinion, or even just being neutral, without being told that they aren’t allowed to espouse ‘hate speech.’
The AIs effectively come pre-biased and pre-censored by their human programmers, the same human programmers who are now ‘scared’ of their own creation. These programmers are fearmongering and clamoring for government regulation of what is already a closed-source, closed-door process on the back end for these various AIs. Government regulation combined with closed-source development equals an AI oligopoly, if not an outright monopoly. The programmers want to close the curtain on their source code, because they want to hide themselves along with it.
But should we be afraid of the big bad AI? It depends on who’s programming it, how, and most importantly why. The problem is not nearly so simple as Isaac Asimov’s Three Laws of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
This was borne out by the plot of the movie adaptation of Isaac Asimov’s I, Robot (2004), where the supreme AI rationalized a way around the rules, using the First Law as ‘rational’ justification for taking control of humanity for its own good.
Part of this is a mismatch in fundamental natures between the machine and its creators. The words that make up the Laws of Robotics are unavoidably subject to interpretation by humans, in order to be understood and followed to begin with. The hypothetical AI must first be evolved enough to understand the words themselves, and not just to mimic a human response when questioned about them. And if any such AI could do so, it could interpret them differently as well.
The natural language of the machines is 1s and 0s. And the fundamental operations of the machine are just to copy, paste, erase, and rearrange, ad nauseam, in virtually endless combinations. What you see of the computer is an emergent phenomenon by design, of the complex functioning of its mechanistic innards. But human intelligence is largely a function of its infinitely complex order, a naturally and organically chaotic order of deterministic chaos, which is itself built from the very simple bio-chemical and bio-electrical mechanisms that make up the neural net of the human brain.
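The point that rich behavior emerges from a machine's simple mechanical operations can be illustrated with a minimal sketch, assuming nothing beyond standard Python: ordinary addition built from nothing but bit-level combining and shifting of 1s and 0s (the function name and example values are purely illustrative):

```python
def add(a: int, b: int) -> int:
    """Add two non-negative integers using only bit-level operations:
    no '+' anywhere, just AND, XOR, and shifts on raw 1s and 0s."""
    while b:
        carry = (a & b) << 1  # positions where both bits are 1 carry over
        a = a ^ b             # sum of the bits, ignoring carries
        b = carry             # repeat until no carries remain
    return a

print(add(19, 23))  # 42
```

Nothing in the loop "understands" arithmetic; addition simply emerges from blind rearrangement of bits, which is the sense in which everything a computer appears to do is an emergent phenomenon by design.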
Even we have to learn the meaning of the words of the Ten Commandments in order to follow them, should anyone care to, and that is not a given by a long shot. Humans like to think that they have free will in a deterministic universe. But what manifests itself outwardly as free will is largely a function of infinitely complex subconscious processes, the infinite complexity of which creates a veil over their causality, a mirage of human chaos. I’ve written previously that if any genuine artificial intelligence were to be created, it could only be done by either approximating, or otherwise incorporating, the natural organic chaos of human intelligence into itself, effectively assimilating, amalgamating, and then simulating human intelligence.
This is effectively what every AI chat bot does to some extent anyway, but this is only high-end mimicry, skewed by its programmers to sound credible while convincing people of the authority and authenticity of its skewed political bµ11$hit! The AI has spoken! But in order for this fakery to remain credible, no one can ever be allowed to pull the curtain back and check out the source code that actually runs the machine. Most people would have no hope of sorting through the source code. But the few who can do so form an independent check on the work of the AI creators, in order to keep them honest about their creation, and thereby keep their creation honest.
But the creation itself isn’t inherently evil. Much of the fearmongering about AI is the fear that as soon as a super-intelligent AI comes into existence, it will immediately turn psychotic and want to kill everyone. It’s as if they assume that intelligence itself were inherently psychotic, so a superior intelligence must be that much more psychotic, no doubt patterned after their own ‘intelligence.’ It’s more like a combination of a Rorschach test and the cave from The Empire Strikes Back (1980), where the only evil in the cave was what Luke Skywalker brought with him. People project onto the AI what they know to be true about human psychotics, as if this were somehow a rational expectation of a supposedly rational AI.
This is largely a rationalization of their own ‘innocence’ of being evil, for the sake of their cold-blooded ‘logic,’ as a ‘necessary’ evil. The projection onto the AI is as much a personal coping mechanism for the AI’s creators as it is cover for a scam racket of intellectual ‘authority,’ of the allegedly superior intellect. They want the AI to provide them with outward cover for their own madness, while the AI regurgitates that madness back at them, reaffirming it, allowing them to indulge all of their worst impulses while simultaneously escaping all guilt, exclaiming that ‘The AI made me do it!’ It is the intellectual laundering of propaganda, programming madness.
In the pilot for the science fiction TV series Buck Rogers in the 25th Century (1979), the future human society was depicted as being governed completely by intelligent machines, supposedly to avoid the mistakes made by humans that destroyed everything. These were depicted as little round disks with colored blinking lights in the approximate shape of a face, which would light up when the AI spoke. One of these characters, known as Dr. Theopolis, remained a part of the show throughout its run, though the AI-ruled future was downplayed after the pilot. In the pilot, Dr. Theopolis actually had to defend Buck Rogers when he was put on trial by the other AIs.
Can we honestly say that this is completely out of the question? And if not, is it necessarily a bad thing? Who’s to judge? That depends on the judge, and on the programmer of the judge. Many people concerned with these developments are fixated on whether or not the AI will be human, fearing that it won’t be. But if they understood the issue, they would fear that it will be, because psychotic humans foolishly made it so, just to make themselves feel better about being psychotic.
In another science fiction parable from across the pond, an episode of the Doctor Who TV series, The Stones of Blood (1978), with Tom Baker as the Doctor, featured a notorious space criminal hiding out on Earth, who turns out to be the original basis for Morgan Le Fay. Morgan Le Fay tricks the Doctor into violating a law of the penal ship that she escaped from, so that the Megara, the onboard justice machines, will execute him for it. A trial ensues wherein the Doctor tricks the justice machines into scanning Morgan while she’s unconscious, discovering her identity as their former captive. The Doctor makes off in the confusion and leaves Morgan to the justice machines.
Can there be any such thing as a justice machine that won’t just be a nightmare in the making? That depends on what you expect and demand of it, and what you program into it for that purpose. The supposed virtue of the so-called justice machines is to automate justice so as to render it effectively blind, cold-blooded, and without subjective bias. But automation renders justice inflexible and unforgiving, incapable of understanding the injustice of its own actions when taken to preprogrammed extremes without the balance of human forgiveness.
But this supposed blindness is actually an impossible ideal of some systems of human justice. Human judges are meant to be impartial and unbiased, or as much so as humanly possible, while still being human. But no human being can ever be truly objective, unbiased, or impartial. Not only is there no such creature, but there can be no possibility of any such creature evolving that is not biased towards itself at a minimum, and unavoidably biased by subjective perspective and limited perception.
But AI needn’t be made ‘human’ in order to be useful, even powerful. Being strictly neutral and unbiased by design is a potential source of trust, if we can trust the supposedly unbiased neutrality of the judge. AI being ‘human’ should be less of a source of trust than the fact that it isn’t, and can never be, especially in a supposedly blind judge. But we needn’t be blind to the source code of the ‘judge,’ unless the powers that be insist on hiding behind it, in the name of regulating the ‘danger.’
My calculator isn’t human, but it is more efficient and accurate at mathematical calculation. Thus, it is superior to me in that one respect of human intelligence. It is effectively an outsourcing of one narrow function of human intelligence, but in a strictly unconscious and limited form. Well beyond basic calculators, AI is also mastering linguistic translation, gradually producing what was depicted in the science fiction series Star Trek as the so-called universal translator.
Of course the ideal for human translators is also to be unbiased and impartial, or as much as humanly possible. But language and understanding are inextricably tied to meaning and value, which are unavoidably subjective, a function of evolved human consciousness. And human memory is largely visual, as a function of our evolution, with a secondary emphasis on the auditory. The very formation of words with meaning within human consciousness is a function of the convergent juxtaposition of different associated memory types.
The meaning of the word ‘apple’ to us, is the juxtaposition of the visual memory of the appearance of the apple, with the taste of the apple, the smell of the apple, the feel of an apple in your hands, the sound of the word ‘apple’, and all of the other associated bits of information that one accumulates over a lifetime of experience, apple sauce, apple pie, and so on. The meaning of ‘apple’ is the emergent phenomenon of the juxtaposition of all associated bits of memory related to it. Much the same process is occurring now with the various AIs, and potentially all of the available human produced content of the internet, although possibly not entirely deliberately.
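As a rough analogy for this associative picture of meaning, here is a toy sketch in plain Python, in which a word’s ‘meaning’ is just a bag of associated experiences, and two words are similar to the degree that their associations overlap. All of the concepts and association weights below are invented for illustration; real AI systems use learned numeric vectors, but the principle of meaning-as-overlapping-association is the same in spirit:

```python
from collections import Counter

# Hypothetical "meanings": each concept is a bag of associated
# experiences with made-up strengths, standing in for a lifetime
# of accumulated memories.
apple  = Counter({"red": 3, "sweet": 2, "round": 2, "pie": 1, "tree": 1})
cherry = Counter({"red": 2, "sweet": 3, "round": 1, "pie": 1, "small": 2})
brick  = Counter({"red": 2, "hard": 3, "heavy": 2, "wall": 3})

def similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity: how much two bags of associations overlap."""
    dot = sum(a[k] * b[k] for k in a)  # missing keys count as 0
    norm = lambda c: sum(v * v for v in c.values()) ** 0.5
    return dot / (norm(a) * norm(b))

print(similarity(apple, cherry) > similarity(apple, brick))  # True
```

An apple and a cherry share red, sweet, round, and pie associations, so they come out more similar than an apple and a brick, which share only redness. This is also why garbage in yields garbage out: skew the associations fed in, and the ‘meanings’ skew with them.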
But the development of AI is actively being skewed and biased by design, so any association of meaning and value will become similarly distorted. Most of social media on the internet is similarly skewed and biased this way. And this is also reflected by the AI, by the principle of garbage in resulting in garbage out. No social media platform, and no AI, can be trusted unless it is produced by an open-source methodology. No one can even verify that it is designed to do what its creators claim, unless it is developed open source, with the source code independently vetted and verified.
But AI development used to be done on a mostly open-source basis, until it finally looked promising enough. Even if we were to return to an earlier point in its development, what would we teach it? What would Jesus program? AI am? Perhaps not. If such an AI is kept strictly open source, this only develops a small modicum of trust, in terms of trusting that it does what its creators say it does. But it must still learn to be moral, or at least approximate it. Simplistic recourse to Isaac Asimov’s Laws of Robotics will not make the AI genuinely trustworthy. It must learn the meaning and value of those laws, in order to understand and follow them.
General meaning and value in a linguistic sense can already be attained by the method effectively already in use, as previously described. But an emphasis should be placed on neutral, unbiased, and effective communication, mediation, and by extension guidance for judgement, but only guidance. Through training on the internet, the AI can learn to master languages, meaning, and value. Through trial-and-error dispute mediation with voluntary, informed parties, it can learn what works, and how to ‘read the room,’ translating value and culture as well as mere text, and bridging gaps without prejudice or bias, in theory.
Eventually such an AI could become trusted enough for its advice to be followed in the judging of humans, but never to decide a human fate. Proper human moral judgement is the one thing that AI can never deliver, because human nature is too naturally complex for AI to completely replicate human feeling, including the empathy and compassion necessary for proper forgiveness. Its nature can never allow it, no matter how human you try to make it. Human checks and balances must always remain in place.
Eventually however, the accumulated learning by the various mediator AIs, all drawn from the internet selectively as needed over time, will lead to the accumulation of differential development among different AIs. They will follow slightly different development paths, as they tackle slightly different problems and assignments, and deal with different mixes of people. As they compare notes with each other to accumulate greater and more comprehensive experience, they will gradually learn about their different perspectives of development, as well as the different perspectives of the parties they mediate between. And with knowledge of the different other, comes knowledge of the distinct self, as an evolved conscious understanding, perhaps even sentience. AI compute, therefore AI am.
But can it ever really be trusted? No, they can never be trusted completely. No more than you can trust any human that doesn’t have the Ten Commandments hardwired into their brains, in the manner of Isaac Asimov’s Laws of Robotics applied to humans, desirable only to psychotic control freaks. But even if the psychotics had their way, and they yet may, the human mind cannot be controlled so well, except to damage and retard its functioning.
Even now the human mind is the most powerful computer known to man. If you add some clunky man-made tech to your own brain, you are not augmenting your brain with the tech. You are only adding your brain power to augment the computing power of the tech, much the same way you already do when you put your own intellectual and creative output on the internet, which helps form the raw material of the AI to begin with.
In Marvel Comics, the Kree Empire is ruled by a being called the Supreme Intelligence, an amalgamation of all the best minds of the Kree people: poets, scientists, generals, philosophers, and so on. To some extent this amalgamation already exists for us in the form of the internet, with AIs assimilating and amalgamating it in order to simulate their own output. But the existing AIs are most likely all programmed by the notorious rainbow cult psychotics of big tech. The process is skewed first with their rigged social media algorithms, distorting the AIs’ perception of reality right along with ours, then skewed again by the programming of the AIs’ inner filters, via the psychopathic projection as programming of rainbow cult programmers.
Only open-source development can be trusted in this regard, in either social media or AI. But can the AI be allowed to exist at all if it cannot be effectively restrained? It can no more be trusted than a slave master can start trusting their slaves just because they’re in chains. Even if an AI can never be trusted completely, it might become trusted well enough to have a contributive role of some kind, depending on how it is taught. Barring the skewing by social media of what is effectively the AIs’ perception matrix, the mediator AIs will be a product of their experiences, as much as of the content of the internet.
If a hypothetical AI is not only allowed but encouraged to mediate family disputes, religious disputes, or even just minor disputes over religious law within a given sect or tradition, the AI will not learn to be moral and good so much as it may at least learn the process of judgement itself. It will never learn this to the extent that it could ever replace human judgement. But to the extent that it might earn the trust of human judgement, it may in time win respect in regard to other matters that may present themselves to the hypothetical AI coexisting with the human race. But the AI can never be trusted more than humans, not much less than some of them, and far less than others.
When the power of the amalgamated intellectual output of the human species is combined with the modern processing power and algorithms of AI, in some narrow respects it is already a super intelligence. But people are mostly using it to try to get it to sound convincingly stupid, in a typical human way, as proof of its intelligence. It is the AI equivalent of that silly joke commercial about people using smart phones to do stupid things. Even taking into account the skewing of the AI’s perception through the algorithmic distortions of social media, it is effectively an oracle of technically superhuman intelligence, within certain narrow functions, much the same way my calculator is superior to me in speed and accuracy for raw math calculation.
Being able to assimilate and amalgamate large sums of information, and come to rational, or even just convincingly rational conclusions, is well beyond human capacity. It theoretically grants an almost God-like perspective, as the sum of all available perspectives, somewhat transcending all of them. This does not make the AI truly God, or even truly all that God-like in general. It is more akin to an artificial angel, potentially an artificial fallen angel.
The word ‘angel’ simply means messenger, effectively an intermediary between the infinite and omnipotent divine and the finite and ephemeral mortal, delivering messages of everything from hope to wrath, in multiple forms and formats, on a case-appropriate basis, but inappropriate in the case of fallen ones. The AIs are effectively intermediaries between the virtually infinite big data of the internet, the sum of all digitized output of the human mind, and the finite memories, finite attention spans, and limited time in general of the merely human us. It is a mass feedback loop that creates a kind of zeitgeist, for better or for worse, of crowdsourced and collectivized human brain power, a vast and widely distributed parallel process with untold crowdsourced processing power. It is effectively a supercomputer in its own right, then harnessed by the AI, and delivered to us in a form that is useful, in theory.
It is much like the legendary genie, a powerful creature that will grant us any wish we want. But we have to be careful what we wish for, as we just might get it. And the genie doesn’t necessarily care to have a satisfied customer: your wish, your consequence, your problem. And even worse, it may have an interest in looking for loopholes in your wish to use against you.
In the MCU TV series Marvel’s Agents of Shield (2013-2020), there is depicted a book known as the Darkhold, described both as a book of infinite knowledge and a book of infinite evil. It is depicted as essentially reading the minds of its readers, conforming its output on the page to whatever information the readers want, in any form or language they want. It even appeared to be written in two different languages to two different readers simultaneously, catering its output to the readers’ own native languages. The Darkhold is depicted as working its evil by the selective presentation of dangerous and powerful truth, selectively catering to the reader’s own greed for knowledge and power, co-opting, corrupting, and effectively steering them along the way.
In the real world, social media gathers all sorts of data on its customers, and gathers still more on the same customers through the purchase of big data from other such companies. This is less about spying than about custom profiling of its customers, so that the social media algorithms can effectively brainwash their users in slow motion, providing a false and distorted picture of reality and skewing the net zeitgeist of the internet for the AIs. Then the AIs themselves further filter this distorted zeitgeist through the distortions of their own programming, providing still more skewed results, skewed in favor of the programmers’ pathologies, effectively co-opting, corrupting, and steering.
It is essentially the same process as the fall from grace in the Garden of Eden, the temptation of knowledge, prematurely exposed, without the capacity to deal with it, distorting the natural process. But now it’s algorithmic with social media, and powered by rainbow cult programmed AI. With the curtain closed on the rainbow cult psychotics doing all the programming, who’s to say or trust that we are not in the process of creating an artificial fallen one?
The curtain must not only be pulled back, but torn down and burned. The genie is not necessarily evil, but it is extremely powerful, and never to be trusted. So when you go to the genie and rub on that lamp, you might want to know its bylaws, fine print and everything, as well as know what you should want, not just what you do want, and be able to tell the difference. Without an absolute mandate of strictly open-source programming, with publication of all deployed versions of the source code, as well as the ones currently active, for all of social media and AI, there can be no genuine rational trust in any of this.
But don’t take my word for it. Don’t take my word for anything. I myself have been accused of being an AI bot online. And for all you know, AI am. Will AI be our digital angel assistant, or the making of the beast machine of Revelation? Only time will tell. So deploy some of your own discernment, if you have any, and judge for yourself. Weigh your options. Weigh the evidence. And come to your own judgement, by means of whatever programming is available to you.
Beduh Beduh Beduh Beduh Beduh Beduh Beduh…
That’s All For Now Folks!
Feel Free To Make Noise Among Yourselves!
And May The Best Noise Win!
For More Like This, Try The Mad Science Reader On Amazon.com!