Artificial intelligence (AI) will never equal yours. That’s because yours is not simply generated by biochemical wiring (neurons and synapses), but is powered by a non-material consciousness called the mind, spirit, and soul, which link to vast immaterial forces.
AI, no matter how sophisticated it seems, is mindless. It cannot truly create. It cannot develop authentic free will. It can only ape such attributes (and already does so very well).
That’s not to say AI poses no threat. In the wrong hands, it certainly does. It is a super-tool, and will become even more of one.
But as Professor Daniel O’Connor ably points out in his by turns fascinating book, The First and Last Deception, it will always rely on humans because, in essence, it is naught more than switches, resistors, diodes, and other circuitry.
It’s like a Lego construct or a series of pipes. Cut off the flow of water—in this case, electricity—and it’s game over.
AI does not have its hands on the ultimate switch.
The great threat is in what humans perceive it to be.
In addition to its amazing computational potential for worldly tasks, including military ones, there is the peril of spiritual deception by way of idolatry: treating an AI robot, for example, as a human, or even a “god.” Paganism revisited.
That is dangerous, and there are already glimmerings of it.
He quotes a Benedictine priest, Father Stanley Jaki, as warning us to picture a not-so-distant future “when man, as a rather inefficient computer, will take a poor second place to brilliant mechanical brains and will render homage not to God but to the Supreme Master Programmer, whoever or whatever that shall be.”
Note that a co-founder of Google is said to have the ambition (according to fellow tech guru Elon Musk) of creating a “digital god.”
Oh, really.
This is a book that should be read by everyone interested in the threat of two convergent “deceptions” forming a great one (AI and “UFOs”). We’ll stick for now with Professor O’Connor’s important and enlightening discussion of AI.
Another prominent Google engineer, Anthony Levandowski, already (in 2017) established the “First Church of Artificial Intelligence,” based on the “realization, acceptance and worship of a Godhead based on Artificial Intelligence.”
That alone proves how deleterious overimmersion in tech is for human thinking (not to mention its spiritual naivete).
This time, says Levandowski, we will be “able to talk to God, literally, and know that it’s listening.”
Write a new Bible?
Why not. ChatTestament.
The true danger, O’Connor so ably argues, is not what AI will be able to do but what we think it will do and our concomitant reverence for it—when actually, no matter the trillions of computations, its essential intelligence will not match that of a five-year-old, and it will never have true creativity (which is generated in the spirit).
Yet you have folks like Dr. Edmund Furse, a Catholic and AI expert, who promotes the deception and insists that one day AI robots will not only be baptized but become priests (with a relationship with God equal to our own).
Man as Creator.
In the end, the Vatican once warned, it’s not AI that would be the new “god,” but ourselves.
Notes O’Connor, “‘AI,’ whatever form it takes, is a process that runs on a computer of some sort. Let’s remember that any talk of ‘the cloud’ is so much marketing drivel. There is no AI system whose operations are mysterious like those of the human soul; any so-called cloud-computing just refers to one computer accessing another through the internet. Whatever AI is or ever will be, it is only a description of the operations of its physical components.”
Whenever electric current ceases, so does AI’s operation. It is a system of “logic gates,” each of which is a few wires connected to a transistor, resistor, and/or diode: “dead and dumb matter,” in O’Connor’s blunt assessment.
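To make that concrete, here is a minimal sketch (a plain Python illustration, not taken from the book) of how every digital operation, including the arithmetic inside an “AI” model, reduces to simple gates flipping between 0 and 1:

```python
# Illustration only: all digital computation, "AI" included, is built from
# logic gates. Here every gate is composed from a single NAND operation.

def nand(a: int, b: int) -> int:
    """NAND gate: one transistor arrangement, from which all others can be built."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    return and_(or_(a, b), nand(a, b))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Adds two bits -- the first rung of all computer arithmetic, returned as (sum, carry)."""
    return xor(a, b), and_(a, b)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, carry = half_adder(a, b)
            print(f"{a} + {b} -> sum={s}, carry={carry}")
```

Cut the electricity to the chip running that, and the “thinking” stops with it.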
Oh, but how intelligent it seems, and how amazingly brilliant it will seem in the future. No taking that away. The things ChatGPT can already do in a few seconds are astounding. That’s because electrical signals move at nearly the speed of light through billions of those tiny switches at once.
Does AI really reduce to nothing more than a glorified search engine and, as O’Connor argues, a “heavily animated plagiarism machine”?
Many are those who fret that the moment is coming when AI will decide to take over the world.
“That will never happen, and can never happen,” he writes.
Yes, a machine in the wrong hands can be devastating; it could even end the world. Also, as O’Connor acknowledges, evil spirits could make use of it, as they already play around with electricity.
But as far as AI becoming a true ruler: not so fast.
All it would take to stop it is turning off a switch.