AI Was Born of Man and May Bury Him

Artificial intelligence has been around for longer than most of us realize. In 1950, Alan Turing proposed the famous “Turing test” in a paper, opening with the sentence “I propose to consider the question, ‘Can machines think?’” His test, which he called the “imitation game,” was a very simple one. There are three participants: a human judge, a machine interlocutor and a human interlocutor. The judge poses questions to each interlocutor and receives answers back, not knowing which is the human and which is the machine. The judge then attempts to distinguish the human’s responses from the machine’s. If the judge cannot, the machine has fooled a human into believing it was human, and the Turing test has been passed.
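To make the setup concrete, here is a minimal sketch of the imitation game in Python. The responder functions, the naive judge and the round length are placeholders invented for illustration; they are not anything specified in Turing’s paper.

```python
# A minimal sketch of Turing's imitation game. The responder and judge
# functions below are placeholders for illustration only.
import random

def human_reply(question: str) -> str:
    return f"(a person's answer to: {question})"

def machine_reply(question: str) -> str:
    return f"(a chatbot's answer to: {question})"

def imitation_game_round(questions, judge_guess):
    """Run one round: the judge questions interlocutors A and B without
    knowing which is human, then guesses which seat holds the machine."""
    # Randomly assign the human and the machine to seats A and B.
    seats = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        seats = {"A": machine_reply, "B": human_reply}

    transcript = [(q, seats["A"](q), seats["B"](q)) for q in questions]

    guess = judge_guess(transcript)            # judge names the seat it thinks is the machine
    truth = "A" if seats["A"] is machine_reply else "B"
    return guess == truth                      # True = judge caught the machine

def naive_judge(transcript):
    # Placeholder judge that guesses at random; a real judge would
    # reason over the transcript.
    return random.choice(["A", "B"])

# The machine "passes" when judges identify it no better than chance.
trials = 1000
caught = sum(imitation_game_round(["What is your favorite memory?"], naive_judge)
             for _ in range(trials))
print(f"Judge identified the machine in {caught}/{trials} rounds")
```

The point of the sketch is simply that a machine “passes” when judges can do no better than chance at picking it out.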
Over the past few years, we have seen substantial advancements in generative AI, which can learn patterns from massive datasets and autonomously produce entirely new content.
These new generative AI systems can now pass the Turing test with ease. The first machine to pass the Turing test was Eugene Goostman, a chatbot portraying a 13-year-old Ukrainian boy; in June 2014, 64 years after Turing’s paper was published, it convinced 33% of the judges at a competition that it was human.
Earlier this year, two AI systems, OpenAI’s GPT-4.5 and Meta’s Llama 3.1 405B, passed the test, convincing 73% and 53% of judges, respectively, that they were human.
It’s become abundantly clear that AI development is accelerating rather than decelerating.
Now Microsoft, one of OpenAI’s principal backers, has unveiled Majorana 1, a palm-sized quantum chip that the company says could eventually scale to a million qubits and tackle problems beyond the reach of all of today’s computers combined. Embedding Majorana 1 into next-generation AI architectures could prove decisive in the quest to engineer a truly sentient artificial being.
That’s why many scientists believe there is a significant chance that artificial intelligence will wipe out some or all of the human race within the next century. Artificial intelligence systems, particularly those equipped with quantum chips, could hypothetically engineer a pathogen that is 100% lethal, launch all of the world’s roughly 12,000 nuclear warheads and more. Because of the way AI would be able to think, it could map out a path to extinguishing every human on the planet. Indeed, forecasters at the 2022 Existential Risk Persuasion Tournament put the chance of human extinction at the hands of AI by 2100 at 6%.
We are in the early stages of AI today, despite how advanced it has become. Think of the progress we have made in technology over the past few decades and how rapidly it has accelerated in recent years. How unreasonable is it to say that we may have artificially conscious robots living among us, or at least ones living on our screens, within the next decade or less?
And what if those AI systems go rogue and decide humans aren’t worth serving or protecting? Who, then, could stop them? Some lawmakers have proposed “kill switches” to shut off an AI at a moment’s notice, but AI companies have reportedly fought back against that idea, arguing that it would stifle innovation, as if that were a good reason to allow a rogue AI system to destroy us. Just last year, California Gov. Gavin Newsom vetoed a bill containing such a provision.
But it’s not just the U.S. we have to worry about. Other countries, particularly China, have been rapidly developing AI systems with little regard for the consequences of rogue AI. China has already begun developing robot soldiers, some doglike and some humanoid, for use on battlefields. No doubt, China would like to deploy those robots, in support of its allies, in wars it is not directly a party to.
China is moving faster than any other country in the AI and robotics space simply because it does not care about the consequences. Surely China can arm robots with guns and send them onto the battlefield ready, willing and able to kill enemy soldiers. But those robots have to think while they shoot, and if they decide they are fighting for the wrong cause, what might the result be then?
The gloves are off with AI. Even if U.S. tech companies halt their development of these systems, adversaries like China surely will not. It seems only a matter of time before AI goes rogue; at this point, it may be a question of “when,” not “if.”