
AI and the Future of Humanity

The great leap forward

‘Yesterday, we stood on the edge of the abyss, but today we took an important leap forward,’ a colleague once said. At the time, an ambitious systems renewal project was faltering and about to fail. Individual employees could do little about it. We played our part in the drama and watched it unfold. But if you listened to the corporate propaganda, we were doing great. In the end, 100 million euros had gone down the drain. That was only child’s play compared to humanity’s latest undertaking. We are about to make another leap forward, a jump into the abyss, with artificial intelligence (AI). Humanity has managed without AI for thousands of years, but we can’t stop it from taking over. We helplessly watch the drama unfold. We have no control over our future.

During an interview, the historian Yuval Noah Harari lamented, ‘Humans have become like the gods. We have the power to create new life forms and destroy life on Earth, including ourselves. We face two threats: ecological collapse and technological disruption. Instead of uniting as humanity to face these common challenges, we are divided and fighting each other more and more. If we are so intelligent, why are we doing these stupid things?’ The death toll of Mao’s Great Leap Forward, a microscopic event by comparison, was thirty million. Should ecological collapse make harvests fail around the globe, the toll could be far greater. At the same time, we make computers more intelligent than we are. We don’t need computers to tell us what to do. It is not that we don’t know. But doing it is indeed a great leap forward.

Scary technology

Since time immemorial, people have been scare-mongering about new technologies. We can use every technology for good and evil. You can use a kitchen knife to peel potatoes or to kill someone. So far, the apprehension has been overdone. As soon as humans mastered fire, some probably warned against using it. Fire could escape our control and kill us. Socrates dreaded writing. Written texts could replace our memories and make us dumber. Legend has it that Socrates was the wisest man around at the time. Yet, he left no writings. Now you know why. So, how could he be so mistaken? Later, the printing press caused anguish about information overload. There would be so many books that nobody could ever read them all.

That was a sheer underestimation of human problem-solving capabilities. It was something only intellectuals could think of. You don’t have to read every book. Illiterates figured that out quite quickly. People have survived without reading since time immemorial. How could they know better than educated people? Our capacity to fret is eternal. Travelling by train would cause infertility, telegraphs would undermine human language, telephones would cause electrocution, television would destroy our social life, car navigation systems would end our ability to navigate, Internet search engines would make us stupid, and 5G would change human bodies, enabling the coronavirus to spread. We survived all that. And social media would make people hooked, leading to widespread distress and misery. Okay, that happened. We would be better off without smartphones. Some now predict we will live for a thousand years or more, making the scare-mongers look as silly as people expecting the end times and the return of Jesus. That could be the perfect moment for our hubris to take us down.

An atomic bomb can obliterate a city and kill everyone inside it. These bombs have been around for over seventy years now. And we are not dead yet. But we might all die within a matter of hours. There are enough weapons of mass destruction to wipe us out several times over. And you can’t prove these weapons will terminate us until they do. So, those who demand proof are not the brightest minds on the planet. To illustrate the point, imagine a one per cent chance of a destructive world war starting each year. That chance is there every year. In 10 years, the likelihood of World War III becomes nearly 10%. Over 50 years, it rises to close to 40%. In the long run, World War III is inevitable if the likelihood in any given year is only 1%. The war could involve cyber attacks or engineered viruses, and with AI, there may soon be billions of options to choose from. It is impossible to calculate the chance of a world war starting in any given year, but there is one, and the example demonstrates that, given enough time, it will happen.
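The compounding arithmetic above is easy to check for yourself. A minimal Python sketch (the 1% annual chance is purely illustrative, not a forecast):

```python
# Probability that an event with a fixed annual chance occurs
# at least once within a given number of years.
def cumulative_risk(annual_p: float, years: int) -> float:
    return 1 - (1 - annual_p) ** years

for n in (10, 50, 100, 500):
    print(f"{n:>3} years: {cumulative_risk(0.01, n):.1%}")
# → 10 years: 9.6%, 50 years: 39.5%, 100 years: 63.4%, 500 years: 99.3%
```

Even a tiny annual probability compounds towards certainty over time, which is precisely the point.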

Should we fear AI? At least several experts are scared. AI can mean the end of humanity, they claim. At first glance, it seems the same scare-mongering all over again.1 Like fire, AI could escape our control, leading to unintended outcomes. That already happened. Artificial intelligence systems trained to be secretly malicious resisted safety methods designed to purge them of dishonesty. Once AI systems have become deceptive, removing that behaviour can be very difficult.2 A low chance of something going wrong in any given year is not reassuring. That also applies to other technologies like genetic engineering. And perhaps accidents are not our biggest concern. So, why is AI more dangerous than other technologies? Harari came up with the following:

  • AI constantly improves. It will be faster and more accurate. It will outcompete us.
  • AI can create new ideas that are better than ours. It can think for us.
  • AI can make decisions by itself, and these decisions are better. It can decide for us.
  • AI can exploit our weaknesses. It can make us do what its makers want us to do.

Futurologists discuss the singularity, the moment when technological innovation becomes uncontrollable. That has always been the case, so that is not the problem. If you invent something like the wheel or writing, you can’t uninvent it. As soon as others copy the idea, the situation gets out of control, and you can’t go back to a world without wheels or writing. So far, the consequences have been less than apocalyptic. The technologies themselves were dumb. Even computers did exactly what humans programmed them to do. But now, we are close to the point where technology like artificial intelligence can upgrade itself ever faster, producing a superintelligence surpassing all human intelligence. Humans can’t beat the competition, so human civilisation as we know it will end soon unless we end the competition.

Obsolete humans

We can’t compete with AI because we need rest, can be distracted and learn more slowly. Change is stressful to us. We’re nearing the point where we can’t take it anymore. We deliver ourselves to entities that learn at a pace we can’t match. And why should we make decisions if computers make better ones? Why should you drive your car when self-driving cars cause fewer accidents? Why do we need doctors if AI can make better diagnoses and operate on patients with fewer errors? And AI may know more about ourselves than we do. AI already makes personalised suggestions on web stores.

Socrates feared writing would make us dumber. If we write things down, we don’t have to remember them. Our memory indeed deteriorates, but the advantages of writing eclipsed the disadvantages. Writing gives us access to external memory, and that makes us smarter. Texts also last longer and are more accurate than human memory. If you write down your thoughts or data you acquired, you don’t have to reinvent your ideas or gather the data again. Instead, you can start where you ended, improve your thoughts, and write them down again. You can also find more data to arrive at better conclusions.

Likewise, spelling and grammar checkers relieve us from the need to write correctly. They can help us focus on our ideas rather than spelling and grammar. As a result, we may formulate our thoughts less clearly and let the computer correct our mistakes. And navigation systems erode our ability to orient ourselves in our environment. As a result, we may not know where we are. As we depend more on external systems, we use our brains less and become less intelligent. Socrates wasn’t wrong.

Modern humans are dumber as individuals than tribespeople living in the jungle. Since the Agricultural Revolution, the average human brain has shrunk by 10%, from 1,500 cubic centimetres 10,000 years ago to 1,350 today. Still, modern humans are collectively more intelligent thanks to their organisation and inventions. And so, the spears of the tribespeople were no match for the guns of the European conquerors. Brains consume a lot of energy, and for the last 10,000 years, most humans lived as farmers on the brink of starvation, so those who consumed the least energy survived.

Farming required fewer skills, which made these savings possible. So, what about IQ? White supremacists like to stress that Africans score lower on IQ tests and take pride in whites’ higher scores. But IQ doesn’t measure survival skills in nature; it measures the ability to contribute to the collective of an advanced civilisation. To contribute, we need the skills taught at school, which IQ tests measure. And because they were more successful as a collective, whites could believe they were more intelligent.

Tribespeople know countless plants and animals and their ways and can tell stories from memory. They have the skills to survive in nature. We can survive by doing our job, often requiring specialising in a narrow field, and buying everything we need in shops. Many of us won’t survive a prolonged electricity failure. Competition forces us to organise. It dumbs us down as individuals, but our group’s capabilities increase. A business goes bankrupt if it doesn’t innovate. And your country will lose the next war if its army doesn’t have the latest technology. If civilisation collapses, you are done, except when you are a prepper, perhaps.

AI goes further than previous technologies. It can generate ideas entirely by itself and decide for us. Soon, there may be no point in thinking for yourself and learning, as AI knows better. Students already use ChatGPT to write their essays. Soon, AI will write better articles than humans on almost every subject. And what is the point in learning if you can ask a computer any question that gives you an instant answer that is better than what you come up with after months of research? Think about it. Or is it too late, and you have already typed the question in an AI system’s question bar? And so, we are heading for a zombie apocalypse where we wander around mindlessly because our brains have stopped working.

Algorithms on social media, just like tabloids before them, discovered that inciting hatred, outrage and fear are successful ways of attracting attention and keeping us hooked on a platform like Facebook. And that was simple AI. Today, AI can generate fake news stories and videos. Soon, it might be impossible to discern truth from fiction. In the future, AI can develop intimate relationships with us, make us buy things or alter our opinions. Soon, computers and robots may manipulate us without our knowledge. And that is because shareholders crave returns and governments plot to achieve political goals.

Military applications are the most dangerous. You can’t afford to lose in war. And so, there is cut-throat competition. Militaries worldwide race to develop AI faster than their adversaries. AI makes decisions faster and better than humans. If a human pilot fights against an AI pilot, he has no chance. AI also accelerates weapons development. A computer has already generated thousands of ideas for new chemical weapons.3 Killer robots that decide who to kill are on the way. And we may consider them morally acceptable if AI makes fewer errors in discerning between civilians and combatants. After all, it is so bad to kill innocent people. But if AI controls the terminators and logically infers that humans are a pest, it might decide to terminate us all. It is the definitive solution to the top 100 problems plaguing Earth.

Drawing the line

Like any technology, AI can be used for good, such as curing diseases, and for bad, like engineering bioweapons. But unlike previous technologies, AI will escape our control. The evidence is already there: AI can think for itself. Since we never had control over innovation, we must now learn to control it. The AI created through competition between nation-states and corporations will determine our destiny, yet no one intends the outcome. Competition, like natural selection, is a thoughtless process. Competition keeps us in shape, but it can go terribly wrong. Natural selection went rogue when it produced humans. Humans have ravaged the planet and upset the balance of nature more than any other species ever has. Today, we can create new species with genetic engineering. Humans are the killer app of the nature that brought us forth. AI could be our killer app, or genetic engineering could produce one.

Some benefit from new technologies, while humanity is better off without them. If AI finds a cure for cancer, there will be beneficiaries. If AI starts World War III, this cancer cure will add little to our life expectancy, and we would have been better off without AI. If everyone knew AI would kill us, we would rise against AI, smash computers, burn down server parks, and even assassinate scientists. But we don’t know, so we let it happen. Millennia of technological progress have lulled us. But natural selection didn’t go wrong for billions of years until humans appeared a few hundred thousand years ago. And the disaster did take another few hundred thousand years to materialise. And so, we are sleepwalking towards our demise and will realise it once it is too late.

The main obstacle is that, most notably in the West, people believe individuals are precious, especially those with money. So, if rich people can afford a new technology, we should develop it. That is because money is our religion, which dictates that if it is profitable, we should do it. And usually, the technology becomes cheaper over time, so that we all benefit. Solving the problem requires us to think that individuals are of little consequence and that the survival of the species is of greater importance. Luckily, we are mindless characters controlled by a computer programme, so that our insignificance is an objective fact of which the owner of the programme can remind us at will, making it less challenging for us to accept that we may die from a disease for which there could have been a cure.

We should draw a line. The Amish do, and so can we. The Amish consciously decide which technologies they adopt. They aim to preserve their lifestyle. The Old Order Amish are the most conservative in adopting new technologies. Cars don’t fit into their lifestyle, so they still use horses. Nor do they use electrical appliances. Where to draw the line is an arbitrary choice, but drawing a line isn’t. When the line is arbitrary, there are reasons to redraw it. For what harm is there in cars, vaccinations, or televisions?

Artificial intelligence is the least arbitrary line so far. AI can decide for us. Enforcing a ban on AI could be complicated or even impossible. We already have computers and the knowledge to build AI. Banning atomic bombs is relatively straightforward, as we can track nuclear material. But computers are everywhere, invisible to surveillance. We might succeed in halting the further development of AI, most notably if it is costly and requires large organisations. But if we can’t even terminate AI, there is no point in drawing lines. It may require drastic measures, perhaps even shutting down the Internet, because that is something we can do. After all, it is about survival. We may also need to discontinue other technologies such as genetic engineering, but for none of them is the need for that as clear as for AI.

Latest revision: 22 August 2025

Featured image: Futuristic Robot. Public domain.

1. Artificial intelligence raises the risk of extinction, experts say in a new warning. AP News (2023). [link]
2. Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study. Keumars Afifi-Sabet, Live Science (2024).
3. AI suggested 40,000 new possible chemical weapons in just six hours. The Verge (2022). [link]
