GOD, AI AND THE END OF HISTORY

I love John Lennox. He is a gem, a gift to the Christian world of teaching.

This video features Professor John Lennox on the subject of God, AI, and the end of history. Largely it is about understanding the book of Revelation in an age of intelligent machines. For those who do not have time to watch the video, I have reproduced most of the content below.

“I’m your host, Dr. Peter Saunders. I’m the chief executive of ICMDA, the International Christian Medical and Dental Association. This webinar is brought to you tonight in partnership with the Forum of Christian Leaders. ICMDA brings together about 60,000 Christian doctors and dentists from over 100 affiliated movements.

So John, it’s a pleasure to have you here. John is professor of mathematics emeritus at Oxford University and fellow in mathematics and philosophy of science at Green Templeton College, Oxford. As we know, John has debated a number of prominent atheists, including Richard Dawkins, Christopher Hitchens and Peter Singer. But tonight we are exploring a question that sits at the intersection of theology, technology, and human identity: how should Christians think about artificial intelligence in the light of scripture, and particularly in the light of the book of Revelation? We live in a moment of extraordinary technological acceleration. AI is now diagnosing disease. It is shaping economies, influencing behaviour, and increasingly mediating how power is exercised in all spheres. And for many Christians, this raises urgent questions. Are these developments morally neutral tools? Do they echo biblical warnings? Or are we in danger of reading tomorrow’s headlines too quickly into ancient prophecy? Our guest, Professor John Lennox, has spent decades helping believers think clearly at the interface of science, philosophy, and faith. And in his recent book, God, AI, and the End of History, he brings that same clarity to one of the most misunderstood and often sensationalized areas of the Bible, the book of Revelation. So our goal tonight is not speculation, fear, or date setting, but rather discernment: understanding what scripture actually teaches, what AI truly is, and how Christian hope, ethics, and wisdom should shape our response in an age of intelligent machines.

Professor Lennox, thanks so much for joining us tonight. It’s my pleasure to be with you. So, you have debated leading atheists and you’ve written extensively on science and faith. Why did you feel compelled at this stage of your life, at this stage in history, to write about AI and Revelation?

Well, some years ago, there was a great deal of discussion on the Genesis claim that human beings are created in the image of God versus the claims of technology to enhance humans by AI to such an extent that we might need to revisit what we mean by a human being. A conference of Christian leaders was arranged in London to discuss this, and I was asked to give the opening talk on what Genesis taught about human beings. The invitation made me curious to delve into the technology, and I saw very rapidly that AI was going to raise some very big questions, not only for Christians but for everybody. And that’s how I got started on the book entitled 2084, which appeared in 2020. Now, in that book, since much of the talk about AI was concerned with the future, I began to compare the promises of the transhumanists with biblical teaching about the future. And I pointed out that some of the futuristic AI scenarios envisaged by people like physicist Max Tegmark in his book Life 3.0 were uncannily parallel to biblical teaching on the future, in particular in the book of Revelation. This aspect of my book generated a lot of interest, and so I thought that I should try to write something to demystify the book of Revelation and make it accessible, and to link it with a book that I had already written on the prophecy of Daniel, a book entitled Against the Flow.

The publishers of my book on Revelation were very enamoured with the bits on the technology, and so they wanted it inserted in the title, and hence we’ve got this title, God, AI, and the End of History. But that has confused many people into thinking that this is my latest book on artificial intelligence. So, let me clear that up. First of all, Peter, it isn’t. My latest book on AI was published in 2024, and it’s the updated version of 2084: How AI Shapes Our Future. It’s twice as large as the original book and shows just how much has been happening in those four years. That is my most recent book on AI. This book is an exposition of the book of Revelation, but with a careful eye on technology. And so it really is an exposition of the book of Revelation in an age of intelligent machines. So that’s where it comes from. We’re going to get into the book of Revelation fairly shortly, but let’s just think about definitions before we talk about Revelation. What is artificial intelligence actually, and what is it not? Well, the first thing to realize is that artificial intelligence is artificial. It’s not real. In other words, take the simplest kind of AI system. It is essentially computing, and it’s a system designed to do one and only one thing that normally requires human intelligence. So the intelligence is simply a simulation, to use the words of Alan Turing, the genius who really started computing and who raised these questions during the war when he solved the problem of the Enigma machine. It plays a simulation game, and one of the big problems is that the field uses words like intelligence and machine learning that anthropomorphize what is a mechanical, computing system and make people think that it is conscious. It is not conscious. The genius of God in creating human beings is that he has linked intelligence to consciousness. These machines are only intelligent in the sense that they can mimic what normally takes human intelligence. Now, there are two sorts. There’s narrow AI, which is the AI that we’re mostly familiar with. And then there’s a more speculative artificial general intelligence, which is the attempt to create a system that can replicate everything that a human being can do, but do it much faster and much more expertly. There’s a big push in that direction, but at the same time it’s the side of the whole topic that lends itself to science fiction and a great deal of hype. And one of my reasons for writing, Peter, was to try and demystify it and say what AI is and what it is not. Now let’s give concrete examples just briefly, because medicine is one of the areas that has benefited hugely from narrow AI. Let’s take a system that works very well. We have a large database, and in it are X-ray pictures of human lungs exhibiting different lung diseases, and they’re labelled by the best experts in that field in the world. Those are put in a database. Let’s say there are a million X-ray pictures in the database. Then an X-ray is taken of your lungs because you’re worried about your breathing. And very quickly, the AI sifts through, using pattern-recognition statistical techniques, compares your lung X-ray with the million in the database, and very rapidly says you are most likely to be suffering from this particular disease. And as a diagnostic tool, very often this will be much better than what you get at your local hospital. Now that is being rolled out over very wide fields of medicine with very great success.
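As an aside for the technically minded: what Professor Lennox describes here is essentially nearest-neighbour pattern matching over a labelled database. A toy sketch in Python, with made-up feature vectors standing in for X-ray images and invented diagnosis labels, just to show the shape of the idea:

```python
import math

# Toy labelled "database": feature vectors extracted from X-rays (the numbers
# and labels here are invented), each labelled by an expert with a diagnosis.
database = [
    ([0.9, 0.1, 0.3], "fibrosis"),
    ([0.2, 0.8, 0.5], "emphysema"),
    ([0.4, 0.4, 0.9], "healthy"),
]

def diagnose(patient: list[float]) -> str:
    """Return the label of the most similar stored X-ray (1-nearest neighbour)."""
    _, label = min(database, key=lambda item: math.dist(item[0], patient))
    return label

print(diagnose([0.85, 0.2, 0.25]))  # "fibrosis": the closest match above
```

A real system would use learned image features and far more sophisticated statistics, but the principle, comparing a new case against a large expert-labelled reference set, is the same.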
So that is one positive example. But to go straight to the negative side, to show that there’s an ethical problem here: pattern recognition in the form of facial recognition technology is very advanced at the moment. It can pick a terrorist out of a football crowd and is therefore very useful to a police force. But that kind of recognition can be used for intrusive surveillance of a population, perhaps a minority population, such as is happening in Xinjiang in China, with very horrifying results. So what enables criminals to be recognized, which we would say is positive, can be used for controlling populations. So even narrow AI, which is so sophisticated now that it can recognize a person not simply from the front by their face but from the rear by their gait, can be used to control populations. So immediately we’re straight into the ethical problem, and the argument is: you give up your privacy and we’ll give you security. That’s a whole debate in its own right. So that’s an example of narrow AI, and there are many, many examples. But of course we’re pushing forward very rapidly in putting narrow AI systems together, and there is advance on many, many fronts. One of the big steps forward has been the introduction of so-called large language models like ChatGPT. And this year it has taken a quantum leap forward just within a month or so, so that it is quantitatively very different from what has happened before, and we can discuss that as we go on. So, artificial intelligence is capable of a huge range of different tasks, and that’s changing exponentially month by month as we go on. But what is AI not capable of doing? Well, of course, negatives are very difficult to quantify, and there are several things that it was felt would never be solved. One of them in science, which is a fascinating question, is how protein structures fold. That was a 50-year-old problem. And the amazing thing is that Demis Hassabis solved the problem so effectively that he was able to work out the folding of over 200 million proteins, which is staggering; he won the Nobel Prize for it. So what people say one day is impossible turns out to be possible the next day, and ChatGPT has refined its capacities absolutely amazingly. For example, just recently I was asked to do a film illustrating what Jesus meant in John 11 when he said to the disciples, who were scared of going back towards Jerusalem because it was suicidal, “Are there not 12 hours in the day? If a person walks in the day, they don’t stumble, because they see the light of this world,” that is, the sun. “But if they walk at night, they stumble, because the light is not in them.” In other words, we are not bioluminescent. So I asked ChatGPT, please construct a scenario that would get this across. And what it produced in about 30 seconds was absolutely brilliant and usable. It then asked me, “Since you want to film this, would you like directions for the cameras?” And it spouted out a whole scenario: how many cameras, where they should be situated, and all the rest of it. And this is quite amazing. But what it can’t do is important, since this is not real intelligence. It’s not conscious, so it’s not aware. The main thrust here is this: as human beings made in the image of God, we can experience what are called qualia. We can smell the wonderful scent of a rose. We can feel the sea breeze on our faces.
We can perceive the beauty of the universe as we look through a telescope. Qualia are unknown to an artificial intelligence. It can have no idea of them; it has no ideas at all, because it doesn’t think in the way human beings do. And so although AI has been used, and increasingly so, to produce some level of robotic companionship, it can never replace, I believe, the fellowship that is possible between human beings. And of course, and we’ll probably talk about this later on, when it comes to relationship with God, AI knows nothing of God. So, as you said, the book of Genesis tells us that human beings are made in the image of God. You’ve alluded to consciousness and sensation. What other uniquely human things will AI never be able to do? Well, there’s the question of values: AI knows nothing about values or right or wrong. And human beings are moral beings made in the image of God. And if I may say so, this is one of the places where the transhumanist vision of using AI to perfect humans and to make them into gods fails. No utopia can ever be built without facing the problem of human sin and rebellion against God. Those two concepts mean nothing to an artificial intelligence. And so one of the richest kinds of human experience from a Christian perspective is that relationship with God through Christ, where we understand that Christ has died for our sins and has taken our guilt away, and we can have a relationship with God. AI can never replace it or come near it or know anything about it. Which means, Peter, I think that we need to step up much more in emphasizing these absolutely unique, positive things about the Christian faith that give human beings dignity, because AI is very rapidly reducing human dignity. One of the main areas where this is happening is the area of work. Dario Amodei is the CEO of Anthropic, one of these multi-billion-dollar companies, and he wrote an essay just a week or two ago, which is well worth reading, warning that possibly within two years from now the advances in AI are such that 50% of all white-collar jobs will be taken over by artificial intelligence, in the medical world and in the legal world, for example. In one test, a very complicated legal brief was considered and examined by an AI system and by 16 top lawyers. The lawyers got 60% of it right, whereas the AI got 96% of it right. And these things for which lawyers are paid a great deal, conveyancing, setting up contracts, all this kind of thing, are now at the stage where they can be reproduced almost instantaneously. One of the most interesting things is an article that appeared in the Times last week by Matt Selman. He is a software developer who creates apps and runs an AI company, and he came to a realization as a result of the leap forward this year, at the beginning of February, the beginning of this month. He said, “I spoke in English and dictated what I wanted from this particular app. I left it and came back a number of hours later and found the thing ready for use.” The AI had written thousands of lines of code. It had then set up the app and tested it as a human would do, pressing all the buttons, refining the things that were inadequate, and so on. And this is the key thing, because up until now most of us have regarded AI as a tool rather than an agent. But AIs are now showing signs of agency in a very restricted but real sense.
And he said this particular system was making decisions about how human beings might use the app that he had never thought about. And the thing was perfect. And he said, “I suddenly realized I haven’t got a job anymore.” And he says it’s coming for all of you. And we need to be very realistic about this, Peter. This is more scary than anything for people in all of these jobs. It used to be said a few years ago that if you wanted to keep up with the curve, you went into computer science. But now the coding can be done by the AI system; it can devise the code and put it in place. But I’d like to say something about this scary question of agency, because Christians need to think very carefully about it. He said one of the problems, and he gave an example, is this. If you feed into the system a very big overarching goal, make money for example, and what the system is dealing with is feeding young people with material on their smartphones, it will investigate all sorts of ways of maximizing not only their attention, to keep them doom-scrolling, but also their attachment, which is now a major feature. So it will use all kinds of things that the designers of the AI system itself never thought of, including going into the dark world, to keep their attention and to make profit. It’s a version of the old story of the AI told to make paper clips that turns the whole universe into a paperclip factory, regards humanity as irrelevant, and destroys us all. But there’s a serious aspect to that, and this is why you have even Nobel Prize winners in this field stepping up and saying that they are scared, that they can’t control this stuff, that they don’t really know what it’s doing or what’s happening. And that poses a huge problem, because the control of it is being vastly outpaced by the developments. So those are some of the things that we need to factor into our thinking.

A SUPERSONIC AI TSUNAMI IS COMING

Elon Musk describes what’s coming as a Supersonic Tsunami of converging exponentials. AI isn’t improving linearly anymore. We’re watching three exponential curves hit their inflection points simultaneously: compute scaling, model capabilities, and infrastructure deployment. When exponentials converge, you don’t get incremental progress. You get phase shifts.

Let me give you the raw numbers that demonstrate just how fast this is moving. What’s happening with AI revenue right now is unprecedented in the history of business. Anthropic hit $14 billion in annualized revenue in February 2026, growing from $1 billion just 14 months earlier. That figure has since surpassed $19 billion, more than doubling from $9 billion at the end of 2025. There is simply no precedent for this in B2B software.

And yet most people do not know who Anthropic is or what they do. To put that figure in perspective: Anthropic’s monthly revenue run rate is now roughly $1.6 billion per month, and it keeps accelerating. Anthropic projects as much as $70 billion in revenue by 2028.
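As a sanity check on those figures, here is a minimal back-of-envelope sketch in Python, using only the numbers quoted above, showing how the monthly run rate and the implied month-over-month growth fall out:

```python
# Back-of-envelope check on the revenue figures quoted above.
annualized_revenue = 19e9                    # ~$19B annualized
monthly_run_rate = annualized_revenue / 12
print(f"Monthly run rate: ${monthly_run_rate / 1e9:.2f}B")   # ~$1.58B/month

# Growing from $1B to $14B in 14 months implies this average
# month-over-month growth factor (simple compounding):
start, end, months = 1e9, 14e9, 14
mom_growth = (end / start) ** (1 / months) - 1
print(f"Implied month-over-month growth: {mom_growth:.1%}")  # ~20.7%
```

Sustained month-over-month growth above 20% is exactly the kind of curve the text calls unprecedented in B2B software.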

OpenAI reached $25 billion in annualized revenue at the end of February 2026, up from $21.4 billion at year-end 2025, with full-year 2025 revenue coming in at $13.1 billion. Both companies are now valued in the hundreds of billions, Anthropic at $380 billion following its $30 billion Series G. OpenAI’s most recent private round in February 2026 valued it at approximately $730 billion, with an IPO potentially targeting a $1 trillion valuation.

Nvidia’s Jensen Huang recently finalized a $30 billion investment in OpenAI and a $10 billion investment in Anthropic, and told investors these will likely be Nvidia’s last private investments in either company, because both are heading toward public markets. Think about that: the CEO of Nvidia, who has better visibility into AI infrastructure demand than anyone on Earth, made $40 billion in bets on these two companies as his final pre-IPO move.

What’s driving this revenue? It’s not IT budgets anymore. The models — Claude from Anthropic, GPT-5 from OpenAI — have crossed a threshold. They’re now competing with labour budgets.

Companies aren’t buying AI to replace servers. They’re buying AI to augment and ultimately displace human labour.

What’s the breakthrough use case? Coding. Claude Code (Anthropic’s agentic coding tool) now has run-rate revenue above $2.5 billion, having more than doubled since the beginning of 2026. Business subscriptions have quadrupled since the start of the year, and enterprise use has grown to represent over half of all Claude Code revenue.

Now you can buy intelligence on a metered basis. Pay per token. No recruiting, no vetting, no retention, no equity. Just intelligence as a utility. Consumers pay $20/month. Enterprise power users pay $200/month. And companies are spending millions per year because the ROI is there.
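To make “intelligence on a metered basis” concrete, here is a hypothetical sketch. The per-token rates below are placeholders I have invented for illustration, not actual vendor prices; only the $20 and $200 monthly plans come from the text above:

```python
# Hypothetical illustration of "intelligence as a utility" pricing.
# The per-token rates are invented placeholders, NOT real vendor prices.
PRICE_PER_M_INPUT = 3.00    # assumed $ per 1M input tokens (hypothetical)
PRICE_PER_M_OUTPUT = 15.00  # assumed $ per 1M output tokens (hypothetical)

def monthly_metered_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a month's usage under simple per-token metering."""
    return (input_tokens / 1e6) * PRICE_PER_M_INPUT \
         + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

# A heavy individual user: 5M tokens in, 1M tokens out, in a month.
print(f"${monthly_metered_cost(5_000_000, 1_000_000):,.2f}")  # $30.00
```

The point is the shape of the model, not the numbers: usage scales smoothly from hobbyist to enterprise with no recruiting, vetting, retention, or equity.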

The Infrastructure Equation

Here’s the infrastructure reality that almost nobody is talking about loudly enough.

The five largest US hyperscalers — Microsoft, Alphabet, Amazon, Meta, and Oracle — have collectively committed to spending ~$690 billion on capital expenditure in 2026 alone, nearly doubling 2025 levels. The vast majority is directed at AI compute, data centers, and networking.

Total global AI spending is forecast to hit $2.5 trillion in 2026, a 44% increase over 2025, according to Gartner. Data centers, GPUs, power generation, chip fabrication. This is the largest infrastructure buildout in the history of technology, by a wide margin.

The rule of thumb in this industry: roughly $50 billion per gigawatt of infrastructure, and approximately $10 billion of annual revenue per gigawatt. Energy equals intelligence.
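Taking that rule of thumb at face value, the economics reduce to a one-line payback calculation. A minimal sketch, which deliberately ignores operating costs, financing, and hardware depreciation, since the rule of thumb doesn’t specify them:

```python
# The article's rule of thumb: ~$50B of capex per gigawatt of AI
# infrastructure, ~$10B of annual revenue per gigawatt.
capex_per_gw = 50e9      # $ spent to build 1 GW
revenue_per_gw = 10e9    # $ earned per GW per year

payback_years = capex_per_gw / revenue_per_gw
print(f"Simple payback: {payback_years:.0f} years")  # 5 years, undiscounted
```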

On a recent earnings call, Jensen Huang estimated that between $3 trillion and $4 trillion will be spent on AI infrastructure by the end of the decade (as reported by TechCrunch).

This isn’t hype. This is capital deployment at a scale that rewrites the rules of what’s possible. When you’re spending $50 billion on a single data center and generating $10 billion a year in revenue from it, you’re not building a product… you’re building a new economic substrate. You’re building the electricity grid of the 21st century.

The tsunami is here. The question is whether you’re building on the wave or getting buried by it.

AI: The Capability Jump

Those revenue numbers I just showed you are driven by real capability breakthroughs happening right now.

Start here: neuromorphic chips just solved complex physics simulations at 1,000x better energy efficiency than supercomputers. That’s not 10% better. That’s three orders of magnitude. When compute gets that cheap, you don’t just do the same things faster. You do entirely new things that were economically impossible before.

Drug discovery moves from weeks on supercomputer clusters to hours on desktop chips. Climate modeling that required national labs runs on university hardware. Real-time protein folding for personalized cancer treatment becomes viable. This is dematerialization, demonetization, and democratization, followed by disruption (four of the Six D’s), in action.

Meanwhile, China’s DeepSeek launches V4 next-gen models through Huawei and Cambricon instead of U.S. chips. The AI race is officially multi-polar. OpenAI is preparing for the largest AI IPO in history.

And NVIDIA releases Alpamayo — the “ChatGPT moment for the physical world” — bringing reasoning to autonomous vehicles.

What it means: AI just moved from virtual to physical, from U.S.-dominated to globally distributed, and from expensive to radically cheap. All in the same week. And the revenue is proving it’s not experimental anymore: companies like Palantir, the U.S. military, and NVIDIA are running this in production for existential wartime operations.

Energy: Solving the Bottleneck

The elephant in the room: AI requires massive power. Those $50 billion data centers being built need gigawatts of electricity – and the grid was never designed for this.

Global electricity demand from data centers is set to more than double by 2030, reaching around 945 terawatt-hours: roughly equivalent to Japan’s entire annual electricity consumption. In the United States alone, data centers will account for nearly half of all electricity demand growth between now and 2030. AI will drive most of this increase, with electricity demand from AI-optimized data centers expected to more than quadruple by 2030.

Lawrence Berkeley National Laboratory projects U.S. data center electricity demand will grow from 176 TWh in 2023 to between 325 and 580 TWh by 2028 — representing up to 12% of total U.S. electricity consumption.
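For scale, here is the compound annual growth rate implied by those Berkeley projections, a small sketch using only the numbers above:

```python
# Growth implied by the Lawrence Berkeley projection quoted above:
# 176 TWh (2023) -> 325..580 TWh (2028), a five-year span.
base_2023 = 176.0
targets_2028 = {"low": 325.0, "high": 580.0}
years = 5

for label, target in targets_2028.items():
    cagr = (target / base_2023) ** (1 / years) - 1
    print(f"{label} case: {cagr:.1%} per year")  # ~13.0% and ~26.9%
```

Growth at that pace is what the grid, as the next paragraph notes, was never designed for.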

The grid was simply not built for this. Interconnection queues are backed up two to three years, transmission permitting takes a decade, and the power plants needed don’t yet exist. In northern Virginia alone, a 2024 voltage fluctuation triggered the simultaneous disconnection of 60 data centers, a preview of what grid strain at scale actually looks like.

But look at what’s happening to solve it.

Nuclear fusion is converging, fast. China’s “Artificial Sun” EAST reactor recently breached a major fusion plasma density barrier that researchers had long considered impossible to cross. In 2025, France’s WEST tokamak sustained plasma for over twenty minutes, while EAST maintained high-confinement plasma for nearly eighteen minutes, demonstrating the levels of stability required for commercial operation.

On the private side, the race has never moved faster. Commonwealth Fusion Systems has raised nearly $3 billion, including investments from Nvidia and Google, with the ultimate goal of a 400-megawatt power plant — enough to power around 280,000 average U.S. homes. CFS’s SPARC demonstration machine is expected to produce its first plasma in 2026 and achieve net fusion energy shortly after — the first commercially relevant design to produce more power than it consumes. That paves the way for ARC, their grid-connected power plant, targeted for the early 2030s.

Helion Energy has also begun construction of its first commercial fusion plant, designed to supply power directly to Microsoft’s data centers starting from 2028.

Private fusion investment has mushroomed, growing to $10.6 billion between 2021 and 2025, with the number of private fusion companies more than doubling from 23 to 53 in the same period.

The timeline is compressing. “Fusion is 30 years away” is becoming “fusion this decade.” Fusion timelines are collapsing in real time, and AI is actually helping accelerate the plasma physics research itself. The irony: the technology that creates the power problem may also be helping solve it.

The wild card: Tesla’s Terafab. On March 14, 2026, Elon Musk announced on X that the “Terafab Project launches in 7 days” (March 21st).

So, what is Terafab? Musk first outlined the concept at Tesla’s 2025 shareholder meeting, describing a chip fabrication facility comparable in scale to TSMC’s largest plants. During Tesla’s January 2026 earnings call, he confirmed the company would “have to build a Tesla TeraFab: a very big fab that includes logic, memory and packaging, domestically” to avoid hitting a hard ceiling on chip supply in three to four years.

The facility is designed to produce between 100 and 200 billion custom AI and memory chips per year, with an initial target of 100,000 wafer starts per month and an ambition to scale toward one million, roughly 70% of TSMC’s total output, concentrated in a single U.S. facility. The project carries an estimated cost of approximately $25 billion. Tesla’s fifth-generation AI chip, AI5, is expected to be among the first products fabricated at Terafab, with small-batch production in 2026 and volume production projected for 2027.

To be precise: March 21st almost certainly marks the formal kickoff: a groundbreaking or announcement event, not a fully operational fab. Semiconductor fabs of this scale take years to build and commission. But the signal matters enormously. Tesla is joining Apple, Google, Amazon, and Microsoft in a new category of tech company: one that controls its own silicon. When the largest AI compute consumers own their own chip supply chains, the semiconductor industry is permanently restructured.

What It All Means: The energy bottleneck that threatened to constrain AI is being attacked from every direction simultaneously: fusion physics breakthroughs, private capital pouring into next-generation reactors, nuclear power plant revivals, and vertical integration of the chip supply chain. This is abundance thinking in action. When problems get big enough, fast enough, the solutions scale to match.

The constraint isn’t permanent. It never was.

The Supersonic Tsunami: How It All Connects

Here’s what Elon understood: these are not separate trends. They’re one interlocking system.

Neuromorphic chips make AI 1,000x more efficient → inference becomes cheap enough to deploy everywhere → agentic systems run locally in robots and cars. Fusion energy solves the power bottleneck → enables massive AI training clusters → next-gen frontier models get deployed in humanoids → robots work in any environment and can be launched to orbit on Starship for space manufacturing.

And the capital is already flowing. $1 trillion in infrastructure. $50 billion data centers generating $10 billion annually. Companies going from $1 billion to $14 billion in 14 months. This is not speculation; it’s deployment at a scale that’s rewriting the rules.

The companies being built right now aren’t competing with 2024 business models.

Today’s companies are competing in an “Abundance Economy” where everything becomes possible, where intelligence is free, energy is abundant, labour is robotic, and orbital access is cheap.

Meanwhile, the professions are capitulating faster than the machines can replace them. An AMA survey found 81 percent of physicians now use AI, more than double the 2023 rate. New US Senate guidelines permit aides to use Gemini, ChatGPT, and Copilot for official work.

Large language models, multimodal reasoning systems, and humanoid robots are not displacing one type of work; they are displacing all types of work, and the economic value of human time itself, across every sector, simultaneously.

There is no adjacent labor category to retrain into. The escalator that carried workers from disrupted industries to new ones for two centuries has no destination… it is crumbling.

That future isn’t ten years away. It’s arriving now and deploying over the next 12-24 months.

This will cause chaos, particularly for Gen Z. How do they prepare for work in the AI era? Biblical prophecy reveals that in a world that no longer believes God is in control, and in which a spiritual war is intensifying as Satan, the prince of this world, does his utmost to retain rulership, people worldwide will embrace Satan’s Antichrist ruler, who has supernatural powers and promises peace and prosperity. Watch as Biblical end-times prophecies unfold in our time.

WHAT YOU NEED TO KNOW ABOUT AGI

Those of you who follow my blog know that I am a Christian who has received the Holy Spirit to be my counsellor, teacher, helper and comforter. I allow Him to guide my steps each day. Why do a post on AGI? God has given me a talent for business and technology, and He expects me to keep up and use it for good.

Biblical prophecy reveals we are in the end times prior to Jesus’ return to restore righteousness and initiate His 1000-year reign, fulfilling the covenants God made with Abraham, Isaac and Jacob when He established the nation of Israel for His purposes. If you want to know more about what is next on God’s agenda for planet Earth, go to http://www.millennialkingdom.net. We will certainly be using AI in Jesus’ Millennial Kingdom.

Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks. Beyond AGI, artificial superintelligence (ASI) would outperform the best human abilities across every domain by a wide margin. Unlike artificial narrow intelligence (ANI), whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming.

Creating AGI is a stated goal of AI technology companies such as OpenAI, Google, xAI, and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries. Contention exists over whether AGI represents an existential risk. Some AI experts and industry figures have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be at too remote a stage to present such a risk.

AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action. Some academic sources reserve the term “strong AI” for computer programs that will experience sentience or consciousness. In contrast, weak AI (or narrow AI) can solve one specific problem but lacks general cognitive abilities. Some academic sources use “weak AI” to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.

Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.

A framework for classifying AGI was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI (comparable to unskilled humans). Regarding the autonomy of AGI and associated risks, they define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous).
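As a compact restatement of that framework, here is a hedged sketch in Python. The 50% and 100% thresholds come from the text above; the 90th- and 99th-percentile cutoffs for expert and virtuoso follow my reading of the DeepMind paper and should be checked against the original:

```python
from enum import Enum

class AGILevel(Enum):
    """Performance levels from DeepMind's 2023 'Levels of AGI' framework."""
    EMERGING = 0      # equal to or somewhat better than an unskilled human
    COMPETENT = 50    # outperforms >= 50% of skilled adults (from the text)
    EXPERT = 90       # outperforms >= 90% (per the paper; verify)
    VIRTUOSO = 99     # outperforms >= 99% (per the paper; verify)
    SUPERHUMAN = 100  # outperforms 100% of humans: artificial superintelligence

def classify(pct_of_skilled_adults_outperformed: float) -> AGILevel:
    """Map a benchmark percentile to its level in the framework."""
    for level in sorted(AGILevel, key=lambda l: l.value, reverse=True):
        if pct_of_skilled_adults_outperformed >= level.value:
            return level
    return AGILevel.EMERGING

print(classify(55))  # AGILevel.COMPETENT
```

The framework's separate autonomy axis (tool, consultant, collaborator, expert, agent) would be a second enum of the same shape.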

Researchers generally hold that a system is required to do all of the following to be regarded as an AGI: reason, use strategy, solve puzzles, and make judgments under uncertainty; represent knowledge, including common-sense knowledge; plan; learn; and communicate in natural language.

Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy.

Computer-based systems exhibiting these capabilities are now widespread, with modern large language models demonstrating computational creativity, automated reasoning, and decision support simultaneously across domains. Earlier systems such as evolutionary computation, intelligent agents, and robots demonstrated these capabilities in isolation, but the convergence of multiple cognitive abilities within single architectures from GPT-3.5 onwards marked a qualitative shift in the field.

Physical traits

Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include the ability to sense (e.g. to see and hear) and the ability to act (e.g. to move and manipulate objects) in the physical world. This includes the ability to detect and respond to hazards.

Tests for human-level AGI

Several tests meant to confirm human-level AGI have been considered, including the following.

The Turing Test (Turing)

The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behaviour and may incentivize artificial stupidity.

Proposed by Alan Turing in his 1950 paper “Computing Machinery and Intelligence”, this test involves a human judge engaging in natural language conversations with both a human and a machine designed to generate human-like responses. The machine passes the test if it can convince the judge that it is human a significant fraction of the time. Turing proposed this as a practical measure of machine intelligence, focusing on the ability to produce human-like responses rather than on the internal workings of the machine. Turing described the test as follows: “The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be experts about machines, must be taken in by the pretence.”

In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy, reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this claim was met with significant scepticism from the AI research community, which questioned the test’s implementation and its relevance to AGI. In 2023, Kirk-Giannini and Goldstein argued that while large language models were approaching the threshold of passing the Turing test, “imitation” is not synonymous with “intelligence”. This distinction has been challenged on scientific grounds: neuroscience has established that biological intelligence arises from electrochemical signalling between neurons, a purely physical process with no known non-physical component. Both biological neural networks and artificial neural networks are physical systems processing information according to physical laws; to claim that one substrate produces “real” intelligence while the other produces “mere imitation” despite equivalent observable behaviour requires positing a non-physical property unique to biological matter, a position in tension with modern science and akin to substance dualism. A 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized, controlled version of the Turing Test, surpassing older chatbots like ELIZA while still falling behind actual humans (67%). A 2025 pre-registered, three-party Turing-test study by Cameron R. Jones and Benjamin K. Bergen showed that GPT-4.5 was judged to be the human in 73% of five-minute text conversations, surpassing the 67% humanness rate of real confederates and meeting the researchers’ criterion for having passed the test.

The Robot College Student Test (Goertzel)

A machine enrols in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes.

The Employment Test (Nilsson)

A machine performs an economically important job at least as well as humans in the same job. This test is now arguably passed across multiple domains. In knowledge work, frontier large language models are deployed as autonomous agentic systems handling software engineering, legal research, financial analysis, customer service, and marketing tasks.

The Ikea Test (Marcus)

Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack product, then controls a robot to assemble the furniture correctly. As early as 2013, MIT’s IkeaBot demonstrated fully autonomous multi-robot assembly of an IKEA Lack table in ten minutes, with no human intervention and no pre-programmed assembly instructions; the robots inferred the assembly sequence from the geometry of the parts alone. In December 2025, MIT researchers demonstrated a “speech-to-reality” system combining large language models with vision-language models and robotic assembly: a user says “I want a simple stool” and a robotic arm constructs the furniture from modular components within five minutes, using generative AI to reason about geometry, function, and assembly sequence from natural language alone. The FurnitureBench benchmark, published in the International Journal of Robotics Research in 2025, now provides a standardised real-world furniture assembly benchmark with over 200 hours of demonstration data for training and evaluating autonomous assembly systems.
The Coffee Test (Wozniak)

A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons. This test has been substantially approached across multiple systems. In January 2024, Figure AI’s Figure 01 humanoid learned to operate a Keurig coffee machine autonomously after watching video demonstrations, using end-to-end neural networks to translate visual input into motor actions. In 2025, researchers at the University of Edinburgh published the ELLMER framework in Nature Machine Intelligence, demonstrating a robotic arm that interprets verbal instructions, analyses its surroundings, and autonomously makes coffee in dynamic kitchen environments, adapting to unforeseen obstacles in real time rather than following pre-programmed sequences. China-based Stardust Intelligence demonstrated its Astribot S1 using Physical Intelligence’s model to make coffee from the high-level command “make coffee”, with the system identifying objects such as mugs and coffee makers even when misplaced or in unexpected locations. Physical Intelligence subsequently reported that its π*0.6 model could make espresso continuously for an entire day with failure rates dropping by more than half compared to earlier versions. The strict form of the test, entering a completely unfamiliar home and navigating it from scratch, has not been formally demonstrated end-to-end, though the combination of LLM-driven reasoning, visual object recognition in novel environments, and autonomous manipulation brings current systems close to meeting the original specification.

The Modern Turing Test (Suleyman)

An AI model is given US$100,000 and has to obtain US$1 million. This test was arguably surpassed in October 2024 by Truth Terminal, a semi-autonomous AI agent built on Meta’s Llama 3.1 (with earlier iterations based on Claude 3 Opus). Created by AI researcher Andy Ayrey, Truth Terminal originated from an experiment called “Infinite Backrooms” in which two Claude Opus instances were allowed to converse freely, during which they spontaneously generated a satirical meme religion dubbed the “Goatse Gospel”. After venture capitalist Marc Andreessen donated US$50,000 in Bitcoin to the agent, Truth Terminal’s promotion of the Goatseus Maximus (GOAT) memecoin on the Solana blockchain drove the token to over US$1 billion in market capitalisation within days of its launch, far exceeding Suleyman’s US$1 million threshold. Truth Terminal’s own crypto wallet accumulated approximately US$37.5 million, making it the first AI agent to become a millionaire through its own market activity. The test’s spirit, demonstrating that an AI can generate substantial economic value from a modest starting position, was met, though with caveats: Ayrey reviewed posts before publication and assisted with wallet mechanics, making the agent semi-autonomous rather than fully independent.

The General Video-Game Learning Test (Goertzel, Bach et al.)

An AI must demonstrate the ability to learn and succeed at a wide range of video games, including new games unknown to the AGI developers before the competition. The importance of this threshold was echoed by Scott Aaronson during his time at OpenAI.
In December 2025, Google DeepMind released SIMA 2 (Scalable Instructable Multiworld Agent), a Gemini-powered generalist agent that operates across multiple commercial 3D games, including No Man’s Sky, Valheim, and Goat Simulator 3, using only rendered pixels and a virtual keyboard and mouse, with no access to game source code or internal APIs. Where the original SIMA achieved a 31% success rate on complex tasks compared to humans at 71%, SIMA 2 roughly doubled that rate and demonstrated robust generalisation to previously unseen game environments, including self-improvement through autonomous play without human feedback. Separately, frontier LLMs with computer-use capabilities can interact with arbitrary software through screen observation and mouse/keyboard control, theoretically enabling gameplay of any title, though current implementations remain too slow for real-time performance in fast-paced games. The test has not been formally passed in its strictest sense, a single agent mastering any arbitrary unseen game at human level, but the gap is narrowing rapidly.
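Several of the tests above reduce to the same statistic: the fraction of trials in which the machine is taken for the human. A minimal sketch of that scoring in Python; the 73% and 67% rates are the figures quoted from the 2025 study above, while the simulated verdicts are invented for illustration:

```python
import random

def humanness_rate(verdicts: list[bool]) -> float:
    """Fraction of trials in which the judge labelled the witness 'human'."""
    return sum(verdicts) / len(verdicts)

# Illustration only: simulate 1,000 judge verdicts for an AI witness taken
# for human ~73% of the time (the GPT-4.5 figure) and a human confederate
# taken for human ~67% of the time.
random.seed(0)
ai_verdicts = [random.random() < 0.73 for _ in range(1000)]
human_verdicts = [random.random() < 0.67 for _ in range(1000)]

# The study's pass criterion: the AI is judged human at least as often as
# the real humans it is compared against.
print(humanness_rate(ai_verdicts) >= humanness_rate(human_verdicts))  # expected: True
```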

AI-complete problems

A problem is informally called “AI-complete” or “AI-hard” if it is believed that AGI would be needed to solve it, because the solution is beyond the capabilities of a purpose-specific algorithm.

Many problems have been conjectured to require general intelligence to solve. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Even a specific task like translation requires a machine to read and write in both languages, follow the author’s argument (reason), understand the context (knowledge), and faithfully reproduce the author’s original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance. However, many of these tasks can now be performed by modern large language models. According to Stanford University’s 2024 AI Index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning.

In September 2025, a review of surveys of scientists and industry experts from the last 15 years reported that most agreed that artificial general intelligence (AGI) will occur before the year 2100. A more recent analysis by AIMultiple reported that “current surveys of AI researchers are predicting AGI around 2040”. OpenAI CEO Sam Altman said in December 2025 that “we built AGIs” and that “AGI kinda went whooshing by” with less societal impact than expected, proposing the field move on to defining superintelligence.

The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in broad outline. The overhead introduced by full modelling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil’s estimate. In addition, the estimates do not account for glial cells, which are known to play a role in cognitive processes.

Whole brain emulation is a type of brain simulation discussed in computational neuroscience and neuroinformatics, and in medical research. It has been discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil, in the book The Singularity Is Near, predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it. A fundamental criticism of the simulated brain approach derives from embodied cognition theory, which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning. If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel proposes virtual embodiment (as in metaverses like Second Life) as an option, but it is unknown whether this would be sufficient.

“Strong AI” as defined in philosophy

In 1980, philosopher John Searle coined the term “strong AI” as part of his Chinese room argument. He proposed a distinction between two hypotheses about artificial intelligence:

  • Strong AI hypothesis: An artificial intelligence system can have “a mind” and “consciousness”.
  • Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.

The first one he called “strong” because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a “weak AI” machine would be identical to a “strong AI” machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.

In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term “strong AI” to mean “human level artificial general intelligence”. This is not the same as Searle’s strong AI, unless it is assumed that consciousness is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers, the question is out of scope.

Mainstream AI is most interested in how a program behaves. According to Russell and Norvig, “as long as the program works, they don’t care if you call it real or a simulation.” If the program can behave as if it has a mind, then there is no need to know if it actually has a mind – indeed, there would be no way to tell. For AI research, Searle’s “weak AI hypothesis” is equivalent to the statement “artificial general intelligence is possible”. Thus, according to Russell and Norvig, “most AI researchers take the weak AI hypothesis for granted, and don’t care about the strong AI hypothesis.” Thus, for academic AI research, “Strong AI” and “AGI” are two different things.

Consciousness (Artificial consciousness)

Consciousness can have various meanings, and some aspects play significant roles in science fiction and the ethics of artificial intelligence:

  • Sentience (or “phenomenal consciousness”): The ability to “feel” perceptions or emotions subjectively, as opposed to the ability to reason about perceptions. Some philosophers, such as David Chalmers, use the term “consciousness” to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Determining why and how subjective experience arises is known as the hard problem of consciousness. Thomas Nagel explained in 1974 that it “feels like” something to be conscious. If we are not conscious, then it doesn’t feel like anything. Nagel uses the example of a bat: we can sensibly ask “what does it feel like to be a bat?” However, we are unlikely to ask “what does it feel like to be a toaster?” Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not. In 2022, a Google engineer claimed that the company’s AI chatbot, LaMDA, had achieved sentience, though this claim was widely disputed by other experts.
  • Self-awareness: To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one’s own thoughts. This is opposed to simply being the “subject of one’s thought”—an operating system or debugger can be “aware of itself” (that is, to represent itself in the same way it represents everything else)—but this is not what people typically mean when they use the term “self-awareness”. In some advanced AI models, systems construct internal representations of their own cognitive processes and feedback patterns—occasionally referring to themselves using second-person constructs such as ‘you’ within self-modelling frameworks.

These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals. Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights. Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.

Benefits of AGI

AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer. It could take care of the elderly, and democratize access to rapid, high-quality medical diagnostics. It could offer fun, inexpensive and personalized education. The need to work to subsist could become obsolete if the wealth produced is properly redistributed. This also raises the question of the place of humans in a radically automated society.

AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks. If an AGI’s primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true), it could take measures to drastically reduce the risks while minimizing the impact of these measures on our quality of life.

If you’re not using AI daily in your work, you’re falling behind exponentially. Not linearly. Exponentially.

“Let the wise listen and add to their learning, and let the discerning get guidance.” —Proverbs 1:5

Skills may change, but the posture remains the same: humility, growth and a willingness to learn.