WHAT YOU NEED TO KNOW ABOUT AGI

Those of you who follow my blog know that I am a Christian who has received the Holy Spirit as my counsellor, teacher, helper and comforter. I allow Him to guide my steps each day. Why write a post on AGI? God has given me a talent for business and technology, and He expects me to keep up and use it for good.

Biblical prophecy reveals we are in the end times prior to Jesus’ return to restore righteousness and initiate His 1,000-year reign, fulfilling the covenants God made with Abraham, Isaac and Jacob when He established the nation of Israel for His purposes. If you want to know more about what is next on God’s agenda for planet Earth, go to http://www.millennialkingdom.net. We will certainly be using AI in Jesus’ Millennial Kingdom.

Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks. Beyond AGI, artificial superintelligence (ASI) would outperform the best human abilities across every domain by a wide margin. Unlike artificial narrow intelligence (ANI), whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming.

Creating AGI is a stated goal of AI technology companies such as OpenAI, Google, xAI, and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries. Contention exists over whether AGI represents an existential risk: some AI experts and industry figures have stated that mitigating the risk of human extinction posed by AGI should be a global priority, while others consider AGI development still too remote to present such a risk.

AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action. Some academic sources reserve the term “strong AI” for computer programs that will experience sentience or consciousness. In contrast, weak AI (or narrow AI) can solve one specific problem but lacks general cognitive abilities. Some academic sources use “weak AI” to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.

Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.

A framework for classifying AGI was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI (comparable to unskilled humans). Regarding the autonomy of AGI and associated risks, they define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous).
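To make the framework concrete, here is a minimal sketch in Python of the performance axis. The 50% and 100% thresholds are the ones described above; the 90% and 99% cut-offs for “expert” and “virtuoso”, and all names in the code, are my own illustrative assumptions rather than DeepMind’s code.

    # A minimal sketch (not DeepMind's code) of the performance axis of
    # the levels-of-AGI framework. Thresholds are the fraction of skilled
    # adults the system outperforms on a wide range of non-physical tasks;
    # the "expert"/"virtuoso" cut-offs below are illustrative assumptions.

    PERFORMANCE_LEVELS = [
        ("emerging",   0.00),  # comparable to or better than an unskilled human
        ("competent",  0.50),  # outperforms at least 50% of skilled adults
        ("expert",     0.90),  # illustrative cut-off
        ("virtuoso",   0.99),  # illustrative cut-off
        ("superhuman", 1.00),  # outperforms 100% of humans
    ]

    AUTONOMY_LEVELS = ["tool", "consultant", "collaborator", "expert", "agent"]

    def classify_performance(fraction_outperformed: float) -> str:
        """Map the fraction of skilled adults outperformed to a level name."""
        level = "below emerging"
        for name, threshold in PERFORMANCE_LEVELS:
            if fraction_outperformed >= threshold:
                level = name
        return level

    print(classify_performance(0.55))  # -> competent
    print(classify_performance(1.00))  # -> superhuman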

Researchers generally hold that a system is required to do all of the following to be regarded as an AGI:

  • reason, use strategy, solve puzzles, and make judgments under uncertainty;
  • represent knowledge, including common-sense knowledge;
  • plan;
  • learn;
  • communicate in natural language; and
  • integrate these skills in pursuit of any given goal.

Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy.

Computer-based systems exhibiting these capabilities are now widespread, with modern large language models demonstrating computational creativity, automated reasoning, and decision support simultaneously across domains. Earlier systems such as evolutionary computation, intelligent agents, and robots demonstrated these capabilities in isolation, but the convergence of multiple cognitive abilities within single architectures from GPT-3.5 onwards marked a qualitative shift in the field.

Physical traits

Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:

  • the ability to sense (e.g. see, hear, etc.); and
  • the ability to act (e.g. move and manipulate objects, change location to explore, etc.).

This includes the ability to detect and respond to hazards.

Tests for human-level AGI

Several tests meant to confirm human-level AGI have been considered, including the following.

The Turing Test (Turing)

The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behaviour and may incentivize artificial stupidity.

Proposed by Alan Turing in his 1950 paper “Computing Machinery and Intelligence”, this test involves a human judge engaging in natural language conversations with both a human and a machine designed to generate human-like responses. The machine passes the test if it can convince the judge that it is human a significant fraction of the time. Turing proposed this as a practical measure of machine intelligence, focusing on the ability to produce human-like responses rather than on the internal workings of the machine. Turing described the test as follows: “The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be experts about machines, must be taken in by the pretence.”
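Because “a significant fraction of the time” is a statistical criterion, the outcome of such a test is just a binomial proportion. Here is a minimal sketch, with helper names of my own invention, that scores a run of judge verdicts and attaches a 95% Wilson confidence interval; the verdict data are illustrative placeholders echoing the figures reported below, not results from any study.

    # A minimal sketch of scoring a Turing-test run: given each judge's
    # verdict on whether the machine was human, estimate the "judged
    # human" rate with a 95% Wilson score confidence interval. The
    # helper names are my own, not part of any standard protocol.

    import math

    def wilson_interval(successes: int, trials: int, z: float = 1.96):
        """95% Wilson score interval for a binomial proportion."""
        if trials == 0:
            raise ValueError("no trials")
        p = successes / trials
        denom = 1 + z**2 / trials
        centre = (p + z**2 / (2 * trials)) / denom
        half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
        return centre - half, centre + half

    def judged_human_rate(verdicts: list[bool]):
        """Fraction of conversations in which the machine was judged human."""
        n, k = len(verdicts), sum(verdicts)
        return k / n, wilson_interval(k, n)

    # Illustrative placeholder data: 54 of 100 judges fooled.
    verdicts = [True] * 54 + [False] * 46
    rate, (low, high) = judged_human_rate(verdicts)
    print(f"judged human {rate:.0%}, 95% CI [{low:.0%}, {high:.0%}]")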

In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy, reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this claim was met with significant scepticism from the AI research community, which questioned the test’s implementation and its relevance to AGI. In 2023, Kirk-Giannini and Goldstein argued that while large language models were approaching the threshold of passing the Turing test, “imitation” is not synonymous with “intelligence”. This distinction has been challenged on scientific grounds: neuroscience has established that biological intelligence arises from electrochemical signalling between neurons, a purely physical process with no known non-physical component. Both biological neural networks and artificial neural networks are physical systems processing information according to physical laws; to claim that one substrate produces “real” intelligence while the other produces “mere imitation” despite equivalent observable behaviour requires positing a non-physical property unique to biological matter, a position in tension with modern science and akin to substance dualism. A 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized, controlled version of the Turing Test, surpassing older chatbots like ELIZA while still falling behind actual humans (67%). A 2025 pre-registered, three-party Turing-test study by Cameron R. Jones and Benjamin K. Bergen showed that GPT-4.5 was judged to be the human in 73% of five-minute text conversations, surpassing the 67% humanness rate of real confederates and meeting the researchers’ criterion for having passed the test.

The Robot College Student Test (Goertzel)

A machine enrols in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes.

The Employment Test (Nilsson)

A machine performs an economically important job at least as well as humans in the same job. This test is now arguably passed across multiple domains. In knowledge work, frontier large language models are deployed as autonomous agentic systems handling software engineering, legal research, financial analysis, customer service, and marketing tasks.

The Ikea Test (Marcus)

Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack product, then controls a robot to assemble the furniture correctly. As early as 2013, MIT’s IkeaBot demonstrated fully autonomous multi-robot assembly of an IKEA Lack table in ten minutes, with no human intervention and no pre-programmed assembly instructions; the robots inferred the assembly sequence from the geometry of the parts alone. In December 2025, MIT researchers demonstrated a “speech-to-reality” system combining large language models with vision-language models and robotic assembly: a user says “I want a simple stool” and a robotic arm constructs the furniture from modular components within five minutes, using generative AI to reason about geometry, function, and assembly sequence from natural language alone. The FurnitureBench benchmark, published in the International Journal of Robotics Research in 2025, now provides a standardised real-world furniture assembly benchmark with over 200 hours of demonstration data for training and evaluating autonomous assembly systems.
The Coffee Test (Wozniak)

A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons. This test has been substantially approached by multiple systems. In January 2024, Figure AI’s Figure 01 humanoid learned to operate a Keurig coffee machine autonomously after watching video demonstrations, using end-to-end neural networks to translate visual input into motor actions. In 2025, researchers at the University of Edinburgh published the ELLMER framework in Nature Machine Intelligence, demonstrating a robotic arm that interprets verbal instructions, analyses its surroundings, and autonomously makes coffee in dynamic kitchen environments, adapting to unforeseen obstacles in real time rather than following pre-programmed sequences. China-based Stardust Intelligence demonstrated its Astribot S1 using Physical Intelligence’s model to make coffee from the high-level command “make coffee”, with the system identifying objects such as mugs and coffee makers even when misplaced or in unexpected locations. Physical Intelligence subsequently reported that its π*0.6 model could make espresso continuously for an entire day, with failure rates dropping by more than half compared to earlier versions. The strict form of the test (entering a completely unfamiliar home and navigating it from scratch) has not been formally demonstrated end-to-end, though the combination of LLM-driven reasoning, visual object recognition in novel environments, and autonomous manipulation brings current systems close to meeting the original specification.

The Modern Turing Test (Suleyman)

An AI model is given US$100,000 and has to obtain US$1 million. This test was arguably surpassed in October 2024 by Truth Terminal, a semi-autonomous AI agent built on Meta’s Llama 3.1 (with earlier iterations based on Claude 3 Opus). Created by AI researcher Andy Ayrey, Truth Terminal originated from an experiment called “Infinite Backrooms” in which two Claude Opus instances were allowed to converse freely, during which they spontaneously generated a satirical meme religion dubbed the “Goatse Gospel”. After venture capitalist Marc Andreessen donated US$50,000 in Bitcoin to the agent, Truth Terminal’s promotion of the Goatseus Maximus (GOAT) memecoin on the Solana blockchain drove the token to over US$1 billion in market capitalisation within days of its launch, far exceeding Suleyman’s US$1 million threshold. Truth Terminal’s own crypto wallet accumulated approximately US$37.5 million, making it the first AI agent to become a millionaire through its own market activity. The test’s spirit (demonstrating that an AI can generate substantial economic value from a modest starting position) was met, though with caveats: Ayrey reviewed posts before publication and assisted with wallet mechanics, making the agent semi-autonomous rather than fully independent.

The General Video-Game Learning Test (Goertzel, Bach et al.)

An AI must demonstrate the ability to learn and succeed at a wide range of video games, including new games unknown to the AGI developers before the competition. The importance of this threshold was echoed by Scott Aaronson during his time at OpenAI.
In December 2025, Google DeepMind released SIMA 2 (Scalable Instructable Multiworld Agent), a Gemini-powered generalist agent that operates across multiple commercial 3D games, including No Man’s Sky, Valheim, and Goat Simulator 3, using only rendered pixels and a virtual keyboard and mouse, with no access to game source code or internal APIs. Where the original SIMA achieved a 31% success rate on complex tasks compared to humans at 71%, SIMA 2 roughly doubled that rate and demonstrated robust generalisation to previously unseen game environments, including self-improvement through autonomous play without human feedback. Separately, frontier LLMs with computer-use capabilities can interact with arbitrary software through screen observation and mouse/keyboard control, theoretically enabling gameplay of any title, though current implementations remain too slow for real-time performance in fast-paced games. The test has not been formally passed in its strictest sense (a single agent mastering any arbitrary unseen game at human level), but the gap is narrowing rapidly.
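To make the setup concrete, here is a minimal sketch, in the spirit of pixels-and-keyboard agents like SIMA, of the interface such a test assumes; every class and name below is my own illustration, not DeepMind’s API.

    # A minimal sketch of the interface a general video-game learning
    # test assumes: the agent receives only rendered pixels plus a text
    # instruction and returns keyboard/mouse actions. All names here are
    # illustrative, not DeepMind's API.

    import random
    from dataclasses import dataclass

    @dataclass
    class Action:
        key: str | None = None   # e.g. "w" to move forward
        mouse_dx: int = 0        # horizontal mouse movement
        mouse_dy: int = 0        # vertical mouse movement

    class PixelAgent:
        """Dummy agent: acts on raw frames with no game-specific knowledge."""
        def act(self, frame: list[list[int]], instruction: str) -> Action:
            # A real agent would run a vision-language policy here; this
            # placeholder presses a random key.
            return Action(key=random.choice(["w", "a", "s", "d", None]))

    def run_episode(agent: PixelAgent, instruction: str, steps: int = 10):
        frame = [[0] * 64 for _ in range(64)]  # stand-in for a rendered frame
        for _ in range(steps):
            action = agent.act(frame, instruction)
            # a real harness would send the action to the game and grab
            # the next frame; here the frame never changes
            print(action)

    run_episode(PixelAgent(), "collect wood")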

AI-complete problems

A problem is informally called “AI-complete” or “AI-hard” if it is believed that AGI would be needed to solve it, because the solution is beyond the capabilities of a purpose-specific algorithm.

Many problems have been conjectured to require general intelligence to solve. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Even a specific task like translation requires a machine to read and write in both languages, follow the author’s argument (reason), understand the context (knowledge), and faithfully reproduce the author’s original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance. However, many of these tasks can now be performed by modern large language models. According to Stanford University’s 2024 AI Index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning.
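The AI Index claim boils down to a per-benchmark comparison against a human baseline. The sketch below shows that comparison in miniature; every benchmark name and score in it is a placeholder I made up for illustration, not a reported result.

    # An illustrative sketch of the "human-level on a benchmark" claim:
    # compare a model's score to a human baseline per benchmark. All
    # benchmark names and numbers are placeholders, not actual results.

    HUMAN_BASELINE = {"reading_comprehension": 0.917, "visual_reasoning": 0.803}
    MODEL_SCORE    = {"reading_comprehension": 0.925, "visual_reasoning": 0.811}

    def at_human_level(model: dict, human: dict) -> dict:
        """Flag each benchmark where the model meets or beats the human baseline."""
        return {task: model[task] >= human[task] for task in human}

    print(at_human_level(MODEL_SCORE, HUMAN_BASELINE))
    # -> {'reading_comprehension': True, 'visual_reasoning': True}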

In September 2025, a review of surveys of scientists and industry experts from the last 15 years reported that most agreed that artificial general intelligence (AGI) will occur before the year 2100. A more recent analysis by AIMultiple reported that “current surveys of AI researchers are predicting AGI around 2040”. OpenAI CEO Sam Altman said in December 2025 that “we built AGIs” and that “AGI kinda went whooshing by” with less societal impact than expected, proposing the field move on to defining superintelligence.

Brain simulation

Whole brain emulation is a type of brain simulation that is discussed in computational neuroscience and neuroinformatics, and for medical research purposes. It has been discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil, in the book The Singularity Is Near, predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it. A fundamental criticism of the simulated brain approach derives from embodied cognition theory, which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning. If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel proposes virtual embodiment (as in metaverses like Second Life) as an option, but it is unknown whether this would be sufficient.

The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in broad outline. The overhead introduced by full modelling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil’s estimate. In addition, the estimates do not account for glial cells, which are known to play a role in cognitive processes.
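To see why full biochemical modelling blows up the numbers, here is a back-of-envelope sketch in Python; the neuron, synapse, and firing-rate figures are commonly cited round numbers (order-of-magnitude assumptions, not measurements), and the detail-overhead factor is purely an illustrative stand-in for “several orders of magnitude”.

    # Back-of-envelope estimate using commonly cited round numbers:
    # ~1e11 neurons, ~1e3 synapses per neuron, ~1e2 synaptic events
    # per second. All values are order-of-magnitude assumptions.

    neurons = 1e11              # human brain, order of magnitude
    synapses_per_neuron = 1e3   # low end of common estimates
    events_per_second = 1e2     # average firing rate, order of magnitude

    # Kurzweil-style functional estimate: one operation per synaptic event.
    functional_ops = neurons * synapses_per_neuron * events_per_second
    print(f"functional estimate: ~{functional_ops:.0e} ops/s")  # ~1e+16

    # Modelling molecular-scale electrochemistry multiplies the cost per
    # event. The factor below is purely illustrative, not a measured value.
    detail_overhead = 1e6
    detailed_ops = functional_ops * detail_overhead
    print(f"detailed-model estimate: ~{detailed_ops:.0e} ops/s")  # ~1e+22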

“Strong AI” as defined in philosophy

In 1980, philosopher John Searle coined the term “strong AI” as part of his Chinese room argument. He proposed a distinction between two hypotheses about artificial intelligence:

  • Strong AI hypothesis: An artificial intelligence system can have “a mind” and “consciousness”.
  • Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.

The first one he called “strong” because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a “weak AI” machine would be identical to a “strong AI” machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.

In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term “strong AI” to mean “human level artificial general intelligence”. This is not the same as Searle’s strong AI, unless it is assumed that consciousness is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers, the question is out of scope.

Mainstream AI is most interested in how a program behaves. According to Russell and Norvig, “as long as the program works, they don’t care if you call it real or a simulation.” If the program can behave as if it has a mind, then there is no need to know if it actually has a mind – indeed, there would be no way to tell. For AI research, Searle’s “weak AI hypothesis” is equivalent to the statement “artificial general intelligence is possible”. Thus, according to Russell and Norvig, “most AI researchers take the weak AI hypothesis for granted, and don’t care about the strong AI hypothesis.” Hence, for academic AI research, “Strong AI” and “AGI” are two different things.

Consciousness

Consciousness can have various meanings, and some aspects play significant roles in science fiction and the ethics of artificial intelligence:

  • Sentience (or “phenomenal consciousness”): The ability to “feel” perceptions or emotions subjectively, as opposed to the ability to reason about perceptions. Some philosophers, such as David Chalmers, use the term “consciousness” to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Determining why and how subjective experience arises is known as the hard problem of consciousness. Thomas Nagel explained in 1974 that it “feels like” something to be conscious. If we are not conscious, then it doesn’t feel like anything. Nagel uses the example of a bat: we can sensibly ask “what does it feel like to be a bat?” However, we are unlikely to ask “what does it feel like to be a toaster?” Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not. In 2022, a Google engineer claimed that the company’s AI chatbot, LaMDA, had achieved sentience, though this claim was widely disputed by other experts.
  • Self-awareness: To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one’s own thoughts. This is opposed to simply being the “subject of one’s thought”—an operating system or debugger can be “aware of itself” (that is, to represent itself in the same way it represents everything else)—but this is not what people typically mean when they use the term “self-awareness”. In some advanced AI models, systems construct internal representations of their own cognitive processes and feedback patterns—occasionally referring to themselves using second-person constructs such as ‘you’ within self-modelling frameworks.

These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals. Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights. Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.

Benefits of AGI

AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer. It could take care of the elderly, and democratize access to rapid, high-quality medical diagnostics. It could offer fun, inexpensive and personalized education. The need to work to subsist could become obsolete if the wealth produced is properly redistributed. This also raises the question of the place of humans in a radically automated society.

AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks. If an AGI’s primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true), it could take measures to drastically reduce the risks while minimizing the impact of these measures on our quality of life.

If you’re not using AI daily in your work, you’re falling behind exponentially. Not linearly. Exponentially.

“Let the wise listen and add to their learning, and let the discerning get guidance.”—Proverbs 1:5

Skills may change, but the posture remains the same: humility, growth and a willingness to learn.
