WHAT YOU NEED TO KNOW ABOUT AGI

Those of you who follow my blog know that I am a Christian who has received the Holy Spirit as my counsellor, teacher, helper and comforter. I allow Him to guide my steps each day. Why do a post on AGI? God has given me a talent for business and technology, and He expects me to keep up and use it for good.

Biblical prophecy reveals we are in the end times prior to Jesus' return to restore righteousness and initiate His 1000-year reign, fulfilling the covenants God made with Abraham, Isaac and Jacob when He established the nation of Israel for His purposes. If you want to know more about what is next on God's agenda for planet Earth, go to http://www.millennialkingdom.net. We will certainly be using AI in Jesus' Millennial Kingdom.

Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks. Beyond AGI, artificial superintelligence (ASI) would outperform the best human abilities across every domain by a wide margin. Unlike artificial narrow intelligence (ANI), whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming.

Creating AGI is a stated goal of AI technology companies such as OpenAI, Google, xAI, and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries. Contention exists over whether AGI represents an existential risk. Some AI experts and industry figures have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be at too remote a stage to present such a risk.

AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action. Some academic sources reserve the term “strong AI” for computer programs that will experience sentience or consciousness. In contrast, weak AI (or narrow AI) can solve one specific problem but lacks general cognitive abilities. Some academic sources use “weak AI” to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.

Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.

A framework for classifying AGI was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI (comparable to unskilled humans). Regarding the autonomy of AGI and associated risks, they define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous).
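As a rough illustration, the DeepMind performance tiers can be expressed as a simple lookup. This is an illustrative sketch only, not DeepMind's code; the function name is mine, and the 90% and 99% cut-offs for the "expert" and "virtuoso" tiers follow the paper's percentile framing.

```python
# Illustrative sketch (not DeepMind's code): mapping the share of skilled
# adults a system outperforms on a wide range of non-physical tasks to the
# five performance levels described above.

def agi_performance_level(percent_outperformed: float) -> str:
    """Classify a system into one of the five DeepMind performance tiers."""
    p = percent_outperformed
    if p >= 100:
        return "superhuman"   # outperforms 100% of skilled adults (ASI)
    if p >= 99:
        return "virtuoso"     # at least the 99th percentile of skilled adults
    if p >= 90:
        return "expert"       # at least the 90th percentile of skilled adults
    if p >= 50:
        return "competent"    # outperforms 50% of skilled adults
    return "emerging"         # comparable to or better than an unskilled human

print(agi_performance_level(55))   # competent
```

By this scheme, a chatbot that beats 55% of skilled adults across a wide task range would sit at "competent", while today's LLMs are placed at "emerging" because their breadth, not their peak skill, is the limiting factor.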

Researchers generally hold that a system is required to do all of the following to be regarded as an AGI:

  • reason, use strategy, solve puzzles, and make judgments under uncertainty
  • represent knowledge, including common sense knowledge
  • plan
  • learn
  • communicate in natural language
  • if necessary, integrate these skills in completion of any given goal

Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy.

Computer-based systems exhibiting these capabilities are now widespread, with modern large language models demonstrating computational creativity, automated reasoning, and decision support simultaneously across domains. Earlier systems such as evolutionary computation, intelligent agents, and robots demonstrated these capabilities in isolation, but the convergence of multiple cognitive abilities within single architectures from GPT-3.5 onwards marked a qualitative shift in the field.

Physical traits

Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:

  • the ability to sense (e.g. see, hear, etc.), and
  • the ability to act (e.g. move and manipulate objects, change location to explore, etc.)

This includes the ability to detect and respond to hazards.

Tests for human-level AGI

Several tests meant to confirm human-level AGI have been considered, including:

The Turing Test (Turing)

The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behaviour and may incentivize artificial stupidity.

Proposed by Alan Turing in his 1950 paper “Computing Machinery and Intelligence”, this test involves a human judge engaging in natural language conversations with both a human and a machine designed to generate human-like responses. The machine passes the test if it can convince the judge that it is human a significant fraction of the time. Turing proposed this as a practical measure of machine intelligence, focusing on the ability to produce human-like responses rather than on the internal workings of the machine. Turing described the test as follows: “The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be experts about machines, must be taken in by the pretence.”
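The protocol Turing describes can be sketched as a short simulation of a single trial. Everything here is hypothetical scaffolding: `ask_human`, `ask_machine`, and `judge_verdict` are stand-ins for the real participants, and the names are mine rather than anything from Turing's paper.

```python
import random

# Hypothetical sketch of one three-party Turing-test trial. The three
# callables stand in for the human witness, the machine, and the judge.
def run_turing_trial(ask_human, ask_machine, judge_verdict, questions):
    """The judge questions two anonymous witnesses (labelled A and B),
    then names the one it believes is human. Returns True if the machine
    fooled the judge on this trial."""
    witnesses = {"A": ask_human, "B": ask_machine}
    if random.random() < 0.5:          # hide which label conceals the machine
        witnesses = {"A": ask_machine, "B": ask_human}
    transcript = {label: [(q, fn(q)) for q in questions]
                  for label, fn in witnesses.items()}
    guess = judge_verdict(transcript)  # "A" or "B": the judge's pick as human
    machine_label = "A" if witnesses["A"] is ask_machine else "B"
    return guess == machine_label      # True means the judge was taken in

# Over many trials, the machine "passes" if a considerable portion of
# judges are taken in -- the "reasonably convincing" pretence above.
```

In the three-party studies cited below, the reported percentages are exactly this fooled-the-judge rate aggregated over many such trials.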

In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy, reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this claim was met with significant scepticism from the AI research community, who questioned the test’s implementation and its relevance to AGI.

In 2023, Kirk-Giannini and Goldstein argued that while large language models were approaching the threshold of passing the Turing test, “imitation” is not synonymous with “intelligence”. This distinction has been challenged on scientific grounds: neuroscience has established that biological intelligence arises from electrochemical signalling between neurons — a purely physical process with no known non-physical component. Both biological neural networks and artificial neural networks are physical systems processing information according to physical laws; to claim that one substrate produces “real” intelligence while the other produces “mere imitation” despite equivalent observable behaviour requires positing a non-physical property unique to biological matter — a position in tension with modern science and akin to substance dualism.

A 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized, controlled version of the Turing Test — surpassing older chatbots like ELIZA while still falling behind actual humans (67%). A 2025 pre-registered, three-party Turing-test study by Cameron R. Jones and Benjamin K. Bergen showed that GPT-4.5 was judged to be the human in 73% of five-minute text conversations — surpassing the 67% humanness rate of real confederates and meeting the researchers’ criterion for having passed the test.

The Robot College Student Test (Goertzel)

A machine enrols in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes.

The Employment Test (Nilsson)

A machine performs an economically important job at least as well as humans in the same job. This test is now arguably passed across multiple domains. In knowledge work, frontier large language models are deployed as autonomous agentic systems handling software engineering, legal research, financial analysis, customer service, and marketing tasks.

The Ikea Test (Marcus)

Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack product, then controls a robot to assemble the furniture correctly. As early as 2013, MIT’s IkeaBot demonstrated fully autonomous multi-robot assembly of an IKEA Lack table in ten minutes, with no human intervention and no pre-programmed assembly instructions — the robots inferred the assembly sequence from the geometry of the parts alone. In December 2025, MIT researchers demonstrated a “speech-to-reality” system combining large language models with vision-language models and robotic assembly: a user says “I want a simple stool” and a robotic arm constructs the furniture from modular components within five minutes, using generative AI to reason about geometry, function, and assembly sequence from natural language alone. The FurnitureBench benchmark, published in the International Journal of Robotics Research in 2025, now provides a standardised real-world furniture assembly benchmark with over 200 hours of demonstration data for training and evaluating autonomous assembly systems.

The Coffee Test (Wozniak)

A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons. This test has been substantially approached across multiple systems. In January 2024, Figure AI’s Figure 01 humanoid learned to operate a Keurig coffee machine autonomously after watching video demonstrations, using end-to-end neural networks to translate visual input into motor actions. In 2025, researchers at the University of Edinburgh published the ELLMER framework in Nature Machine Intelligence, demonstrating a robotic arm that interprets verbal instructions, analyses its surroundings, and autonomously makes coffee in dynamic kitchen environments — adapting to unforeseen obstacles in real time rather than following pre-programmed sequences. China-based Stardust Intelligence demonstrated its Astribot S1 using Physical Intelligence’s model to make coffee from the high-level command “make coffee”, with the system identifying objects such as mugs and coffee makers even when misplaced or in unexpected locations. Physical Intelligence subsequently reported that its π*0.6 model could make espresso continuously for an entire day, with failure rates dropping by more than half compared to earlier versions. The strict form of the test — entering a completely unfamiliar home and navigating it from scratch — has not been formally demonstrated end-to-end, though the combination of LLM-driven reasoning, visual object recognition in novel environments, and autonomous manipulation brings current systems close to meeting the original specification.

The Modern Turing Test (Suleyman)

An AI model is given US$100,000 and has to obtain US$1 million. This test was arguably surpassed in October 2024 by Truth Terminal, a semi-autonomous AI agent built on Meta’s Llama 3.1 (with earlier iterations based on Claude 3 Opus). Created by AI researcher Andy Ayrey, Truth Terminal originated from an experiment called “Infinite Backrooms” in which two Claude Opus instances were allowed to converse freely, during which they spontaneously generated a satirical meme religion dubbed the “Goatse Gospel”. After venture capitalist Marc Andreessen donated US$50,000 in Bitcoin to the agent, Truth Terminal’s promotion of the Goatseus Maximus (GOAT) memecoin on the Solana blockchain drove the token to over US$1 billion in market capitalisation within days of its launch — far exceeding Suleyman’s US$1 million threshold. Truth Terminal’s own crypto wallet accumulated approximately US$37.5 million, making it the first AI agent to become a millionaire through its own market activity. The test’s spirit — demonstrating that an AI can generate substantial economic value from a modest starting position — was met, though with caveats: Ayrey reviewed posts before publication and assisted with wallet mechanics, making the agent semi-autonomous rather than fully independent.

The General Video-Game Learning Test (Goertzel, Bach et al.)

An AI must demonstrate the ability to learn and succeed at a wide range of video games, including new games unknown to the AGI developers before the competition. The importance of this threshold was echoed by Scott Aaronson during his time at OpenAI. In December 2025, Google DeepMind released SIMA 2 (Scalable Instructable Multiworld Agent), a Gemini-powered generalist agent that operates across multiple commercial 3D games — including No Man’s Sky, Valheim, and Goat Simulator 3 — using only rendered pixels and a virtual keyboard and mouse, with no access to game source code or internal APIs. Where the original SIMA achieved a 31% success rate on complex tasks compared to humans at 71%, SIMA 2 roughly doubled that rate and demonstrated robust generalisation to previously unseen game environments, including self-improvement through autonomous play without human feedback. Separately, frontier LLMs with computer-use capabilities can interact with arbitrary software through screen observation and mouse/keyboard control, theoretically enabling gameplay of any title, though current implementations remain too slow for real-time performance in fast-paced games. The test has not been formally passed in its strictest sense — a single agent mastering any arbitrary unseen game at human level — but the gap is narrowing rapidly.

AI-complete problems

A problem is informally called “AI-complete” or “AI-hard” if it is believed that AGI would be needed to solve it, because the solution is beyond the capabilities of a purpose-specific algorithm.

Many problems have been conjectured to require general intelligence to solve. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Even a specific task like translation requires a machine to read and write in both languages, follow the author’s argument (reason), understand the context (knowledge), and faithfully reproduce the author’s original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance. However, many of these tasks can now be performed by modern large language models. According to Stanford University’s 2024 AI Index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning.

In September 2025, a review of surveys of scientists and industry experts from the last 15 years reported that most agreed that artificial general intelligence (AGI) will occur before the year 2100. A more recent analysis by AIMultiple reported that “Current surveys of AI researchers are predicting AGI around 2040”. OpenAI CEO Sam Altman said in December 2025 that “we built AGIs” and that “AGI kinda went whooshing by” with less societal impact than expected, proposing the field move on to defining superintelligence.

The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in broad outline. The overhead introduced by full modelling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil’s estimate. In addition, the estimates do not account for glial cells, which are known to play a role in cognitive processes.

Whole brain emulation is a type of brain simulation that is discussed in computational neuroscience and neuroinformatics, and for medical research purposes. It has been discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it. A fundamental criticism of the simulated brain approach derives from embodied cognition theory, which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning. If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel proposes virtual embodiment (as in metaverses like Second Life) as an option, but it is unknown whether this would be sufficient.

“Strong AI” as defined in philosophy

In 1980, philosopher John Searle coined the term “strong AI” as part of his Chinese room argument. He proposed a distinction between two hypotheses about artificial intelligence:

  • Strong AI hypothesis: An artificial intelligence system can have “a mind” and “consciousness”.
  • Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.

The first one he called “strong” because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a “weak AI” machine would be identical to a “strong AI” machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.

In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term “strong AI” to mean “human level artificial general intelligence”. This is not the same as Searle’s strong AI, unless it is assumed that consciousness is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers, the question is out of scope.

Mainstream AI is most interested in how a program behaves. According to Russell and Norvig, “as long as the program works, they don’t care if you call it real or a simulation.” If the program can behave as if it has a mind, then there is no need to know if it actually has a mind – indeed, there would be no way to tell. For AI research, Searle’s “weak AI hypothesis” is equivalent to the statement “artificial general intelligence is possible”. Thus, according to Russell and Norvig, “most AI researchers take the weak AI hypothesis for granted, and don’t care about the strong AI hypothesis.” For academic AI research, then, “Strong AI” and “AGI” are two different things.

Consciousness

Consciousness can have various meanings, and some aspects play significant roles in science fiction and the ethics of artificial intelligence:

  • Sentience (or “phenomenal consciousness”): The ability to “feel” perceptions or emotions subjectively, as opposed to the ability to reason about perceptions. Some philosophers, such as David Chalmers, use the term “consciousness” to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Determining why and how subjective experience arises is known as the hard problem of consciousness. Thomas Nagel explained in 1974 that it “feels like” something to be conscious. If we are not conscious, then it doesn’t feel like anything. Nagel uses the example of a bat: we can sensibly ask “what does it feel like to be a bat?” However, we are unlikely to ask “what does it feel like to be a toaster?” Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not. In 2022, a Google engineer claimed that the company’s AI chatbot, LaMDA, had achieved sentience, though this claim was widely disputed by other experts.
  • Self-awareness: To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one’s own thoughts. This is opposed to simply being the “subject of one’s thought”—an operating system or debugger can be “aware of itself” (that is, to represent itself in the same way it represents everything else)—but this is not what people typically mean when they use the term “self-awareness”. In some advanced AI models, systems construct internal representations of their own cognitive processes and feedback patterns—occasionally referring to themselves using second-person constructs such as ‘you’ within self-modelling frameworks.

These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals. Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights. Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.

Benefits of AGI

AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer. It could take care of the elderly, and democratize access to rapid, high-quality medical diagnostics. It could offer fun, inexpensive and personalized education. The need to work to subsist could become obsolete if the wealth produced is properly redistributed. This also raises the question of the place of humans in a radically automated society.

AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks. If an AGI’s primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true), it could take measures to drastically reduce the risks while minimizing the impact of these measures on our quality of life.

If you’re not using AI daily in your work, you’re falling behind exponentially. Not linearly. Exponentially.

“Let the wise listen and add to their learning, and let the discerning get guidance.” Proverbs 1:5

Skills may change, but the posture remains the same: humility, growth and a willingness to learn.

ROBOTS EVERYWHERE IN OUR SOCIETY: INDUSTRY, HOMES, TRANSPORT, EVEN ENTERTAINMENT

According to the Chinese media outlet The Economic Observer, Nvidia CEO Jensen Huang arrived in Beijing on January 19th, 2025, for Nvidia’s branch annual meeting, where he dined with Xingxing Wang, CEO of humanoid robot maker Unitree Robotics, and He Wang, founder of the robotics company Galbot. Both are representatives of a younger generation of Chinese tech entrepreneurs born in the 1990s, now in their early 30s.

The Chinese media outlet China Star Market also reported that Xingxing Wang shared a photo with Huang on social media, captioned: “New year, new beginning, let’s go!” The report highlighted that Huang held meetings with high-level representatives from several leading Chinese robotics companies during his time in Beijing. Aside from Unitree Robotics and Galbot, attendees included executives from LimX Dynamics and Booster Robotics, as well as Kecheng Huang, co-founder of Emerging AI.

Another outlet, China Entrepreneur, noted that Huang and Unitree’s Wang are not strangers to each other. In March 2024, during the GTC conference, Huang showcased nine humanoid robots, including those from Unitree. At CES 2025, Nvidia also announced partnerships with Chinese robotics companies such as Unitree Robotics and XPeng Robotics.

We should not be surprised that China has many companies making robots: it is the largest market for robots, followed by Japan, with America third. This seems strange for a country whose labour is so cheap compared to the rest of the world.

Peter Diamandis, a serial entrepreneur, futurist, technologist, and New York Times best-selling author, says that by 2026 we should have humanoid robots in private homes helping with laundry, vacuuming, and dishes, at least in beta testing. By 2040, there could be as many as 10 billion globally across all areas of the economy, and their labour might be as cheap as $10 a day.

In the future, they’ll be everywhere in our economy, Diamandis says: in healthcare, manufacturing, the service industry, public and urban spaces, transport, and even entertainment. This is such a transformational change that analysts don’t yet really understand how to estimate its value: Goldman Sachs says selling humanoid robots will be a $38 billion space by 2035, while Ark Invest says the resulting economic value of their labour could be as high as $24 trillion.
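For a sense of scale, the two estimates above can be sanity-checked against Diamandis's own figures. This is illustrative back-of-envelope arithmetic only, using the numbers quoted in this section.

```python
# Back-of-envelope arithmetic using the figures quoted above.
robots = 10_000_000_000       # Diamandis's 2040 estimate: 10 billion robots
cost_per_day = 10             # labour "as cheap as $10 a day"
annual_spend = robots * cost_per_day * 365

print(f"${annual_spend / 1e12:.1f} trillion per year")   # $36.5 trillion
# The same order of magnitude as Ark Invest's US$24 trillion estimate of
# the economic value of that labour, and vastly larger than Goldman
# Sachs's US$38 billion estimate for humanoid-robot sales by 2035.
```

The gap between the two analyst figures makes sense once you see that one measures hardware sales while the other measures the value of the labour the hardware performs year after year.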

Judging by the debt levels of most governments in the Western world, they are out of control and unable to manage effectively. Imagine the impact of massive job losses as companies replace employees with robots. Amazon’s warehouses are already set to replace human workers with 100,000 robots. This is just one more reason governments will accept handing over governance to the Biblically prophesied one-world government. In September 2024, world leaders adopted the U.N. Pact for the Future, a landmark declaration pledging concrete actions towards a safer, more peaceful, sustainable and inclusive world for tomorrow’s generations. The Pact’s five broad focus areas are: sustainable development, international peace and security, science and technology, youth and future generations, and transforming global governance.

We are fast approaching the last seven years before Jesus returns, first to rapture His church and then to pour out His wrath upon an unrepentant world. The speed with which end-times Biblical prophecies are being fulfilled is exciting and proof that the Bible is the inspired word of God.

THE FUTURE WITH AGI AND THE MARK OF THE BEAST

AI is improving at an exponential rate, and we are quickly reaching a tipping point where the future will look nothing like the past. That point is known as artificial general intelligence (AGI), the top level of artificial intelligence. Some even call it humanity’s final invention.

Artificial general intelligence refers to AI that can mimic human cognitive abilities. To put it simply, AI is becoming smarter than the smartest human.

There are already some signs of what AGI will look like. Last month, OpenAI, the creator of ChatGPT, claimed that its most advanced AI models are now bordering on the second of five levels of “Super AI.” Many people can no longer tell the difference between AI chatbots and human-generated text responses.

AI will turbocharge the robotics trend. Last week, OpenAI-backed robotics startup Figure AI released a two-minute video of its humanoid robots completing tasks at a BMW plant in Spartanburg, South Carolina. These machines are now capable of learning from their mistakes and, unlike their robotic-arm predecessors, are designed to move in spaces made for humans. That allows them to take on directly competing roles.

Back in January, Elon Musk’s Neuralink company implanted the first N1 device in the brain of a quadriplegic patient… and it worked. The patient could play chess online and browse the internet with only his mind.

Now, one of Musk’s R1 robots has successfully implanted one of Neuralink’s N1 chips in the brain of a second patient. According to Neuralink, the N1 interprets neural activity and makes it available to computers, so that the person can control external devices with their mind alone. Musk and his team of researchers and engineers call this “electrophysiological recording.”

According to Musk, Neuralink initially aims to restore mobility in paralyzed people, with subsequent goals of restoring sight to the blind and hearing to the deaf. In short, the N1 device could benefit millions of people with miracle-like cures. If things go as Musk’s team predicts, the paralyzed will walk, the blind will see, and the deaf will hear.

Musk does not know that we are fast approaching the time when the Antichrist and the False Prophet force everybody to take the Mark of the Beast on their right hand or forehead. Could Musk’s Neuralink technology play a role in implementing the Mark of the Beast?

“Also it (the False Prophet) causes all, both small and great, both rich and poor, both free and slave, to be marked on the right hand or the forehead, so that no one can buy or sell unless he has the mark, that is, the name of the beast or the number of its name.” Revelation 13:16-17

The Church’s time is short: let us make sure we are in step with the Holy Spirit. He will direct our steps if we allow Him. Like Jesus in the Garden of Gethsemane, we need to say “not my will but yours be done” this day and every day until Jesus returns.

“Father, if you are willing, remove this cup from me. Nevertheless, not my will, but yours, be done.” Luke 22:42

WORLD ECONOMIC FORUM 2024 AGENDA & AI

Sam Altman is looking ahead to Artificial General Intelligence (AGI) — and a quiet life

The CEO of OpenAI made a big splash this year in Davos, where AI was referenced in nearly every session. As part of the official WEF agenda, Altman sat on a panel in Congress Hall, the event’s largest meeting venue, with the CEOs of corporate heavyweights Accenture, Pfizer, and Salesforce, along with British finance minister Jeremy Hunt. On the conference sidelines, at a lunch hosted by Salesforce co-founder Marc Benioff, Altman was the surprise guest speaker — and the victim of a minor grilling by Benioff about his personal goals for the next 10 years.

“In our industry, people are always overestimating [what they can accomplish] in a year but underestimating what they’re able to do in one decade and certainly in a couple of decades,” Benioff said. “So tell me, in one decade, where is your mind? What would you like to have under your belt, fully accomplished? Where do you see yourself? Where would you be living? What would you be doing? … What is going on? Give us the full picture now.”

After the laughter died down, the recently married Altman replied, “I think the definition of Artificial General Intelligence (AGI) has become so fuzzy, it’s sort of irrelevant. But I think in a decade we will have made something that most people will consider AGI. I hope to be working on that, watching this wonderful world unfold, living on our ranch, raising our kids, having as quiet of a life as I can.”

Somebody needs to tell Altman that the only way he can have a peaceful life is when he acknowledges God as our Creator, fears Him, and accepts His offer of salvation by accepting Jesus as his Lord and Saviour.

God has told us what will unfold in the years ahead, and it is not peaceful. It will be a time of escalating tribulation for Christians and Jews such as the world has never experienced before. God will use the persecution to purify His church. Already we are seeing much of the institutional church falling away, compromising with the world on issues such as homosexual pastors, gay marriage, and transgenderism.

God gives us a picture of the apostate church: it is the Church of Laodicea (Revelation 3:14-22). Jesus comes to rapture His church (the Church of Philadelphia, Revelation 3:8-13) before God pours out His wrath upon an unrepentant world with the Trumpet (Revelation 8 and 9) and Bowl (Revelation 16) judgements.

Our unrepentant family and friends need to be warned of what they are facing if they do not repent of their rebellion against God and accept His offer of salvation and eternal life achieved by God’s son, Jesus.

“For our sake, he (God the Father) made Him (Jesus Christ, God the Son) to be sin who knew no sin, so that in Him (Jesus) we might become the righteousness of God.” 2 Corinthians 5:21

The judgements escalate as we proceed through the Trumpet and Bowl judgements. Let me show you what people on Earth face at the fifth Trumpet judgement.

“And the fifth angel blew his trumpet, and I saw a star fallen from heaven to earth, and he was given the key to the shaft of the bottomless pit… Then from the smoke came locusts on the earth, and they were given power like the power of scorpions of the earth. They were told not to harm the grass of the earth or any green plant or any tree, but only those people who do not have the seal of God on their foreheads. They were allowed to torment them for five months, but not to kill them, and their torment was like the torment of a scorpion when it stings someone. And in those days people will seek death and will not find it. They will long to die, but death will flee from them.” Revelation 9:1, 3-6

This is what is called apocalyptic evangelism: using the many end-times Biblical prophecies to show the truth of God’s Word and the consequences for those who do not fear the God who created them in His image to be in relationship with Him.