A SUPERSONIC AI TSUNAMI IS COMING

Elon Musk describes what’s coming as a Supersonic Tsunami of converging exponentials. AI isn’t improving linearly anymore. We’re watching three exponential curves hit their inflection points simultaneously: compute scaling, model capabilities, and infrastructure deployment. When exponentials converge, you don’t get incremental progress. You get phase shifts.

Let me give you the raw numbers that demonstrate just how fast this is moving. What’s happening with AI revenue right now is unprecedented in the history of business. Anthropic hit $14 billion in annualized revenue in February 2026, growing from $1 billion just 14 months earlier. That figure has since surpassed $19 billion, more than doubling from $9 billion at the end of 2025. There is simply no precedent for this in B2B software.

And yet most people don’t know who Anthropic is or what it does. To put that figure in perspective: Anthropic’s monthly revenue run rate is now roughly $1.6 billion, and it keeps accelerating. Anthropic projects as much as $70 billion in revenue by 2028.

OpenAI reached $25 billion in annualized revenue at the end of February 2026, up from $21.4 billion at year-end 2025, with full-year 2025 revenue coming in at $13.1 billion. Both companies are now valued in the hundreds of billions: Anthropic at $380 billion following its $30 billion Series G, and OpenAI at approximately $730 billion after its most recent private round in February 2026, with an IPO potentially targeting a $1 trillion valuation.

Nvidia’s Jensen Huang recently finalized a $30 billion investment in OpenAI and a $10 billion investment in Anthropic, and told investors these will likely be Nvidia’s last private investments in either company, because both are heading toward public markets. Think about that: the CEO of Nvidia, who has better visibility into AI infrastructure demand than anyone on Earth, made $40 billion in bets on these two companies as his final pre-IPO move.

What’s driving this revenue? It’s not IT budgets anymore. The models — Claude from Anthropic, GPT-5 from OpenAI — have crossed a threshold. They’re now competing with labour budgets.

Companies aren’t buying AI to replace servers. They’re buying AI to augment and ultimately displace human labour.

What’s the breakthrough use case? Coding. Claude Code (Anthropic’s agentic coding tool) now has run-rate revenue above $2.5 billion, having more than doubled since the beginning of 2026. Business subscriptions have quadrupled since the start of the year, and enterprise use has grown to represent over half of all Claude Code revenue.

Now you can buy intelligence on a metered basis. Pay per token. No recruiting, no vetting, no retention, no equity. Just intelligence as a utility. Consumers pay $20/month. Enterprise power users pay $200/month. And companies are spending millions per year because the ROI is there.
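To make the metered-pricing point concrete, here is a back-of-envelope sketch in Python comparing a hypothetical token-metered agent against a salaried role. The token prices, token volumes, and salary below are illustrative assumptions, not published rates.

```python
# Back-of-envelope: intelligence priced per token vs. labour priced per year.
# All prices and volumes below are illustrative assumptions.

def monthly_token_cost(input_tokens_m, output_tokens_m,
                       price_in_per_m=3.00, price_out_per_m=15.00):
    """Metered cost in dollars for a month of agent usage.

    input_tokens_m / output_tokens_m: millions of tokens consumed.
    price_*_per_m: assumed dollars per million tokens (hypothetical).
    """
    return input_tokens_m * price_in_per_m + output_tokens_m * price_out_per_m

# A hypothetical heavy agentic workload: 200M input, 40M output tokens/month.
agent_cost = monthly_token_cost(200, 40)   # $1,200/month
annual_agent_cost = agent_cost * 12        # $14,400/year

# Compare against an assumed $120k/year fully loaded knowledge-work salary.
salary = 120_000
print(f"agent: ${annual_agent_cost:,.0f}/yr vs labour: ${salary:,.0f}/yr")
```

Even under these made-up numbers, the order-of-magnitude gap is the point: metered intelligence at a few cents per thousand tokens competes with labour budgets, not IT budgets.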

The Infrastructure Equation

Here’s the infrastructure reality that almost nobody is talking about loudly enough.

The five largest US hyperscalers — Microsoft, Alphabet, Amazon, Meta, and Oracle — have collectively committed to spending ~$690 billion on capital expenditure in 2026 alone, nearly doubling 2025 levels. The vast majority is directed at AI compute, data centers, and networking.

Total global AI spending is forecast to hit $2.5 trillion in 2026, a 44% increase over 2025, according to Gartner. Data centers, GPUs, power generation, chip fabrication. This is the largest infrastructure buildout in the history of technology, by a wide margin.

The rule of thumb in this industry: roughly $50 billion per gigawatt of infrastructure, and approximately $10 billion of annual revenue per gigawatt. Energy equals intelligence.
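The rule of thumb above can be sketched in a few lines of Python. The $50 billion/GW and $10 billion/GW-year figures come from the text; treating the entire $690 billion hyperscaler commitment as infrastructure at that cost is a simplifying assumption for illustration.

```python
# The article's rule of thumb: ~$50B of capex per gigawatt of AI
# infrastructure, ~$10B of annual revenue per gigawatt.
CAPEX_PER_GW = 50e9
REVENUE_PER_GW_PER_YEAR = 10e9

def simple_payback_years(capex_per_gw=CAPEX_PER_GW,
                         revenue_per_gw=REVENUE_PER_GW_PER_YEAR):
    """Naive payback period, ignoring opex, financing, and depreciation."""
    return capex_per_gw / revenue_per_gw

# Implied scale of the ~$690B 2026 hyperscaler commitment (from the text),
# under the simplifying assumption that it all buys $50B/GW infrastructure:
hyperscaler_capex_2026 = 690e9
implied_gw = hyperscaler_capex_2026 / CAPEX_PER_GW             # ~13.8 GW
implied_annual_revenue = implied_gw * REVENUE_PER_GW_PER_YEAR  # ~$138B/yr

print(f"payback ~{simple_payback_years():.0f} years, "
      f"~{implied_gw:.1f} GW, ~${implied_annual_revenue / 1e9:.0f}B/yr")
```

A five-year naive payback on a $50 billion asset is why this capital keeps flowing: the unit economics pencil out even before growth.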

On a recent earnings call, Jensen Huang estimated that between $3 trillion and $4 trillion will be spent on AI infrastructure by the end of the decade.

This isn’t hype. This is capital deployment at a scale that rewrites the rules of what’s possible. When you’re spending $50 billion on a single data center and generating $10 billion a year in revenue from it, you’re not building a product… you’re building a new economic substrate. You’re building the electricity grid of the 21st century.

The tsunami is here. The question is whether you’re building on the wave or getting buried by it.

AI: The Capability Jump

Those revenue numbers I just showed you are driven by real capability breakthroughs happening right now.

Start here: neuromorphic chips just solved complex physics simulations at 1,000x better energy efficiency than supercomputers. That’s not 10% better. That’s three orders of magnitude. When compute gets that cheap, you don’t just do the same things faster. You do entirely new things that were economically impossible before.

Drug discovery moves from weeks on supercomputer clusters to hours on desktop chips. Climate modeling that required national labs runs on university hardware. Real-time protein folding for personalized cancer treatment becomes viable. This is dematerialization, demonetization, and democratization followed by disruption (four of the Six D’s) in action.

Meanwhile, China’s DeepSeek launches its next-gen V4 models on Huawei and Cambricon chips instead of U.S. silicon. The AI race is officially multi-polar. OpenAI is preparing for the largest AI IPO in history.

And NVIDIA releases Alpamayo — the “ChatGPT moment for the physical world” — bringing reasoning to autonomous vehicles.

What it means: AI just moved from virtual to physical, from U.S.-dominated to globally distributed, and from expensive to radically cheap. All in the same week. And the revenue is proving it’s not experimental anymore: companies like Palantir, the U.S. military, and NVIDIA are running this in production for existential wartime operations.

Energy: Solving the Bottleneck

The elephant in the room: AI requires massive power. Those $50 billion data centers being built need gigawatts of electricity – and the grid was never designed for this.

Global electricity demand from data centers is set to more than double by 2030, reaching around 945 terawatt-hours: roughly equivalent to Japan’s entire annual electricity consumption. In the United States alone, data centers will account for nearly half of all electricity demand growth between now and 2030. AI will drive most of this increase, with electricity demand from AI-optimized data centers expected to more than quadruple by 2030.

Lawrence Berkeley National Laboratory projects U.S. data center electricity demand will grow from 176 TWh in 2023 to between 325 and 580 TWh by 2028 — representing up to 12% of total U.S. electricity consumption.
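The growth rate implied by those Lawrence Berkeley figures is worth making explicit. A minimal sketch, using only the numbers quoted above (176 TWh in 2023, 325–580 TWh by 2028):

```python
# Implied compound annual growth rates from the LBNL projection cited above:
# 176 TWh (2023) -> 325 to 580 TWh (2028), i.e. over 5 years.

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

low  = cagr(176, 325, 5)   # low-end scenario
high = cagr(176, 580, 5)   # high-end scenario
print(f"implied growth: {low:.0%} to {high:.0%} per year")
```

That works out to roughly 13% to 27% compound annual growth in data center electricity demand, for a load class the grid historically planned around low-single-digit growth to serve.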

The grid was simply not built for this. Interconnection queues are backed up two to three years, transmission permitting takes a decade, and the power plants needed don’t yet exist. In northern Virginia alone, a 2024 voltage fluctuation triggered the simultaneous disconnection of 60 data centers, a preview of what grid strain at scale actually looks like.

But look at what’s happening to solve it.

Nuclear fusion is converging – fast. China’s “Artificial Sun” EAST reactor recently breached a major fusion plasma density barrier that researchers had long considered impossible to cross. In 2025, France’s WEST tokamak sustained plasma for over twenty minutes, while EAST maintained high-confinement plasma for nearly eighteen minutes — demonstrating the levels of stability required for commercial operation.

On the private side, the race has never moved faster. Commonwealth Fusion Systems has raised nearly $3 billion, including investments from Nvidia and Google, with the ultimate goal of a 400-megawatt power plant — enough to power around 280,000 average U.S. homes. CFS’s SPARC demonstration machine is expected to produce its first plasma in 2026 and achieve net fusion energy shortly after — the first commercially relevant design to produce more power than it consumes. That paves the way for ARC, their grid-connected power plant, targeted for the early 2030s.

Helion Energy has also begun construction of its first commercial fusion plant, designed to supply power directly to Microsoft’s data centers starting from 2028.

Private fusion investment has mushroomed, growing to $10.6 billion between 2021 and 2025, with the number of private fusion companies more than doubling from 23 to 53 in the same period.

The timeline is compressing. “Fusion is 30 years away” is becoming “fusion this decade.” Fusion timelines are collapsing in real time — and AI is actually helping accelerate the plasma physics research itself. The irony: the technology that creates the power problem may also be helping to solve it.

The wild card: Tesla Terafab. On March 14, 2026, Elon Musk announced on X that the “Terafab Project launches in 7 days” (March 21st).

So, what is Terafab? Musk first outlined the concept at Tesla’s 2025 shareholder meeting, describing a chip fabrication facility comparable in scale to TSMC’s largest plants. During Tesla’s January 2026 earnings call, he confirmed the company would “have to build a Tesla TeraFab: a very big fab that includes logic, memory and packaging, domestically” to avoid hitting a hard ceiling on chip supply in three to four years.

The facility is designed to produce between 100 and 200 billion custom AI and memory chips per year, with an initial target of 100,000 wafer starts per month and an ambition to scale toward one million, roughly 70% of TSMC’s total output, concentrated in a single U.S. facility. The project carries an estimated cost of approximately $25 billion. Tesla’s fifth-generation AI chip, AI5, is expected to be among the first products fabricated at Terafab, with small-batch production in 2026 and volume production projected for 2027.

To be precise: March 21st almost certainly marks the formal kickoff: a groundbreaking or announcement event, not a fully operational fab. Semiconductor fabs of this scale take years to build and commission. But the signal matters enormously. Tesla is joining Apple, Google, Amazon, and Microsoft in a new category of tech company: one that controls its own silicon. When the largest AI compute consumers own their own chip supply chains, the semiconductor industry is permanently restructured.

What It All Means: The energy bottleneck that threatened to constrain AI is being attacked from every direction simultaneously: fusion physics breakthroughs, private capital pouring into next-generation reactors, nuclear power plant revivals, and vertical integration of the chip supply chain. This is abundance thinking in action. When problems get big enough, fast enough, the solutions scale to match.

The constraint isn’t permanent. It never was.

The Supersonic Tsunami: How It All Connects

Here’s what Elon understood: these are not separate trends. They’re one interlocking system.

Neuromorphic chips make AI 1,000x more efficient → inference becomes cheap enough to deploy everywhere → agentic systems run locally in robots and cars. Fusion energy solves the power bottleneck → enables massive AI training clusters → next-gen frontier models get deployed in humanoids → robots work in any environment and can be launched to orbit on Starship for space manufacturing.

And the capital is already flowing. $1 trillion in infrastructure. $50 billion data centers generating $10 billion annually. Companies going from $1 billion to $14 billion in 14 months. This is not speculation… it’s deployment at a scale that’s rewriting the rules.

The companies being built right now aren’t competing with 2024 business models.

Today’s companies are competing in an “Abundance Economy” where everything becomes possible, where intelligence is free, energy is abundant, labour is robotic, and orbital access is cheap.

Meanwhile, the professions are capitulating faster than the machines can replace them. An AMA survey found 81 percent of physicians now use AI, more than double the 2023 rate. New US Senate guidelines permit aides to use Gemini, ChatGPT, and Copilot for official work.

 Large language models, multimodal reasoning systems, and humanoid robots are not displacing one type of work — they are displacing all types of work, and the economic value of human time itself, across every sector, simultaneously.

There is no adjacent labor category to retrain into. The escalator that carried workers from disrupted industries to new ones for two centuries has no destination… it is crumbling.

That future isn’t ten years away. It’s arriving now and deploying over the next 12-24 months.

This will cause chaos, particularly for Gen Z. How do they prepare for work in the AI era? Biblical prophecy reveals that in a world that no longer believes God is in control, and as a spiritual war intensifies while Satan, the prince of this world, does his utmost to retain rulership, people worldwide will embrace Satan’s Antichrist ruler, who has supernatural powers and promises peace and prosperity. Watch as Biblical end-times prophecies unfold in our time.

WHAT YOU NEED TO KNOW ABOUT AGI

Those of you who follow my blog know that I am a Christian who has received the Holy Spirit as my counsellor, teacher, helper and comforter. I allow Him to guide my steps each day. Why do a post on AGI? God has given me a talent for business and technology, and He expects me to keep up and use it for good.

Biblical prophecy reveals we are in the end times prior to Jesus’ return to restore righteousness and initiate His 1,000-year reign to fulfill the covenants God made with Abraham, Isaac and Jacob when He established the nation of Israel for His purposes. Want to know more about what is next on God’s agenda for planet Earth? Go to http://www.millennialkingdom.net. We will certainly be using AI in Jesus’ Millennial Kingdom.

Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks. Beyond AGI, artificial superintelligence (ASI) would outperform the best human abilities across every domain by a wide margin. Unlike artificial narrow intelligence (ANI), whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming.

Creating AGI is a stated goal of AI technology companies such as OpenAI, Google, xAI, and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries. Contention exists over whether AGI represents an existential risk. Some AI experts and industry figures have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be in too remote a stage to present such a risk.

AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action. Some academic sources reserve the term “strong AI” for computer programs that will experience sentience or consciousness. In contrast, weak AI (or narrow AI) can solve one specific problem but lacks general cognitive abilities. Some academic sources use “weak AI” to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.

Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution.

A framework for classifying AGI was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI (comparable to unskilled humans). Regarding the autonomy of AGI and associated risks, they define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous).
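The DeepMind performance tiers can be expressed as a simple lookup. A minimal sketch in Python: the 50% (competent) and 100% (superhuman) thresholds come from the text above, while the 90th- and 99th-percentile cut-offs for expert and virtuoso follow the same DeepMind paper but should be treated as assumptions here.

```python
# DeepMind-style AGI performance levels, as percentile thresholds of
# skilled-adult performance. Competent (50%) and superhuman (100%) are
# stated in the text; expert (90%) and virtuoso (99%) are assumptions
# following the same framework.
LEVELS = [
    (100, "Superhuman"),
    (99,  "Virtuoso"),
    (90,  "Expert"),
    (50,  "Competent"),
    (0,   "Emerging"),
]

def classify(percent_of_adults_outperformed):
    """Map 'outperforms X% of skilled adults' to a performance level."""
    for threshold, label in LEVELS:
        if percent_of_adults_outperformed >= threshold:
            return label

print(classify(55))   # Competent
print(classify(100))  # Superhuman
```

Under this mapping, a system that outperforms 55% of skilled adults is merely competent; the jump from competent to superhuman spans the entire remaining half of the distribution.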

Researchers generally hold that a system is required to do all of the following to be regarded as an AGI:

  • reason, use strategy, solve puzzles, and make judgments under uncertainty
  • represent knowledge, including common sense knowledge
  • plan
  • learn
  • communicate in natural language
  • if necessary, integrate these skills in completion of any given goal

Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy.

Computer-based systems exhibiting these capabilities are now widespread, with modern large language models demonstrating computational creativity, automated reasoning, and decision support simultaneously across domains. Earlier systems such as evolutionary computation, intelligent agents, and robots demonstrated these capabilities in isolation, but the convergence of multiple cognitive abilities within single architectures from GPT-3.5 onwards marked a qualitative shift in the field.

Physical traits

Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:

  • the ability to sense (e.g. see and hear), and
  • the ability to act (e.g. move and manipulate objects, change location to explore)

This includes the ability to detect and respond to hazards.

Tests for human-level AGI

Several tests meant to confirm human-level AGI have been considered, including the following.

The Turing Test (Turing)

The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behaviour and may incentivize artificial stupidity.

Proposed by Alan Turing in his 1950 paper “Computing Machinery and Intelligence”, this test involves a human judge engaging in natural language conversations with both a human and a machine designed to generate human-like responses. The machine passes the test if it can convince the judge that it is human a significant fraction of the time. Turing proposed this as a practical measure of machine intelligence, focusing on the ability to produce human-like responses rather than on the internal workings of the machine. Turing described the test as follows: “The idea of the test is that the machine has to try and pretend to be a man, by answering questions put to it, and it will only pass if the pretence is reasonably convincing. A considerable portion of a jury, who should not be experts about machines, must be taken in by the pretence.”

In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy, reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this claim was met with significant scepticism from the AI research community, who questioned the test’s implementation and its relevance to AGI. In 2023, Kirk-Giannini and Goldstein argued that while large language models were approaching the threshold of passing the Turing test, “imitation” is not synonymous with “intelligence”. This distinction has been challenged on scientific grounds: neuroscience has established that biological intelligence arises from electrochemical signalling between neurons — a purely physical process with no known non-physical component. Both biological neural networks and artificial neural networks are physical systems processing information according to physical laws; to claim that one substrate produces “real” intelligence while the other produces “mere imitation” despite equivalent observable behaviour requires positing a non-physical property unique to biological matter — a position in tension with modern science and akin to substance dualism. A 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized, controlled version of the Turing Test — surpassing older chatbots like ELIZA while still falling behind actual humans (67%). A 2025 pre-registered, three-party Turing-test study by Cameron R. Jones and Benjamin K. Bergen showed that GPT-4.5 was judged to be the human in 73% of five-minute text conversations — surpassing the 67% humanness rate of real confederates and meeting the researchers’ criterion for having passed the test.

The Robot College Student Test (Goertzel)
A machine enrols in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes.
The Employment Test (Nilsson)
A machine performs an economically important job at least as well as humans in the same job. This test is now arguably passed across multiple domains. In knowledge work, frontier large language models are deployed as autonomous agentic systems handling software engineering, legal research, financial analysis, customer service, and marketing tasks.

The Ikea Test (Marcus)
Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack product, then controls a robot to assemble the furniture correctly. As early as 2013, MIT’s IkeaBot demonstrated fully autonomous multi-robot assembly of an IKEA Lack table in ten minutes, with no human intervention and no pre-programmed assembly instructions — the robots inferred the assembly sequence from the geometry of the parts alone. In December 2025, MIT researchers demonstrated a “speech-to-reality” system combining large language models with vision-language models and robotic assembly: a user says “I want a simple stool” and a robotic arm constructs the furniture from modular components within five minutes, using generative AI to reason about geometry, function, and assembly sequence from natural language alone. The FurnitureBench benchmark, published in the International Journal of Robotics Research in 2025, now provides a standardised real-world furniture assembly benchmark with over 200 hours of demonstration data for training and evaluating autonomous assembly systems.

The Coffee Test (Wozniak)
A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons. This test has been substantially approached across multiple systems.
In January 2024, Figure AI’s Figure 01 humanoid learned to operate a Keurig coffee machine autonomously after watching video demonstrations, using end-to-end neural networks to translate visual input into motor actions. In 2025, researchers at the University of Edinburgh published the ELLMER framework in Nature Machine Intelligence, demonstrating a robotic arm that interprets verbal instructions, analyses its surroundings, and autonomously makes coffee in dynamic kitchen environments — adapting to unforeseen obstacles in real time rather than following pre-programmed sequences. China-based Stardust Intelligence demonstrated its Astribot S1 using Physical Intelligence’s model to make coffee from the high-level command “make coffee”, with the system identifying objects such as mugs and coffee makers even when misplaced or in unexpected locations. Physical Intelligence subsequently reported that its π*0.6 model could make espresso continuously for an entire day with failure rates dropping by more than half compared to earlier versions. The strict form of the test — entering a completely unfamiliar home and navigating it from scratch — has not been formally demonstrated end-to-end, though the combination of LLM-driven reasoning, visual object recognition in novel environments, and autonomous manipulation brings current systems close to meeting the original specification.

The Modern Turing Test (Suleyman)
An AI model is given US$100,000 and has to obtain US$1 million. This test was arguably surpassed in October 2024 by Truth Terminal, a semi-autonomous AI agent built on Meta’s Llama 3.1 (with earlier iterations based on Claude 3 Opus). Created by AI researcher Andy Ayrey, Truth Terminal originated from an experiment called “Infinite Backrooms” in which two Claude Opus instances were allowed to converse freely, during which they spontaneously generated a satirical meme religion dubbed the “Goatse Gospel”. After venture capitalist Marc Andreessen donated US$50,000 in Bitcoin to the agent, Truth Terminal’s promotion of the Goatseus Maximus (GOAT) memecoin on the Solana blockchain drove the token to over US$1 billion in market capitalisation within days of its launch — far exceeding Suleyman’s US$1 million threshold. Truth Terminal’s own crypto wallet accumulated approximately US$37.5 million, making it the first AI agent to become a millionaire through its own market activity. The test’s spirit — demonstrating that an AI can generate substantial economic value from a modest starting position — was met, though with caveats: Ayrey reviewed posts before publication and assisted with wallet mechanics, making the agent semi-autonomous rather than fully independent.

The General Video-Game Learning Test (Goertzel, Bach et al.)
An AI must demonstrate the ability to learn and succeed at a wide range of video games, including new games unknown to the AGI developers before the competition. The importance of this threshold was echoed by Scott Aaronson during his time at OpenAI. In December 2025, Google DeepMind released SIMA 2 (Scalable Instructable Multiworld Agent), a Gemini-powered generalist agent that operates across multiple commercial 3D games — including No Man’s Sky, Valheim, and Goat Simulator 3 — using only rendered pixels and a virtual keyboard and mouse, with no access to game source code or internal APIs. Where the original SIMA achieved a 31% success rate on complex tasks compared to humans at 71%, SIMA 2 roughly doubled that rate and demonstrated robust generalisation to previously unseen game environments, including self-improvement through autonomous play without human feedback. Separately, frontier LLMs with computer-use capabilities can interact with arbitrary software through screen observation and mouse/keyboard control, theoretically enabling gameplay of any title, though current implementations remain too slow for real-time performance in fast-paced games. The test has not been formally passed in its strictest sense — a single agent mastering any arbitrary unseen game at human level — but the gap is narrowing rapidly.
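As a quick plausibility check on the 2025 Turing-test figure, here is a minimal sketch in Python using a normal-approximation confidence interval. The sample size is an illustrative assumption, since the study’s n is not quoted above.

```python
import math

# Rough check of the 2025 result cited above: GPT-4.5 judged human in 73%
# of conversations, vs. a 50% chance baseline. The sample size below is an
# illustrative assumption (the study's actual n is not given here).

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

low, high = wald_ci(0.73, 100)   # assumed n = 100 judgments
print(f"95% CI: [{low:.3f}, {high:.3f}]")
# If the lower bound clears 0.5, the result is unlikely to be chance.
print("beats chance" if low > 0.5 else "inconclusive")
```

Even at this modest assumed sample size, the interval sits well above the 50% chance line, which is why the researchers could claim the criterion was met.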

AI-complete problems (AI-complete)

A problem is informally called “AI-complete” or “AI-hard” if it is believed that AGI would be needed to solve it, because the solution is beyond the capabilities of a purpose-specific algorithm.

Many problems have been conjectured to require general intelligence to solve. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Even a specific task like translation requires a machine to read and write in both languages, follow the author’s argument (reason), understand the context (knowledge), and faithfully reproduce the author’s original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance. However, many of these tasks can now be performed by modern large language models. According to Stanford University’s 2024 AI index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning.

In September 2025, a review of surveys of scientists and industry experts from the last 15 years reported that most agreed that artificial general intelligence (AGI) will occur before the year 2100. A more recent analysis by AIMultiple reported that “Current surveys of AI researchers are predicting AGI around 2040”. OpenAI CEO Sam Altman said in December 2025 that “we built AGIs” and that “AGI kinda went whooshing by” with less societal impact than expected, proposing the field move on to defining superintelligence.

The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in broad outline. The overhead introduced by full modelling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil’s estimate. In addition, the estimates do not account for glial cells, which are known to play a role in cognitive processes.

Whole brain emulation is a type of brain simulation that is discussed in computational neuroscience and neuroinformatics, and for medical research purposes. It has been discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it. A fundamental criticism of the simulated brain approach derives from embodied cognition theory, which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning. If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel proposes virtual embodiment (in metaverses such as Second Life) as an option, but it is unknown whether this would be sufficient.

“Strong AI” as defined in philosophy

In 1980, philosopher John Searle coined the term “strong AI” as part of his Chinese room argument. He proposed a distinction between two hypotheses about artificial intelligence:

  • Strong AI hypothesis: An artificial intelligence system can have “a mind” and “consciousness”.
  • Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.

The first one he called “strong” because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a “weak AI” machine would be identical to a “strong AI” machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks.

In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term “strong AI” to mean “human level artificial general intelligence”. This is not the same as Searle’s strong AI, unless it is assumed that consciousness is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers, the question is out of scope.

Mainstream AI is most interested in how a program behaves. According to Russell and Norvig, “as long as the program works, they don’t care if you call it real or a simulation.” If the program can behave as if it has a mind, then there is no need to know if it actually has a mind – indeed, there would be no way to tell. For AI research, Searle’s “weak AI hypothesis” is equivalent to the statement “artificial general intelligence is possible”. Thus, according to Russell and Norvig, “most AI researchers take the weak AI hypothesis for granted, and don’t care about the strong AI hypothesis.” Thus, for academic AI research, “Strong AI” and “AGI” are two different things.

Consciousness (Artificial consciousness)

Consciousness can have various meanings, and some aspects play significant roles in science fiction and the ethics of artificial intelligence:

  • Sentience (or “phenomenal consciousness”): The ability to “feel” perceptions or emotions subjectively, as opposed to the ability to reason about perceptions. Some philosophers, such as David Chalmers, use the term “consciousness” to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Determining why and how subjective experience arises is known as the hard problem of consciousness. Thomas Nagel explained in 1974 that it “feels like” something to be conscious. If we are not conscious, then it doesn’t feel like anything. Nagel uses the example of a bat: we can sensibly ask “what does it feel like to be a bat?” However, we are unlikely to ask “what does it feel like to be a toaster?” Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not. In 2022, a Google engineer claimed that the company’s AI chatbot, LaMDA, had achieved sentience, though this claim was widely disputed by other experts.
  • Self-awareness: To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one’s own thoughts. This is opposed to simply being the “subject of one’s thought”—an operating system or debugger can be “aware of itself” (that is, to represent itself in the same way it represents everything else)—but this is not what people typically mean when they use the term “self-awareness”. In some advanced AI models, systems construct internal representations of their own cognitive processes and feedback patterns—occasionally referring to themselves using second-person constructs such as ‘you’ within self-modelling frameworks.

These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals. Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights. Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue.

Benefits of AGI

AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer. It could take care of the elderly, and democratize access to rapid, high-quality medical diagnostics. It could offer fun, inexpensive and personalized education. The need to work to subsist could become obsolete if the wealth produced is properly redistributed. This also raises the question of the place of humans in a radically automated society.

AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks. If an AGI’s primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true), it could take measures to drastically reduce the risks while minimizing the impact of these measures on our quality of life.

If you’re not using AI daily in your work, you’re falling behind exponentially. Not linearly. Exponentially.

“Let the wise listen and add to their learning, and let the discerning get guidance.” Proverbs 1:5

Skills may change, but the posture remains the same: humility, growth and a willingness to learn.

A PERMANENT OLIGOPOLY OF THE MOST ADVANCED INTELLIGENCE SYSTEMS

As the “Big Five” tech companies develop their own proprietary hardware, the barrier to entry for a new cloud provider becomes nearly insurmountable. It is no longer enough to buy a fleet of GPUs; a competitor would now need to invest billions in R&D to design their own chips just to achieve price parity. This could lead to a permanent oligopoly in the AI infrastructure space, where only a handful of companies possess the specialized hardware required to run the world’s most advanced intelligence systems.

The Road to 2027 and Beyond

Looking ahead, the silicon wars are only expected to intensify. Even as Google’s TPU v6 and Meta’s MTIA v3 dominate the headlines today, Google is already beginning the limited rollout of TPU v7 (Ironwood), its first 3nm chip designed for massive rack-scale computing, and Elon Musk has also talked about developing chips for his companies. Experts predict that by 2027, we will see the first 2nm AI chips entering the prototyping phase, pushing the limits of Moore’s Law even further. The focus will likely shift from raw compute power to “interconnect density”—how fast these thousands of custom chips can talk to one another to form a single, giant “planetary computer.”

We also expect to see these custom designs move closer to the “edge.” While 2026 is the year of the data center chip, the architectural lessons learned from MTIA and TPU are already being applied to mobile processors and local AI accelerators. This will eventually lead to a seamless continuum of AI hardware, where a model can be trained on a TPU v6 cluster and then deployed on a specialized mobile NPU (Neural Processing Unit) that shares the same underlying architecture, ensuring maximum efficiency from the cloud to the pocket.

The primary challenge moving forward will be the talent war. Designing world-class silicon requires a highly specialized workforce of chip architects and physical design engineers. As hyperscalers continue to expand their hardware divisions, the competition for this talent will be fierce. Furthermore, the geopolitical stability of the semiconductor supply chain remains a lingering concern.

While Google and Meta design their chips in-house, they still rely on foundries like TSMC for production. Any disruption in the global supply chain could stall the ambitious rollout plans for 2027 and beyond.

Conclusion: A New Era of Infrastructure

The mass production of Google’s TPU v6 and Meta’s MTIA v3 in early 2026 represents a pivotal moment in the history of computing. It marks the end of NVIDIA’s absolute monopoly and the beginning of a new era of vertical integration and specialized hardware. By taking control of their own silicon, hyperscalers are not only reducing costs but are also unlocking new levels of performance that will define the next generation of AI applications.

In terms of significance, 2026 will be remembered as the year the “AI infrastructure stack” was finally decoupled from the gaming GPU heritage. The move to ASICs represents a maturation of the field, where efficiency and specialization are the new metrics of success. This development ensures that the rapid pace of AI advancement can continue even as the physical and economic limits of general-purpose hardware are reached.

In the coming months, the industry will be watching closely to see how NVIDIA responds with its upcoming Vera Rubin (R100) architecture and how quickly other players like Microsoft and AWS can scale their own designs. The battle for the heart of the AI data center is no longer just about who has the most chips, but who has the smartest ones. The silicon divorce is finalized, and the future of intelligence is now being forged in custom-designed silicon.


In a world that no longer fears God or even recognises His existence, it is fast approaching the time when God steps in and pours out His wrath upon an unrepentant world. Satan understands the time and he will do his utmost to maintain control of the world. God has revealed what he will do. He takes control of a human being and with supernatural acts presents himself as the saviour of the world: no wonder he is called the Antichrist. Many of the Biblical end times prophecies have been fulfilled, next to watch for is the Goat (Turkey) and Ram (Iran) war of Daniel 8.

WHAT SPACEX AND xAI MEAN FOR HUMANS

Elon Musk just merged SpaceX with xAI, and Tesla may be next. This could be the biggest turning point in Tesla’s history.

Imagine data centres in space, even on the Moon, powered by solar energy, which is roughly six times more effective in space than on Earth. Hence: solar power collected in space supplying energy on Earth.

Jo Bhakdi is an economist and tech entrepreneur. He has a YouTube channel covering both Tesla and AGI. Check out his website and community at pioneerlands.org

What does this mean in terms of Biblical history? It is interesting to see this comment in the Book of Daniel, where we find many end times prophecies.

“But you, Daniel, shut up the words and seal the book, until the time of the end. Many shall run to and fro, and knowledge shall increase.” Daniel 12:4

For Daniel this revelation must have seemed strange, as there was no internet, AI, cars, or planes back in Daniel’s day. But for us it makes good sense. Never before has knowledge increased to the extent it is increasing today with AI. Along with the many other end times signs, we can know we are in the last days before the prophesied return of Jesus takes place. Are you ready for what is next on God’s calendar for planet Earth: Jesus’ Millennial Kingdom? To get prepared, I suggest you go to http://www.millennialkingdom.net where you will find all the information you will need.

$12 BILLION JOB MASSACRE: AMAZON’S ROBOT ARMY WILL REPLACE 600,000 WORKERS

A silent revolution is unfolding inside Amazon—and it’s far more than a warehouse upgrade. Leaked internal documents reveal that the company plans to automate up to 75% of its operations by 2033, using robots like Cardinal, Blue Jay, Sparrow, and Robin to replace what would have been over 600,000 human jobs. The implications stretch across every sector of the American economy. In this video, you’ll uncover how Amazon’s robotic workforce is reshaping the meaning of employment, why businesses are prioritizing efficiency over labour, and what kinds of skills—and investments—will actually survive the coming automation wave. From AI-driven logistics to the rise of collaborative robotics, this isn’t science fiction; it’s the new economic reality unfolding in real time. You’ll also learn how artificial intelligence has accelerated Amazon’s transformation, why other corporations are preparing to follow, and what steps individuals can take to protect their financial and professional futures. The question isn’t if automation will replace jobs—it’s how fast it will happen, and who will be prepared when it does.

This is not only happening at Amazon, UPS has disclosed about 48,000 job cuts this year.

Governments that have rejected God have already descended into chaos on all fronts so this move by companies to shed jobs will only further heighten the mayhem. There is only one answer to the problem and that has been revealed to us in God’s Word. Jesus returns as king first to rescue His Saints and then to pour out His wrath upon an unrepentant world with the Trumpet and Bowl judgements of Revelation 8 & 16. The battle of Armageddon follows and then Jesus sets up His Millennial Kingdom. To prepare for Jesus coming Millennial Kingdom go to http://www.millennialkingdom.net

AI, FAITH, AND THE FUTURE: A CONVERSATION CHRISTIANS MUST HEAR

What should Christians think about AI? Artificial Intelligence is reshaping culture, business, education, and even the way we think about human identity. In this roundtable conversation, Sean McDowell talks with 3 Biola professors to explore how believers can navigate the rapidly changing world of AI with wisdom and clarity. This is an intriguing conversation that I am sure you will appreciate and learn from.

REVIEW OF 2024 ECONOMIC EVENTS

It was a pretty remarkable year in the world of high-tech: a shining light on the path towards a world of abundance. 2024 marked the beginning of what Jeff Brown of Brownstone Research calls manifested AI, a good framework for thinking about this trend: we humans will manifest artificial intelligence (AI) into forms that are easy for us to understand and interface with. This is an unstoppable trend, and it will be the key driver behind the gains in the NASDAQ in 2025.

The composite index rose an impressive 32% this year, but the rise wasn’t spread across the entire market. The Magnificent Seven (Alphabet, NVIDIA, Amazon, Apple, Microsoft, Tesla, and Meta) had an outsized impact on the index’s returns for the year, fueled by the explosion of artificial intelligence (AI), which defined the year’s performance. And of those seven stocks, it was really NVIDIA, Tesla, and Meta that drove the group’s average return of almost 70% this year.

OpenAI’s ChatGPT became a household name this year. Generative AI from various tech companies is now common in households and workplaces across all demographics and industries. Further, the rate at which AI technology is advancing is astonishing.

Microsoft has been closely linked to the success of OpenAI due to its “controlling” interest in the private company. Meta has developed one of the leading large language models (LLMs). NVIDIA has released its new Blackwell GPUs, which deliver an impressive jump in performance over last year’s model. And Tesla has not only manifested AI in the form of its latest self-driving cars, trucks, and now robotaxis and robovans using version 13.2 of its full self-driving (FSD) software; it has also manifested AI in the form of its humanoid robot, Optimus, which will ultimately become a market larger than that of electric vehicles. The incredible investment in AI and the application of AI in businesses is what drove the best gains in the stock market. For reference, hundreds of billions of dollars were spent this year building “AI factories”: data centers designed to both train and run AI applications. This has driven outsized share price gains for the key companies powering this trend forward.

The energy required to power these AI factories is huge. The USA with its oil and natural gas reserves and commitment to nuclear energy is well placed to maintain its leadership in the AI space.

Nvidia (NVDA) is finishing up the year with a gain of about 183%. And several of Jeff’s recommendations in Exponential Tech Investor have done even better: AppLovin (APP) is up 732% for the year (and it was up as high as 935% in early December). Credo Technology (CRDO) is up 276% in 2024. Real Brokerage (REAX) has gained 206% this year. In addition, Vertiv (VRT) is up 152% this year. The bottom line is that the right AI stocks were the place to be in 2024.

A look at the video below reveals the big issue facing the USA in the years ahead: robots and AI replacing workers. Amazon was the first to introduce robots to replace workers in their warehouses and now Walmart, Costco, and others are following their lead. Also, imagine the impact driverless cars and trucks will have on jobs. 2025 will be a year of massive change and I am not including the spiritual battle that is coming to a final showdown between Jesus and Satan at the Battle of Armageddon.

In the next post we will take a look at what Jeff Brown thinks of 2025.

AI DATA CENTRES MAKE NUCLEAR NECESSARY IN THE ENERGY MIX

This article is adapted from an article by Michael Robinson. He has spent more than four decades as an investigative journalist uncovering the story behind massive tech trends.

A massive energy crisis is here … and it’s all because of artificial intelligence. This is one of the reasons why solar and wind (intermittent renewables) are inadequate for maintaining supply. Nuclear is considered the best option to stabilise the energy mix.

On average, just one new AI data center currently requires the same amount of electricity as 750,000 homes. That’s more than the population of cities like Seattle, Detroit, and Denver.

Nearly 3,000 more of them are on the way. No wonder Tirias Research forecasts that, by 2028, data center power consumption will be 212 times what it was in 2023.
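As a rough back-of-envelope check on these figures, the “750,000 homes” claim can be converted to watts. The per-home draw below is an assumption, not from the article: an average U.S. home uses roughly 1.2 kW on a continuous basis (about 10,500 kWh per year).

```python
# Back-of-envelope: power demand of one AI data center, using the
# article's figure of 750,000 homes' worth of electricity.
# ASSUMPTION (not from the article): an average U.S. home draws
# roughly 1.2 kW continuously.

AVG_HOME_KW = 1.2              # assumed continuous draw per home, in kW
HOMES_PER_DATA_CENTER = 750_000

# kW -> MW: divide by 1,000
data_center_mw = AVG_HOME_KW * HOMES_PER_DATA_CENTER / 1_000
print(f"One data center ≈ {data_center_mw:,.0f} MW")   # ≈ 900 MW

# Scale to the ~3,000 additional data centers the article mentions
total_gw = data_center_mw * 3_000 / 1_000
print(f"3,000 data centers ≈ {total_gw:,.0f} GW")      # ≈ 2,700 GW
```

Both numbers depend entirely on the assumed per-home draw, but even at this rough level they show why grid capacity, not chips, is treated here as the binding constraint.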

This boom in AI data centers will push America’s power grid to the brink. According to the New York Times, the world is “poised to add the equivalent of Japan’s annual electricity demand to grids each year” over the next decade.

It could bring AI screeching to a halt, to say nothing of the toll on regular people as utility bills skyrocket, even as they face planned blackouts to conserve energy and prolonged outages because of creaky infrastructure.

Fortunately, Meta announced yesterday a request for proposals from nuclear power developers who would help the company add 1 to 4 gigawatts of electricity generating capacity in the U.S. According to Axios, Meta is willing to share costs early in the cycle and will commit to buying power once the reactors are up and running.

The hitch? Applicants have to move fast. Initial proposals are due February 7, 2025, and Meta wants the power plants to begin operation in the early 2030s.

Microsoft has signed a deal with one of the most infamous nuclear power facilities in the US as it looks for more ways to ensure the demand for AI computing is met.

The legacy of the Three Mile Island (TMI) nuclear plant has long been shaped by the 1979 Unit 2 meltdown, which had a profound effect on public perceptions of nuclear energy. What a lot of people don’t know is that Unit 1 was not only unaffected, but continued to operate safely and reliably for decades.

Now, in a major new step, Constellation has signed its largest power purchase agreement with Microsoft, leading to the planned restoration and restart of TMI Unit 1 under the name Crane Clean Energy Center (CCEC). The project is expected to bring 835 megawatts of carbon-free power to the grid, create 3,400 jobs, and contribute over $3 billion in taxes.

Considering this move in the USA, it will be interesting to learn how Microsoft plans to power its new data centers in Australia.

Microsoft will invest A$5 billion ($3.2 billion) in Australia to expand its cloud computing and AI infrastructure over the next two years, in what the US company described as its largest investment in the country in four decades. Announced as part of Prime Minister Anthony Albanese’s visit to the US this week, the investment will help Microsoft grow its data centers across Canberra, Sydney, and Melbourne by 45% – from 20 sites up to 29.

The following video shows that power constraints are the major problem facing Data Centre growth.

THE IMPACT OF AI & ROBOTICS ON JOBS

Amazon, the online retail behemoth, is the second-largest private employer in the U.S., with 1.5 million people. It’s also one of the largest investors in AI. It’s building out fully autonomous warehouses… It’s working to automate the delivery process with self-driving vans and delivery drones… and 30% of its “workforce” are already robots.

Robots don’t sleep. They don’t take vacations. They never need a break. So, let me ask you this: How much longer until Amazon decides these robots are ready to take on the full workload of its 1.5 million remaining workers? And what do you think will happen when 1.5 million hardworking Americans are suddenly out of a job?

Amazon is far from the only giant that has done this. Walmart Inc. (WMT), the largest private employer in the U.S., is rolling out fully automated distribution centers using a combination of AI and robots. The company’s distribution centers come with one big catch: No people.

Finding good critical thinkers is one of the most difficult things in business. I’m talking about “knowledge workers” – i.e., accountants, business strategists, lawyers, and doctors – who make optimal decisions, are good at problem-solving, can strategize, and always act in the business’s best interests.

The only thing more difficult than finding one good knowledge worker… is finding multiple. For most of the modern era, knowledge work was difficult to scale. These professions typically require years of education, on-the-job training, and experience. That adds up to lots of time and money.

But now, ChatGPT and other forms of AI are killing that paradigm. They make it possible for businesses to scale critical thinking and knowledge work. It doesn’t matter if you have a blue or white collar: AI could threaten your job.

The Quantum Leap

AI is more of a threat now than it was when ChatGPT made its debut in 2022 because of quantum computing.

In classical computing, data is encoded in binary bits. Transistors are like tiny switches that can either be in the on or off position – represented by ones and zeroes.

Every app you use, website you go to, and photo you take is made up of millions of combinations of ones and zeroes. However, a new kind of computing power surpasses the binary limitations of classical computers.

Quantum computers use quantum bits, or qubits, which can represent both 0 and 1 simultaneously due to a concept called superposition. If this all sounds a little confusing, don’t worry. Here’s the critical thing you need to know… Quantum computers allow for multiple computational pathways to be explored at the same time, opening new avenues and speeds for solving complex problems.

Imagine you have a large library and you’re trying to find a specific book. In classical computing, you would search for the book by examining each bookshelf and book one at a time until you find the right one. This approach can be time-consuming, especially if the book you’re looking for is in the back of the library. But imagine a computer so powerful it can explore all the books at once… That’s quantum computing.

It’s been said that the differences between quantum computers and classical computers are even more vast than those between classical computers and pen and paper. That may be an understatement.

We’re talking about machines so advanced that they can instantly execute calculations that would take the world’s most advanced supercomputers nearly half a century to process. It’s not like the transition from the horse and buggy to the automobile. It’s more like horse and buggy to the SR-71 Blackbird… the fastest jet ever made. The difference is utterly mind-blowing.

Quantum computing is real and it’s here. You can think of it as the turbocharger for artificial intelligence. It makes AI faster, more efficient, and more precise.

Quantum computing will give us the ability to solve intractable problems that take so long to tackle using today’s computers that no one even bothers trying. We’re talking about the ability to create, replicate, and commercialize complex analytical thinking and precise physical movement on a scale never seen before… This is the next huge leap forward for the digital elite… the people who have been getting rich while most Americans have been getting left behind.

With quantum computing, the digital elite will have AI that is not constrained by computational speed. Quantum-powered AI will drive efficiencies by getting rid of human workers and driving up corporate profits. No matter the job, robots, software and AI – powered by a new breed of computer – will be able to do it better and cheaper.

I expect that in the quantum era, we will see more technological progress in one month than we currently see in three years. This is the societal and economic equivalent of a 10,000-foot mega-tsunami. It’s about to slam into our world and alter the trajectory of our country forever.

Governments that already have a debt problem that is out of control will be unable to address this unemployment problem and anarchy and lawlessness will be the outcome. The scene is being set for the Biblical prophesied rise of the Antichrist and a one-world government. The recent UN Pact for the Future where world leaders made a landmark declaration pledging concrete actions towards a safer, more peaceful, sustainable, and inclusive world for tomorrow’s generations is a significant move towards a one-world government. My next post will give more detail on the Pact for the Future.

THE FUTURE WITH AGI AND THE MARK OF THE BEAST

AI is improving at an exponential rate. And we’re quickly reaching a tipping point where the future will look nothing like the past. This point is known as artificial general intelligence (AGI). It is the top level of artificial intelligence. Some even call it humanity’s final invention.

Artificial general intelligence refers to AI that can mimic human cognitive abilities. To put it simply, AI is becoming smarter than the smartest human.

There are already some signs of what AGI will look like. Last month, OpenAI, the creator of ChatGPT, claimed that its most advanced AI models are now bordering on the second of five levels of “Super AI.” Many people can no longer tell the difference between AI chatbots and human-generated text responses.

AI will turbocharge the robotics trend. Last week, OpenAI-backed robotics startup Figure AI released a two-minute video of its humanoid robots completing tasks at a BMW plant in Spartanburg, South Carolina (see video below). These machines are now capable of learning from their mistakes and, unlike their robotic arm predecessors, are designed to move in spaces made for humans. That allows them to take on directly competing roles. 

Back in January, Elon Musk’s Neuralink company implanted the first N1 device in the brain of a quadriplegic patient… and it worked. The patient could play chess online and browse the internet with only his mind.

Now, one of Musk’s R1 robots has successfully implanted one of Neuralink’s N1 chips in the brain of a second paraplegic patient. According to Neuralink, the N1 interprets neural activity and makes it available to computers. Then the person can control external devices with their mind alone. Musk and his team of researchers and engineers call this “electrophysiological recording.”

According to Musk, Neuralink initially aims to restore mobility in paralyzed people, with subsequent goals of restoring sight to the blind and hearing to the deaf. In short, the N1 device could benefit millions of people with miracle-like cures. If things go as Musk’s team predicts, the paralyzed will walk, the blind will see, and the deaf will hear.

Musk does not know that we are fast approaching the time when the Antichrist and the False Prophet force everybody to take the Mark of the Beast on their right hand or forehead. Could Musk’s Neuralink technology play a role in implementing the Mark of the Beast?

“Also it (False Prophet) causes all, both small and great, both rich and poor, both free and slave, to be marked on the right hand or the forehead, so that no one can buy or sell unless he has the mark, that is, the name of the beast or the number of its name.” Revelation 13:16-17

Church time is short: let us make sure we are in step with the Holy Spirit. He will direct our steps if we allow Him. Like Jesus in the Garden of Gethsemane, we need to say “not my will but yours be done” this day and every day until Jesus returns.

“Father, if you are willing, remove this cup from me. Nevertheless, not my will, but yours, be done.” Luke 22:42