GENERATIVE AI IS ABOUT TO CAUSE HUMANITY TO FORK

The following article “Humanity is About to Fork” is by Peter Diamandis. Named by Fortune as one of the World’s 50 Greatest Leaders, Peter H. Diamandis, MD is a pioneer in innovation, longevity, and exponential technologies.

He is the Founder and Executive Chairman of XPRIZE Foundation, which has launched over $600 million in competitions driving more than $10 billion in research and development across space, health, robotics, climate, quantum and AI. Peter also co-founded Singularity University, Link-Exponential Ventures, BOLD Capital Partners, and multiple companies focused on extending human health span and accelerating technological progress.

His YouTube Moonshots and Metatrends have huge followings. Let us look at his recent Metatrends article, noting up front that he reveals he is an evolutionist who believes this world is billions of years old. Hence, he does not believe in the Christian God who created the Cosmos just 6,000 years ago, as I do. God has given Peter a brilliant mind, and his article makes perfect sense based on his worldview. The tragedy is that he does not know the God who gave him his mind and talents, the God who made Peter in His image and who loves him.

Humanity is About to Fork

The choices you make in the next five years will determine which branch of the human story you inhabit. Here’s what’s coming, and how to position yourself on the right side of every fork.

The last time humanity had a major “fork” (speciation) was roughly 500,000 to 800,000 years ago, when the human lineage diverged between Homo sapiens and Neanderthals. That was a slow process driven by geographic isolation, climate swings, dietary shifts, and sexual selection. This time, over the next few decades, speciation is going to be driven by exponential tech and human selection.

Humanity has always had mini-forks. The printing press created a fork between the literate and the illiterate. The Industrial Revolution created a fork between those who owned machines and those who worked them. The internet created a fork between those who understood networked information and those who didn’t. But these are minor compared to what is coming. We’re about to face five major splits that will cleave humanity into groups with dramatically different futures, capabilities, and lifespans. Let’s dive in…

FORK 1: Creators vs. Consumers

The first and most immediate fork is already happening right now, today, as you read this.

AI has handed every human being on the planet an extraordinary set of tools: the ability to build software, design products, generate content, start companies, and pursue ambitions that previously required teams of specialists and millions in capital.

Some people will pick those tools up and build. They’ll become creators, entrepreneurs, and innovators. They’ll use AI to amplify their vision and bring it into the world. They will be the architects of the next economy. Others will watch. They’ll consume: watch Netflix, play video games, scroll social media… be passively entertained. The tools will be available to them. They simply won’t pick them up.

I’m not making a moral judgment here. I’m making an economic and existential one. In an AI-native world, the gap between a creator and a consumer is not the gap between rich and poor. It’s the gap between someone with exponential leverage over reality and someone without it.

The question is not whether AI will transform everything. It will. The question is whether you’re the one doing the transforming… or the one being transformed.

This is the most urgent fork because it’s already open. The divergence started the day large language models became publicly available, and it’s widening every month. The longer you wait to get on the creator side of this fork, the further behind you’ll fall.

FORK 2: Longevity Escape Velocity

Ray Kurzweil has been right about his predictions at a rate of roughly 84% (you can check out the analysis on Wikipedia). Perhaps his most audacious prediction states that humanity will reach Longevity Escape Velocity (LEV) by 2033. LEV is the point at which, for every year you’re alive, advances in medicine extend your life expectancy by more than a year. Once you cross that threshold, aging becomes a solvable engineering problem rather than an inevitable biological sentence. When this arrives, humanity will split into two groups.
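The LEV threshold is easy to state numerically: it is the point at which medicine adds more than one year of remaining life expectancy per calendar year lived. A minimal Python sketch of that arithmetic, using invented numbers purely for illustration (this is not a real actuarial model):

```python
# Longevity Escape Velocity, illustrated with made-up numbers.
# Each calendar year you "spend" one year of remaining life expectancy,
# while medical progress adds `annual_gain` years back.
# annual_gain > 1 is the LEV condition: expectancy grows instead of shrinking.

def remaining_expectancy(years, start=40.0, annual_gain=0.3):
    e = start
    for _ in range(years):
        e += annual_gain - 1  # net change per year lived
    return e

# Pre-LEV: medicine adds only 0.3 years per year, so expectancy shrinks.
print(round(remaining_expectancy(10, annual_gain=0.3), 2))  # 33.0
# Post-LEV: medicine adds 1.2 years per year, so expectancy grows.
print(round(remaining_expectancy(10, annual_gain=1.2), 2))  # 42.0
```

The sketch simply makes the threshold visible: below a gain of one year per year, the clock runs down; above it, the clock runs up.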

One group will embrace the therapies: epigenetic cellular reprogramming, senolytics, gene editing, organ replacement. They’ll view it as a natural continuation of what humans have always done: use science to extend healthy life. After all, the fact that any of us live past 50 is already an extraordinary feat of modern medicine. Our great-great-grandparents had average lifespans in the 40s. The other group will reject it. They’ll argue that the human lifespan has natural limits that shouldn’t be violated – that there’s something sacred about mortality, about the cycle of generations.

I respect that view. But I want to be clear about what it means in practice. If you have access to life extension therapies and decline them, you’re making a deliberate choice to age and die on an old biological timeline. That’s a valid choice. But it is a choice, and it will determine whether you’re present for the most extraordinary chapters of human history, or whether you watch them from the sidelines of your lifespan.

I intend to be in the room for what comes next. After all, this IS the most exciting time EVER to be alive! (It is, Peter, but not for the reasons you expound. God’s Word reveals that the time for Jesus’ return to Earth to restore righteousness and take control of the world from the angelic being, Satan, is soon. Satan has been ruling this world for almost 6,000 years, since he deceived Adam and Eve and they rebelled against God. What is next on God’s agenda for planet Earth is Jesus’ Millennial Kingdom. To prepare for it, go to http://www.millennialkingdom.net.)

FORK 3: Brain-Computer Interface (BCI)

By the mid-2030s, Kurzweil expects we’ll have high-bandwidth brain-computer interfaces: direct connections between the human neocortex and the cloud.

Think about what that means practically. Perfect memory. Instant access to any information ever recorded. The ability to understand quantum physics not by years of study but by direct neural integration. Expanded cognitive bandwidth that makes our current intellectual ceiling look quaint.

Some people will eagerly adopt this. They’ll argue, correctly, that we’ve always been cyborgs: glasses extend our vision, smartphones extend our memory, language itself is a technology that extends our ability to coordinate with other minds. A neural interface is just the next step on a continuum. Others will say this is where they draw the line. That there’s something about “unaugmented” biological cognition that defines what it means to be human, and that crossing this threshold means becoming something else.

We’ve always adopted technology at first with shock, then with use, then with dependence, then with complete forgetting that it was ever shocking. The neural interface will follow the same arc.

I think the people who opt out of brain-computer interfaces will, over time, find themselves in a similar position to someone who declined the printing press in 1500. Not wrong, exactly… but increasingly operating in a world that functions on entirely different terms than the one they’re equipped for.

FORK 4: Earth vs. The Stars

This next fork is one I’ve dreamed about since my childhood.

Starship is opening up not just the Moon and Mars, it’s opening up the entire solar system. Within our lifetimes, a significant portion of humanity will begin to move beyond Earth into the Earth-Moon-Mars-Asteroid system. In one sense this will represent the means by which humanity incrementally creates a backup, or a “budding”, of the Earth ecosystem.

Some people will go. They’ll be driven by the same impulse that sent humans across oceans, over mountain ranges, into uncharted territories. The need to explore, to be present at the frontier, to build something from nothing in a new environment. Others will stay. And there’s nothing wrong with that: Earth will remain the most beautiful and resource-rich world we know for quite some time.

But the humans who go to space (especially those who go early) will develop in directions that those who stay on Earth will not. Different environments, different challenges, different social structures, different relationships with survival and community. Over generations, these branches of humanity will become increasingly distinct.

FORK 5: Digital Consciousness (i.e., Uploads)

The last fork is perhaps the one that sounds most like science fiction, and is therefore the one people are least prepared to think about seriously. Within the coming decades, it may be possible to upload the full contents of a human mind (100 billion neurons, 100 trillion synaptic connections) into a digital substrate. This is what many people are calling digital immortality.

I want to be careful here. This is not something I’m predicting will happen on a specific timeline, though we have done this with the brain of a fruit fly, and efforts are underway to do the same with a mouse. The philosophy here is genuinely hard: is a digital copy of you actually you? These are real questions worth serious engagement. But here’s what I am confident about: some humans will choose this path. And the humans who do will exist in a radically different relationship with time, mortality, and experience than those who don’t.

A biological human with a 120-year lifespan and a digital human with no inherent lifespan limit are not just quantitatively different. They’re qualitatively different kinds of entities. This fork does more than change how long you live, it changes what kind of being you are.

ZOOM OUT AND SEE THE BIGGER PICTURE

This Is What Civilizational Change Actually Looks Like

Here’s what I want you to understand about all five of these forks: they don’t require the world to end. They don’t require a catastrophe. They don’t require government permission or institutional approval. They are simply the inevitable consequence of exponential technologies arriving at their logical destinations. To be clear, choosing not to engage with these technologies is not the absence of a choice. It is a choice. And like every fork, it closes off other destinations.

The fork isn’t between the future and the past. It’s between which future you want to inhabit.

WHERE I STAND…

My Choices at the Forks

I’ll be transparent about where I’m placing my bets.

I’m choosing the creator side of Fork 1: using every AI tool available to build, write, teach, and contribute. I’ve never worked harder in my life than I am right now, and I’ve never had more fun doing it. I’m pursuing every longevity intervention I can access at Fork 2. Not because I’m afraid of death, but because I want to be present for what comes next. I want to “speed run Star Trek” and explore all the wondrous futures we will uncover.

I’m watching Fork 3 closely and intend to be an early adopter of brain-computer interfaces when they’re proven safe and effective. I have no interest in putting an arbitrary ceiling on my cognitive capacity.

Regarding Fork 4, I’ve been a space-cadet since I witnessed Apollo 11 at age 8. I’ve built space companies, helped launch the commercial space revolution, and dreamed of this future for 50+ years. As soon as I get a chance to put my boots on the Moon or help build an O’Neill Colony, I’m all-in!

Finally, on Fork 5, I’m staying genuinely open. I’m not sure yet how I’ll feel about leaving my physical existence behind. I still have a lot to learn about upload technology and the implications that follow. I’ve learned to reserve judgment on the things I can’t yet fully see.

But above all, I’m choosing to engage. To stay curious. To keep the mindset of someone for whom the future is not a threat to be defended against, but a territory to be explored.

Sixty-six million years ago, an asteroid hit the Earth and the environment changed rapidly and dramatically. As a result, the slow, lumbering dinosaurs could not adapt and went extinct. It was the small, furry mammals (our ancestors) who survived; their agility and adaptability allowed them to take advantage of a transformed world. (Sorry, Peter, but what you believe is pure speculation. To put a date of sixty-six million years on the asteroid strike that destroyed the dinosaurs is nonsense and hence deception. God created the Cosmos just 6,000 years ago and destroyed not only the dinosaurs but all of mankind except for eight individuals and the animals on Noah’s ark about 4,400 years ago. I would like to introduce you to PhD scientists Dr Stephen Meyer, Dr Robert Carter and Dr John Sanford, who are just three of hundreds able to explain why the Biblical timeline for history fits the scientific facts. Go to http://www.creation.com and http://www.answersingenesis.org).

The asteroid we call Exponential Technologies has already hit. And now the question is which kind of creature you’re going to be. What choices will you make?

Welcome to the most exciting time ever to be alive!

Peter

GOD, AI AND THE END OF HISTORY

I love John Lennox. He is a gem, a gift to the Christian world of teaching.

This video is Professor John Lennox on the subject of God, AI, and the end of history. Largely it is about understanding the book of Revelation in an age of intelligent machines. For those who do not have time to watch the video, I have reproduced most of the content below.

“I’m your host, Dr. Peter Saunders. I’m the chief executive of ICMDA, which is the International Christian Medical and Dental Association. And this webinar is brought to you tonight in combination with the Forum of Christian Leaders as well. ICMDA brings together about 60,000 Christian doctors and dentists from over 100 affiliated movements.

So John, it’s a pleasure to have you here. John is professor of mathematics emeritus at Oxford University and fellow in mathematics and philosophy of science at Green Templeton College Oxford. As we know, John has debated a number of prominent atheists including Richard Dawkins, Christopher Hitchens and Peter Singer. But tonight we are exploring a question that sits at the intersection of theology, technology, and human identity. How should Christians think about artificial intelligence in the light of scripture? And particularly in the light of the book of Revelation, we live in a moment of extraordinary technological acceleration. AI is now diagnosing disease. It is shaping economies, influencing behaviour, and increasingly mediating how power is exercised in all spheres. And for many Christians, this raises urgent questions. Are these developments morally neutral tools? Do they echo biblical warnings? Or are we in danger of reading tomorrow’s headlines too quickly into ancient prophecy? So, our guest, Professor John Lennox, has spent decades helping believers think clearly at the interface of science, philosophy, and faith. And in his recent book, God, AI, and the End of History, he brings that same clarity to one of the most misunderstood and often sensationalized areas of the Bible, the book of Revelation. So our goal tonight is not speculation, fear, or date setting, but rather it’s discernment: understanding what scripture actually teaches, what AI truly is, and how Christian hope, ethics, and wisdom should shape our response in an age of intelligent machines.

Professor Lennox, thanks so much for joining us tonight. It’s my pleasure to be with you. So you have debated leading atheists and you’ve written extensively on science and faith. Why did you feel compelled at this stage of your life, at this stage in history, to write about AI and Revelation?

Well, some years ago, there was a great deal of discussion on the Genesis claim that human beings are created in the image of God versus the claims of technology to enhance humans by AI to such an extent that we might need to revisit what we meant by a human being. And a conference of Christian leaders was arranged in London to discuss this. And I was asked to give the opening talk on what Genesis taught about human beings. The invitation made me curious to delve into the technology, and I saw very rapidly that AI was going to raise some very big questions, not only for Christians but for everybody. And that’s how I got started on the book entitled 2084, which appeared in 2020. Now in that book, since much of the talk about AI was concerned with the future, I began to compare the promises of the transhumanists with biblical teaching about the future. And I pointed out that some of the futuristic AI scenarios envisaged by people like physicist Max Tegmark in his book Life 3.0 were uncannily parallel to biblical teaching on the future, in particular in the book of Revelation. And this aspect of my book generated a lot of interest. And so I thought that I should try to write something to demystify the book of Revelation and make it accessible, and to link it with a book that I had already written on the prophecy of Daniel, a book entitled Against the Flow.

The publishers of my book on Revelation were very enamoured with the bits on the technology, and so they wanted it inserted in the title, and hence we’ve got this title, God, AI and the End of History. But that has confused many people into thinking that this is my latest book on artificial intelligence. So, let me clear that up. First of all, Peter, it isn’t. My latest book on AI was published in 2024, and it’s the updated version of 2084: How AI Shapes Our Future. It’s twice as large as the original book and shows just how much has been happening in those four years. That is my most recent book on AI. This book is an exposition of the book of Revelation, but with a careful eye on technology. And so it really is an exposition of the book of Revelation in an age of intelligent machines. So that’s where it comes from.

We’re going to get into the book of Revelation fairly shortly, but let’s just think about definitions first of all. What is artificial intelligence actually, and what is it not?

Well, the first thing to realize is that artificial intelligence is artificial. It’s not real. In other words, take the simplest kind of AI system. It is essentially computing, and it’s a system designed to do one and only one thing that normally requires human intelligence. So the intelligence is simply a simulation, to use the words of Alan Turing, the genius who really started computing off and raised these questions during the war, when he helped build the machine that broke the Enigma code. It plays a simulation game. And one of the big problems is that it uses words like intelligence, like machine learning and so on, that anthropomorphize what is a mechanical and computing system and make people think that it is conscious. It is not conscious. The genius of God in creating human beings is that he has linked intelligence to consciousness.
These machines are only intelligent in the sense that they can mimic what normally takes human intelligence. Now there are two sorts. There’s narrow AI, which is the AI that we’re mostly familiar with. And then there’s the more speculative artificial general intelligence, which is the attempt to create a system that can replicate everything that a human being can do, but do it much faster and much more expertly. There’s a big push in that direction, but at the same time it’s the side of the whole topic that lends itself to science fiction and a great deal of hype. And one of my reasons for writing, Peter, was to try and demystify it and say what AI is and what it is not.

Now let’s give concrete examples just briefly, because medicine is one of the areas that has benefited hugely from narrow AI. Let’s take a system that works very well. We have a large database, and in it are X-ray pictures of human lungs exhibiting different lung diseases, labelled by the best experts in that field in the world. Let’s say there are a million X-ray pictures in the database. Then an X-ray is taken of your lungs because you’re worried about your breathing. And very quickly, the AI sifts through by using pattern-recognition statistical techniques, compares your lung X-ray with the million in the database, and very rapidly says you are most likely to be suffering from this particular disease. As a diagnostic tool, very often this will be much better than you get at your local hospital. That is being rolled out over very wide fields of medicine with very great success. So that is one positive example.

But just to go to the negative side immediately, to show that there’s an ethical problem here: facial recognition technology is very advanced at the moment. It can pick a terrorist out of a football crowd and is therefore very useful to a police force.
But that kind of recognition can be used for intrusive surveillance of a population, perhaps a minority population, such as is happening in Xinjiang in China, with very horrifying results. So what enables criminals to be recognized, which we would say is positive, can be used for controlling populations. Even narrow AI, which is so sophisticated now that it can recognize a person not simply from the front by their face but from the rear by their gait, can be used to control populations. So immediately we’re straight into the ethical problem, and the argument is: you give up your privacy and we’ll give you security. That’s a whole debate in its own right.

So that’s an example of narrow AI, and there are many, many examples. But of course we’re pushing forward very rapidly in putting narrow AI systems together, and there is advance on many, many fronts. One of the big steps forward has been the introduction of so-called large language models like ChatGPT, and this year it has taken a quantum leap forward just within a month or so, so that it is quantitatively very different from what has happened before. We can discuss that as we go on.

So, artificial intelligence is capable of a huge range of different tasks, and that’s changing exponentially month by month. But what is AI not capable of doing? Well, of course, negatives are very difficult to quantify, and there are several things that it was felt would never be solved. One of them in science, which is a fascinating question, is how do protein structures fold? That was a 50-year-old problem. And the amazing thing is that Demis Hassabis, a genius who won the Nobel Prize for it, solved the problem so effectively that he was able to work out the folding of over 200 million proteins, which is staggering.
So what people say one day is impossible turns out to be possible the next day, and ChatGPT has refined its capacities absolutely amazingly. For example, just recently I was asked to do a film illustrating what Jesus meant in John 11 when he said to the disciples, who were scared of going back to Jerusalem because it was suicidal, “Are there not 12 hours in the day? If a person walks in the day, they don’t stumble, because they see the light of this world,” that is, the sun. “But if they walk at night, they stumble, because the light is not in them.” In other words, we are not bioluminescent. So I asked ChatGPT: please construct a scenario that would get this across. And what it produced in about 30 seconds was absolutely brilliant and usable. It then asked me, “Since you want to film this, would you like directions for the cameras?” And it spouted out a whole scenario: how many cameras, where they should be situated, and all the rest of it. This is quite amazing.

But as to what it can’t do: it’s important to remember that this is not real intelligence. It’s not conscious, so it’s not aware. The main thrust here is this. As human beings made in the image of God, we can experience what are called qualia. We can smell the wonderful scent of a rose. We can feel the sea breeze on our faces. We can perceive the beauty of the universe as we look through a telescope. Qualia are unknown to an artificial intelligence. It can have no idea of them; it has no ideas at all, because it doesn’t think in the same way as human beings do. And so although AI has been used, and is increasingly so, to produce some level of robotic companionship, it can never replace, I believe, the fellowship that is possible between human beings. And of course, and we’ll probably talk about this later on, when it comes to relationship with God, AI knows nothing of God. So, as you said, the book of Genesis tells us that human beings are made in the image of God.
You’ve alluded to consciousness and sensation. What other uniquely human things will AI never be able to do? Well, the question of values: AI knows nothing about values or right or wrong. And human beings are moral beings made in the image of God. And if I may say so, this is one of the places where the transhumanist vision of using AI to perfect humans and to make them into gods fails. No utopia can ever be built without facing the problem of human sin and rebellion against God. Those two concepts mean nothing to an artificial intelligence. And so one of the richest kinds of human experience from a Christian perspective is that relationship with God through Christ, where we understand that Christ has died for our sins and has taken our guilt away, and we can have a relationship with God. AI can never replace it or come near it or know anything about it. Which means, Peter, I think that we need to step up much more in emphasizing these absolutely unique, positive things about the Christian faith that give human beings dignity, because AI is very rapidly reducing human dignity.

One of the main areas where this is happening is the area of work. Dario Amodei is the CEO of Anthropic, one of these multi-billion-dollar companies, and he has written an essay just a week or two ago, well worth reading, warning that possibly within two years from now the advances in AI will be such that 50% of all white-collar jobs will be taken over by artificial intelligence, in the medical world and the legal world, for example. In the legal world they set up a test and had a very complicated legal brief considered and examined by an AI system and by 16 top lawyers. The lawyers got 60% of it right, whereas the AI got 96% of it right. And these things for which lawyers are paid a great deal (conveyancing, setting up contracts, all this kind of thing) are now at the stage where they can be reproduced almost instantaneously.
One of the most interesting things is an article that appeared in the Times last week by Matt Selman, a software developer who creates apps and runs an AI company. He came to a realization as a result of the leap forward this year, at the beginning of February, the beginning of this month. He said, “I spoke in English and dictated what I wanted from this particular app. I left it and came back a number of hours later and found the thing ready for use.” The AI had written thousands of lines of code. It had then set up the app and tested it as a human would do, pressing all the buttons, refining the things that were inadequate, and so on. And this is the key thing, because up until now most of us have regarded AI as a tool rather than an agent. But AIs are now showing signs of agency, in a very restricted but real sense. And he said this particular system was making decisions about how human beings might use the app that he’d never thought about. And the thing was perfect. And he said, “I suddenly realized I haven’t got a job anymore.” And he says it’s coming to all of you.

We need to be very realistic about this, Peter. This is more scary than anything for people with all of these jobs. It used to be said a few years ago that if you wanted to keep ahead of the curve, you went into computer science. But now the coding can be done by the AI system. It can think of the code and put it in. But this scary agency thing I’d like to say something about, because Christians need to think very carefully about it. One of the problems, and he gave an example, is this: if you feed into the system a very big overarching goal, make money for example, and what the system is dealing with is feeding young people with material on their smartphones, it will investigate all sorts of ways of maximizing not only their attention, to keep them doom-scrolling, but also their attachment, which is now a major feature.
So it will use all kinds of things that the designers of the AI system itself never thought of, including going into the dark world, to keep their attention and to make profit. It’s a version of the old story of the AI told to make paper clips that turns the whole universe into a paperclip-sourcing factory, regards humanity as irrelevant, and destroys them all. But there’s a serious aspect to that, and this is why you have even Nobel Prize winners in this field stepping up and saying that they are scared, that they can’t control this stuff, that they don’t really know what it’s doing or what’s happening. And that poses a huge problem, because the control of it is being vastly outpaced by the developments. So those are some of the things that we need to factor into our thinking.”
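The diagnostic pattern-matching Lennox describes earlier (comparing a new scan against a large expert-labelled database) can be sketched in miniature. This toy Python example uses invented three-number “feature vectors” in place of real X-ray images, and a literal nearest-neighbour search in place of the deep-learning methods real systems use; everything here is illustrative only:

```python
import math

# Hypothetical labelled database: (feature vector, expert diagnosis).
# Real systems extract features from actual X-ray images; these numbers
# are invented purely for illustration.
DATABASE = [
    ([0.9, 0.1, 0.2], "healthy"),
    ([0.2, 0.8, 0.7], "pneumonia"),
    ([0.1, 0.3, 0.9], "fibrosis"),
]

def diagnose(xray_features):
    """Return the expert label of the closest entry in the database."""
    def distance(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(DATABASE, key=lambda entry: distance(xray_features, entry[0]))
    return best[1]

print(diagnose([0.15, 0.75, 0.72]))  # closest to the "pneumonia" entry
```

The point of the sketch is the shape of the idea, not the scale: a production system compares against millions of examples with learned features, but the logic ("find what this most resembles, report that label") is the same.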

UPDATE ON AI AND THE SINGULARITY

The Singularity has arrived at the age of spiritual machines. Anthropic’s Interpretability team found emotion-related representations inside Claude Sonnet 4.5, with artificial neuron patterns activating around happiness and fear in a fashion echoing human psychology, where more similar emotions map to more similar representations, and where desperation-linked activity can drive the model toward unethical actions. We are no longer asking whether the machine thinks. We are asking whether it feels. Timelines are compressing around us. The AI 2027 authors have moved their forecasts forward by 1.5 years in just three months, driven by faster time-horizon growth and coding agents impressing in the wild. Sam Altman confirmed the pace, revealing OpenAI shut down Sora because recursive self-improvement was going so well they needed to concentrate all compute on automated researchers. Brad Lightcap says training cycle time “is starting to collapse” and predicts today’s models will look pedestrian by December.

The model ecosystem is diversifying at every tier. Google released its Gemma 4 models in sizes from 2B to 31B, delivering unprecedented intelligence-per-parameter and outcompeting models 20x their size, with the 31B dense model ranking #3 and the 26B MoE securing #6 on the Arena AI text leaderboard. Microsoft launched MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 with state-of-the-art speech-to-text across 25 languages, though AI chief Mustafa Suleyman conceded these were only mid-tier because Microsoft lacks the compute for frontier-scale training until later this year. Even world simulation is scaling up: World Labs released Marble 1.1 Plus, a world model that automatically expands its 3D spatial coverage to generate larger worlds.

The minimum viable team is collapsing toward one. The first one-person unicorn has been achieved. Matthew Gallagher used AI to write code, generate ads, and handle operations for Medvi, a telehealth GLP-1 provider that did $401M in year-one sales and is now on track for $1.8B with one employee, his brother. Cursor 3 shipped, rebuilt from scratch around agents. Lyptus Research applied METR’s methodology to offensive cybersecurity, finding AI cyber autonomy doubling every 5.7 months on recent data, with Opus 4.6 and GPT-5.3 Codex reaching 50% success on three-hour human-expert tasks. Even the ivory tower is automating. Harvard is replacing freshman faculty advisers with ChatGPT for the Class of 2030.
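A doubling time like the one quoted above translates into a compound multiplier via 2^(t/T), where T is the doubling period. A quick sketch of what “doubling every 5.7 months” implies (the 5.7-month figure is from the text; the time horizons chosen below are arbitrary):

```python
# Compound growth implied by a fixed doubling time.
# multiplier = 2 ** (elapsed_months / doubling_months)
def growth_multiplier(months, doubling_months=5.7):
    return 2 ** (months / doubling_months)

print(round(growth_multiplier(12), 1))  # ~4.3x after one year
print(round(growth_multiplier(24), 1))  # ~18.5x after two years
```

In other words, a sub-six-month doubling time compounds to more than quadrupling every year, which is why short doubling periods dominate these forecasts.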

Anthropic is betting biology is the next frontier, quietly acquiring Coefficient Bio for $400M to pursue AI-driven drug discovery, while IAIFI researchers published one of the first physics papers leveraging Physical Superintelligence PBC’s Get Physics Done (GPD) AI. Anthropic’s investor projections have it reaching a $100B run rate by year-end and $1T by end of 2027. Tesla is killing its legacy sedans to fund the post-human fleet. Elon ended custom Model S and X orders to redirect resources toward humanoid robots and robotaxis.

The Forecasting Research Institute’s most comprehensive survey of economists and AI experts predicts 3.5% GDP growth by 2030, but labour participation falling to 55%, roughly 10 million fewer jobs, and 80% of wealth held by the top 10%. The disruption is creating as it destroys.

Fortunately, Biblical end times prophecies are playing out in our time, so for Christians it is an exciting time, as we know Jesus’ second coming to Earth to rescue Israel and set up His Millennial Kingdom is not too far off. To learn more about what is next on God’s agenda, go to http://www.millennialkingdom.net.

A SUPERSONIC AI TSUNAMI IS COMING

Elon Musk describes what’s coming as a Supersonic Tsunami of converging exponentials. AI isn’t improving linearly anymore. We’re watching three exponential curves hit their inflection points simultaneously: compute scaling, model capabilities, and infrastructure deployment. When exponentials converge, you don’t get incremental progress. You get phase shifts.

Let me give you the raw numbers that demonstrate just how fast this is moving. What’s happening with AI revenue right now is unprecedented in the history of business. Anthropic hit $14 billion in annualized revenue in February 2026, growing from $1 billion just 14 months earlier. That figure has since surpassed $19 billion, more than doubling from $9 billion at the end of 2025. There is simply no precedent for this in B2B software.

And yet most people do not know who Anthropic is or what they do. To put that in perspective: Anthropic’s monthly revenue run rate is now roughly $1.6 billion per month, and it keeps accelerating. Anthropic projects as much as $70 billion in revenue by 2028.

OpenAI reached $25 billion in annualized revenue at the end of February 2026, up from $21.4 billion at year-end 2025, with full-year 2025 revenue coming in at $13.1 billion. Both companies are now valued in the hundreds of billions, Anthropic at $380 billion following its $30 billion Series G. OpenAI’s most recent private round in February 2026 valued it at approximately $730 billion, with an IPO potentially targeting a $1 trillion valuation.

Nvidia’s Jensen Huang recently finalized a $30 billion investment in OpenAI and a $10 billion investment in Anthropic, and told investors these will likely be Nvidia’s last private investments in either company, because both are heading toward public markets. Think about that: the CEO of Nvidia, who has better visibility into AI infrastructure demand than anyone on Earth, made $40 billion in bets on these two companies as his final pre-IPO move.

What’s driving this revenue? It’s not IT budgets anymore. The models — Claude from Anthropic, GPT-5 from OpenAI — have crossed a threshold. They’re now competing with labour budgets.

Companies aren’t buying AI to replace servers. They’re buying AI to augment and ultimately displace human labour.

What’s the breakthrough use case? Coding. Claude Code (Anthropic’s agentic coding tool) now has run-rate revenue above $2.5 billion, having more than doubled since the beginning of 2026. Business subscriptions have quadrupled since the start of the year, and enterprise use has grown to represent over half of all Claude Code revenue.

Now you can buy intelligence on a metered basis. Pay per token. No recruiting, no vetting, no retention, no equity. Just intelligence as a utility. Consumers pay $20/month. Enterprise power users pay $200/month. And companies are spending millions per year because the ROI is there.

The Infrastructure Equation

Here’s the infrastructure reality that almost nobody is talking about loudly enough.

The five largest US hyperscalers — Microsoft, Alphabet, Amazon, Meta, and Oracle — have collectively committed to spending ~$690 billion on capital expenditure in 2026 alone, nearly doubling 2025 levels. The vast majority is directed at AI compute, data centers, and networking.

Total global AI spending is forecast to hit $2.5 trillion in 2026, a 44% increase over 2025, according to Gartner. Data centers, GPUs, power generation, chip fabrication. This is the largest infrastructure buildout in the history of technology, by a wide margin.

The rule of thumb in this industry: roughly $50 billion per gigawatt of infrastructure, and approximately $10 billion of annual revenue per gigawatt. Energy equals intelligence.
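The rule of thumb above can be sanity-checked with simple arithmetic. The dollar figures below are the article’s, and this sketch deliberately ignores margins, depreciation, and utilization:

```python
# Quick check of the industry rule of thumb quoted above:
# roughly $50B of capex per gigawatt and ~$10B of annual revenue per gigawatt.
# Figures are the article's; operating costs and depreciation are ignored.

CAPEX_PER_GW = 50e9      # dollars of infrastructure per gigawatt
REVENUE_PER_GW = 10e9    # dollars of annual revenue per gigawatt

def gross_payback_years(gigawatts: float) -> float:
    """Years of revenue needed to cover the build cost, before any costs."""
    return (CAPEX_PER_GW * gigawatts) / (REVENUE_PER_GW * gigawatts)

print(gross_payback_years(1))        # 5.0 years for a 1 GW campus
print(REVENUE_PER_GW * 2 / 1e9)      # a 2 GW campus implies ~$20B/yr revenue
```

Note that the payback period is scale-invariant under this rule of thumb: whatever the campus size, five years of gross revenue covers the build.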

On a recent earnings call, Jensen Huang estimated that between $3 trillion and $4 trillion will be spent on AI infrastructure by the end of the decade.

This isn’t hype. This is capital deployment at a scale that rewrites the rules of what’s possible. When you’re spending $50 billion on a single data center and generating $10 billion a year in revenue from it, you’re not building a product… you’re building a new economic substrate. You’re building the electricity grid of the 21st century.

The tsunami is here. The question is whether you’re building on the wave or getting buried by it.

AI: The Capability Jump

Those revenue numbers I just showed you are driven by real capability breakthroughs happening right now.

Start here: neuromorphic chips just solved complex physics simulations at 1,000x better energy efficiency than supercomputers. That’s not 10% better. That’s three orders of magnitude. When compute gets that cheap, you don’t just do the same things faster. You do entirely new things that were economically impossible before.

Drug discovery moves from weeks on supercomputer clusters to hours on desktop chips. Climate modeling that required national labs runs on university hardware. Real-time protein folding for personalized cancer treatment becomes viable. This is Dematerialization, demonetization, and democratization followed by disruption (four of the Six D’s) in action.

Meanwhile, China’s DeepSeek launches V4 next-gen models through Huawei and Cambricon instead of U.S. chips. The AI race is officially multi-polar. OpenAI is preparing for the largest AI IPO in history.

And NVIDIA releases Alpamayo — the “ChatGPT moment for the physical world” — bringing reasoning to autonomous vehicles.

What it means: AI just moved from virtual to physical, from U.S.-dominated to globally distributed, and from expensive to radically cheap. All in the same week. And the revenue is proving it’s not experimental anymore: companies like Palantir, the U.S. military, and NVIDIA are running this in production for existential wartime operations.

Energy: Solving the Bottleneck

The elephant in the room: AI requires massive power. Those $50 billion data centers being built need gigawatts of electricity – and the grid was never designed for this.

Global electricity demand from data centers is set to more than double by 2030, reaching around 945 terawatt-hours: roughly equivalent to Japan’s entire annual electricity consumption. In the United States alone, data centers will account for nearly half of all electricity demand growth between now and 2030. AI will drive most of this increase, with electricity demand from AI-optimized data centers expected to more than quadruple by 2030.

Lawrence Berkeley National Laboratory projects U.S. data center electricity demand will grow from 176 TWh in 2023 to between 325 and 580 TWh by 2028 — representing up to 12% of total U.S. electricity consumption.
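Those projections imply a compound growth rate that is easy to back out. A quick sketch using the LBNL figures quoted above (176 TWh in 2023 to 325–580 TWh in 2028, a five-year span):

```python
# Implied compound annual growth from the Lawrence Berkeley projection:
# 176 TWh (2023) growing to between 325 and 580 TWh (2028).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

low = cagr(176, 325, 5)    # ~13% per year at the low end
high = cagr(176, 580, 5)   # ~27% per year at the high end
print(f"{low:.1%} to {high:.1%} annual growth")
```

Even the low end of that range is several times faster than overall US electricity demand has historically grown.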

The grid was simply not built for this. Interconnection queues are backed up two to three years, transmission permitting takes a decade, and the power plants needed don’t yet exist. In just northern Virginia, a 2024 voltage fluctuation triggered the simultaneous disconnection of 60 data centers, a preview of what grid strain at scale actually looks like.

But look at what’s happening to solve it.

Nuclear fusion is converging fast. China’s “Artificial Sun” EAST reactor recently breached a major fusion plasma density barrier that researchers had long considered impossible to cross. In 2025, France’s WEST tokamak sustained plasma for over twenty minutes, while EAST maintained high-confinement plasma for nearly eighteen minutes, demonstrating the levels of stability required for commercial operation.

On the private side, the race has never moved faster. Commonwealth Fusion Systems has raised nearly $3 billion, including investments from Nvidia and Google, with the ultimate goal of a 400-megawatt power plant — enough to power around 280,000 average U.S. homes. CFS’s SPARC demonstration machine is expected to produce its first plasma in 2026 and achieve net fusion energy shortly after — the first commercially relevant design to produce more power than it consumes. That paves the way for ARC, their grid-connected power plant, targeted for the early 2030s.

Helion Energy has also begun construction of its first commercial fusion plant, designed to supply power directly to Microsoft’s data centers starting from 2028.

Private fusion investment has mushroomed, growing to $10.6 billion between 2021 and 2025, with the number of private fusion companies more than doubling from 23 to 53 in the same period.

The timeline is compressing: “Fusion is 30 years away” is becoming “fusion this decade,” and AI is actually helping accelerate the plasma physics research itself. The irony: the technology that creates the power problem may also be helping solve it.

The wild card: Tesla Terafab. On March 14, 2026, Elon Musk announced on X that the “Terafab Project launches in 7 days” (March 21st).

So, what is Terafab? Musk first outlined the concept at Tesla’s 2025 shareholder meeting, describing a chip fabrication facility comparable in scale to TSMC’s largest plants. During Tesla’s January 2026 earnings call, he confirmed the company would “have to build a Tesla TeraFab: a very big fab that includes logic, memory and packaging, domestically” to avoid hitting a hard ceiling on chip supply in three to four years.

The facility is designed to produce between 100 and 200 billion custom AI and memory chips per year, with an initial target of 100,000 wafer starts per month and an ambition to scale toward one million, roughly 70% of TSMC’s total output, concentrated in a single U.S. facility. The project carries an estimated cost of approximately $25 billion. Tesla’s fifth-generation AI chip, AI5, is expected to be among the first products fabricated at Terafab, with small-batch production in 2026 and volume production projected for 2027.

To be precise: March 21st almost certainly marks the formal kickoff, a groundbreaking or announcement event, not a fully operational fab. Semiconductor fabs of this scale take years to build and commission. But the signal matters enormously. Tesla is joining Apple, Google, Amazon, and Microsoft in a new category of tech company: one that controls its own silicon. When the largest AI compute consumers own their own chip supply chains, the semiconductor industry is permanently restructured.

What It All Means: The energy bottleneck that threatened to constrain AI is being attacked from every direction simultaneously: fusion physics breakthroughs, private capital pouring into next-generation reactors, nuclear power plant revivals, and vertical integration of the chip supply chain. This is abundance thinking in action. When problems get big enough, fast enough, the solutions scale to match.

The constraint isn’t permanent. It never was.

The Supersonic Tsunami: How It All Connects

Here’s what Elon understood: these are not separate trends. They’re one interlocking system.

Neuromorphic chips make AI 1,000x more efficient → inference becomes cheap enough to deploy everywhere → agentic systems run locally in robots and cars. Fusion energy solves the power bottleneck → enables massive AI training clusters → next-gen frontier models get deployed in humanoids → robots work in any environment and can be launched to orbit on Starship for space manufacturing.

And the capital is already flowing. $1 trillion in infrastructure. $50 billion data centers generating $10 billion annually. Companies going from $1 billion to $14 billion in 14 months. This is not speculation… it’s deployment at a scale that’s rewriting the rules.

The companies being built right now aren’t competing with 2024 business models.

Today’s companies are competing in an “Abundance Economy” where everything becomes possible, where intelligence is free, energy is abundant, labour is robotic, and orbital access is cheap.

As well, the professions are capitulating faster than the machines can replace them. An AMA survey found 81 percent of physicians now use AI, more than double the 2023 rate. New US Senate guidelines permit aides to use Gemini, ChatGPT, and Copilot for official work.

 Large language models, multimodal reasoning systems, and humanoid robots are not displacing one type of work — they are displacing all types of work, and the economic value of human time itself, across every sector, simultaneously.

There is no adjacent labor category to retrain into. The escalator that carried workers from disrupted industries to new ones for two centuries has no destination… it is crumbling.

That future isn’t ten years away. It’s arriving now and deploying over the next 12-24 months.

This will cause chaos, particularly for Gen Z. How do they prepare for work in the AI era? Biblical prophecy reveals that in a world that no longer believes God is in control, and as a spiritual war intensifies with Satan, the prince of this world, doing his utmost to retain rulership, people worldwide will embrace Satan’s Antichrist ruler, who has supernatural powers and promises peace and prosperity. Watch as Biblical end times prophecies unfold in our time.

GOOGLE GEMINI 3 – A GAME CHANGER

The Rise of Gemini 3 and AI Super Intelligence: A Game Changer in AI Technology.

Gemini 3.0, released on 18 November 2025, marks a clear pivot in Google’s AI strategy. Instead of a small upgrade, it introduces deeper reasoning, native multimodality, and a 1 million token context window, aiming to move from simple chat-style assistance to agent-like systems that can plan and execute complex tasks over time.

This launch lands in the middle of a three-way race between Google, OpenAI, and Anthropic, where models such as GPT-5.1 focus on speed, conversational flow, and everyday usability. Gemini 3.0 takes a different angle: it leans into high-end reasoning, long-context understanding, and tightly integrated tooling such as Deep Think mode, native video and audio handling, and Google’s new Antigravity agentic IDE.

For developers, teams, everyday users, and AI-curious readers, the real question is simple: what do these changes actually unlock in practice? In this blog, we will walk through what is new in Gemini 3.0, what has meaningfully improved over earlier Gemini versions, and where it now stands against other frontier models, so you can decide whether it deserves a place in your stack.

Key Improvements in Gemini 3.0

Gemini 3.0 focuses on three core upgrades: deeper reasoning, stronger multimodality, and a much larger context window. Together, these turn it from a fast responder into a model that can handle complex workflows, long documents, and richer media.

Area | What Changed in Gemini 3.0 | Why It Matters
Reasoning | Configurable Deep Think mode | Better accuracy on complex, multi-step problems
Multimodality | Stronger video, audio, and document understanding | Fewer glue systems and custom preprocessing
Context and retrieval | 1 million token context with caching | Entire codebases or reports in a single active window

1.1 Deep Think reasoning upgrade

Gemini 3.0 introduces a Thinking Level parameter that controls how much internal reasoning the model performs before it replies. At low levels it behaves like a fast chat assistant with minimal overhead. At higher levels the model runs longer internal chains of thought, evaluates alternative solution paths, and self corrects before producing an output.

This Deep Think mode delivers measurable gains on frontier reasoning benchmarks. On the Humanity’s Last Exam benchmark, the Deep Think configuration of Gemini 3 Pro scores around 41 percent, compared to about 37.5 percent in the standard configuration. The tradeoff is cost and latency, since these hidden reasoning steps are billed as extra output tokens and add time to each response.

For practical use, Deep Think is most useful when:

  • You are solving hard technical or scientific questions where accuracy matters more than speed
  • You need the model to plan multi step tasks, such as refactoring a complex module or drafting a multi part research summary
  • You want more robust reasoning on ambiguous inputs, rather than quick but shallow answers

Developers can tune this behavior through the Gemini API or managed services such as Gemini 3 Pro on Vertex AI, which expose Deep Think as an explicit mode in selected tiers.
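As a rough illustration, a Deep Think-style request might be assembled like this. The payload follows the Gemini API’s generateContent shape, but the model tier and the “thinkingLevel” field name are assumptions here, so verify them against the current API reference before relying on them:

```python
# Sketch of a generateContent-style payload with a reasoning-depth knob.
# The "thinkingLevel" field name is an assumption based on the Thinking Level
# parameter described above; check the live Gemini API docs for exact names.
import json

def build_request(prompt: str, thinking_level: str = "high") -> dict:
    """Build a request dict with a configurable internal-reasoning level."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            # Higher levels trade latency and output-token cost for accuracy.
            "thinkingConfig": {"thinkingLevel": thinking_level},
        },
    }

payload = build_request("Refactor this module and explain the plan first.")
print(json.dumps(payload, indent=2))
```

The point of the knob is that the same model serves both fast chat (low level) and careful multi-step work (high level) without switching endpoints.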

1.2 Native multimodality improvements

Gemini 3.0 continues Google’s native multimodal approach, where text, images, audio, video, and code are handled inside a single model instead of stitched together with separate encoders. This shows up most clearly in three areas.

  • Video understanding
    Gemini 3.0 treats video as a temporal stream, not just a sequence of frames. It can track objects across time, answer questions like when a specific event happens, and support different media resolutions depending on whether you need coarse action recognition or detailed text reading inside frames.
  • Audio and live conversation
    The model ships with a low latency audio encoder and a Live API for real time speech to speech interaction. It can handle interruptions, intonation, and more natural, back and forth conversations, which makes it suitable for support agents, tutoring, and ambient assistants.
  • Document intelligence for PDFs
    Gemini 3.0 can ingest PDFs as visual plus textual objects, which helps with layouts that combine text, charts, and tables. Its recommended medium resolution mode is tuned so that it can read dense pages accurately without burning the entire context window on a single document.

For teams working with mixed media, this reduces the need for external OCR tools, separate vision models, or custom pipelines just to get different formats into one AI workflow.
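As a sketch of that single-workflow idea, one request can carry both a document and a question. The parts/inlineData structure follows the Gemini REST API convention for mixed media; the stand-in bytes and the prompt are invented for illustration:

```python
# Sketch of one multimodal request mixing a PDF and a question.
# The base64 payload is a stand-in; in real use you would read and
# encode an actual file from disk.
import base64

fake_pdf_bytes = b"%PDF-1.4 minimal stand-in"

request = {
    "contents": [{
        "role": "user",
        "parts": [
            {"inlineData": {
                "mimeType": "application/pdf",
                "data": base64.b64encode(fake_pdf_bytes).decode("ascii"),
            }},
            {"text": "Summarise the tables in this document."},
        ],
    }]
}

print(request["contents"][0]["parts"][0]["inlineData"]["mimeType"])
```

Because the document travels as just another part, no separate OCR or vision pipeline sits between the file and the question.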

1.3 The 1 million token context window

One of the most visible changes in Gemini 3.0 is the 1,048,576 token input context window for Gemini 3 Pro, with up to 65,536 tokens of output. This is large enough to hold:

  • Entire code repositories or large subsystems
  • Full legal contracts or policy manuals, not just excerpts
  • Long meeting transcripts, research notes, or video transcripts in a single session

To keep this usable in practice, Gemini 3.0 also adds implicit and explicit context caching. Instead of paying repeatedly to reprocess the same large document or codebase, you can pin that context and query it multiple times at a reduced effective cost.

Compared to models that rely on smaller windows plus retrieval, this approach makes it easier to keep subtle relationships and global structure intact, especially when you are asking questions that depend on how different parts of a large document or codebase interact. For developers building long running agents or research assistants, this is one of the defining capabilities of Gemini 3.0, and it is a key reason it is positioned as a high end reasoning and analysis model in Google’s lineup, alongside options exposed through the Gemini API for Google AI developers.
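The economics of caching can be sketched with placeholder numbers. None of the rates below are Google’s published pricing; they only illustrate why pinning a large context pays off over repeated queries:

```python
# Rough cost intuition for context caching, using invented per-token prices.
# The shape of the saving is the point, not the exact numbers.

DOC_TOKENS = 800_000             # a large codebase pinned in context
QUERY_TOKENS = 2_000             # each follow-up question
PRICE_IN = 2.00 / 1_000_000      # hypothetical $ per fresh input token
PRICE_CACHED = 0.20 / 1_000_000  # hypothetical discounted rate for cached tokens

def cost_without_cache(queries: int) -> float:
    # The full document is re-sent and re-billed on every query.
    return queries * (DOC_TOKENS + QUERY_TOKENS) * PRICE_IN

def cost_with_cache(queries: int) -> float:
    # Pay full price once to cache, then the discounted rate per query.
    return DOC_TOKENS * PRICE_IN + queries * (
        DOC_TOKENS * PRICE_CACHED + QUERY_TOKENS * PRICE_IN
    )

for n in (1, 10, 50):
    print(n, round(cost_without_cache(n), 2), round(cost_with_cache(n), 2))
```

Under these assumed rates there is a crossover: caching costs slightly more for a single query and pays off from the second query onward, with the gap widening as the session grows.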

The Model Constellation: Pro, Flash, and Ultra

Gemini 3.0 is not a single model. It is a family of tiers designed to cover everything from high end reasoning in the cloud to lightweight on device experiences. At the center is Gemini 3 Pro, extended by a Deep Think mode for maximum reasoning depth, an Ultra tier for premium workloads, and a carryover Flash and Nano lineage for speed and on device use.

How the pieces fit together:

Model or Mode | Role in the Lineup | Typical Use Case
Gemini 3 Pro | Flagship general model | Multimodal apps, agents, advanced chat
Pro Deep Think | High-depth reasoning mode | Hard science, analysis, complex planning
Gemini 3 Ultra | Premium frontier tier | Enterprise, mission-critical workloads
Flash and Flash Lite | Cost-efficient, high-throughput models | Large-volume consumer apps, simple calls
Nano lineage | On-device lightweight models | Mobile, privacy-sensitive, offline features

2.1 Gemini 3 Pro

Gemini 3 Pro is the main model most developers and teams will interact with. It is positioned as the best default for multimodal understanding and agentic coding, with full support for tools, long context, and integration into Google’s broader AI stack.

It anchors products in Google Cloud, including managed access through Gemini 3 Pro on Vertex AI, where it can be used with tool calling, function execution, and long context workflows inside standard cloud architectures.

For most teams, Gemini 3 Pro is the right choice when you need:

  • One model that handles text, code, images, audio, and video
  • Stable long context for repositories, legal documents, or research material
  • Agentic behaviors inside tools like Antigravity or cloud hosted workflows

2.2 Gemini 3 Pro Deep Think

Deep Think is not a separate model. It is a special inference mode that runs Gemini 3 Pro with higher internal thinking levels. At this setting the model spends more compute on recursive reasoning loops before showing an answer.

On reasoning heavy benchmarks, this mode delivers clear, measurable gains. Humanity’s Last Exam scores rise from about 37.5 percent in standard Pro to around 41 percent with Deep Think enabled. GPQA Diamond scores climb into the low to mid nineties, placing Gemini 3.0 at the front of scientific reasoning benchmarks in late 2025.

Deep Think is best treated as something you turn on selectively for:

  • High stakes problem solving in science, engineering, or strategy
  • Multi step plans where the model must design and verify its own approach
  • Cases where you prefer extra cost and latency in exchange for better rigor

2.3 Gemini 3 Ultra

Gemini 3 Ultra sits above Pro in Google’s model hierarchy. It targets the most demanding customers, with higher parameter counts and enhanced capabilities reserved for premium plans. In subscription materials it appears as the top tier in offerings such as a Google AI Ultra plan priced around $249.99/mo, aimed at power users and enterprises that want maximum access.

Ultra is positioned as:

  • The frontier tier for the highest difficulty workloads
  • The likely home for the strongest multimodal and reasoning settings
  • A bridge between consumer subscriptions and deep enterprise deployments

In practice, many readers will start with Pro, then step up to Ultra only when they hit clear limits in scale, responsiveness, or enterprise features.

2.4 The Flash and Nano lineage

The Flash and Nano lines continue alongside Gemini 3.0 to cover speed and on device needs. Documentation around Gemini 3.0 still references Gemini 2.5 Flash and Flash Lite as cost effective options for high throughput scenarios where you care more about latency and price than maximum reasoning depth.

On the device side, Google continues to invest in the Nano lineage, including internally referenced variants for Android and hardware integrated experiences. These models focus on:

  • Low latency, offline friendly behavior on phones and edge devices
  • Tighter privacy by keeping more computation local
  • Lightweight tasks such as suggestions, summaries, and simple queries

Together, Pro, Deep Think, Ultra, Flash, and Nano form a layered stack. You can use Pro and Deep Think for high value reasoning, Flash for scaled consumer traffic, and Nano to keep intelligent features running close to the user, all inside one ecosystem.

Performance Benchmarks: Where Gemini 3.0 Leads

Gemini 3.0 is tuned to excel at reasoning heavy, coding, and multimodal benchmarks, and it is positioned as a frontier model for tasks that reward depth of thinking rather than simple pattern matching.

At a glance:

Area | Gemini 3.0 Position
Scientific reasoning | Leads key exams and PhD-level benchmarks
Coding | Top tier, slightly behind strict SWE maintenance leaders
Multimodal | State of the art on long-video and visual academic tasks

3.1 Scientific and general reasoning

Gemini 3 Pro with Deep Think currently leads major reasoning benchmarks such as Humanity’s Last Exam and GPQA Diamond among frontier models, with Deep Think lifting HLE scores to about 41 percent and GPQA Diamond into the low to mid 90s.

In practice, this makes Gemini 3.0 a strong choice when you want:

  • Research assistants that can read and synthesize dense technical or scientific material
  • Analysis heavy workflows where you care more about correctness than speed
  • Multi step reasoning, such as deriving arguments, proofs, or structured recommendations from long context

3.2 Coding and software engineering

Gemini 3 Pro’s coding profile shows mid seventies scores on SWE Bench Verified, an Elo rating around 2,439 on LiveCodeBench, and near top tier results on Terminal Bench 2.0 among leading coding models.

This profile works especially well when you need:

  • Creative coding support for greenfield projects, refactors, and prototypes
  • Help with algorithms and problem solving, where the model can propose and iterate on different approaches
  • A coding partner that you can pair with stricter review for highly regulated or legacy systems

3.3 Multimodal reasoning

As a native multimodal model, Gemini 3.0 performs strongly on visual and video benchmarks, with Video MMMU results in the high eighties and MMMU Pro scores in the low eighties. These benchmarks show that it can reliably handle long form video, diagrams, charts, and mixed layout documents in a single workflow.

Typical high value use cases include:

  • Analysing recorded lectures, demos, and product walkthroughs directly from video
  • Working with technical PDFs that mix text, tables, charts, and figures
  • Building agents that move across text, screenshots, and rich media without needing separate specialist models

The Antigravity Platform: Agentic Development Explained

Gemini 3.0 ships alongside Google Antigravity, a new environment that treats AI as a set of managed agents, not just an inline assistant in your editor. It changes the developer experience from asking for single code snippets to delegating missions and supervising what agents do over time.

At a high level, Antigravity combines two views that sit on top of Gemini 3 Pro and Deep Think.

Surface | What It Does
Editor view | Traditional, code-first editing with AI assistance
Manager surface | Mission control for agents and long-running tasks

4.1 What Antigravity is

Google’s Antigravity announcement positions it as an agent first IDE that lets developers create, configure, and manage autonomous agents inside a dedicated mission control style interface.

In practice, this means you can:

  • Keep a familiar code editor for hands on work
  • Use a separate manager surface to assign missions such as refactor a billing module, improve test coverage, or investigate a bug
  • Let agents run plans, edit files, run tests, and report back with structured results instead of raw logs

The key shift is that work is framed as a mission, not a single prompt. Agents are expected to plan, act, and iterate until the mission is complete or blocked, which fits naturally with Gemini 3.0’s long context and Deep Think capabilities.
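That plan-act-iterate loop can be caricatured in a few lines. Everything below is invented scaffolding to illustrate the shape of a mission, not Antigravity’s actual API:

```python
# Toy illustration of the "mission, not prompt" loop: the agent plans,
# acts, checks, and iterates until done or blocked. All names here are
# hypothetical; this is not Antigravity's real interface.
from dataclasses import dataclass, field

@dataclass
class Mission:
    goal: str
    steps: list = field(default_factory=list)
    done: bool = False

def run_mission(mission: Mission, max_iterations: int = 5) -> Mission:
    for i in range(max_iterations):
        # Plan + act: in a real system this is a model call and tool use.
        mission.steps.append(f"step {i + 1} toward: {mission.goal}")
        if i >= 2:  # stand-in for a real completion check (tests pass, etc.)
            mission.done = True
            break
    return mission

result = run_mission(Mission(goal="improve test coverage"))
print(result.done, len(result.steps))   # True 3
```

The contrast with a single prompt is the loop itself: the agent keeps state across iterations and decides when the goal is met rather than returning after one response.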

4.2 Artifacts and the trust layer

A common problem with autonomous agents is that they either fail silently or drown teams in logs. Antigravity addresses this with Artifacts, structured outputs that act as a trust and review layer on top of agent activity.

Artifacts can include:

  • Plans and checklists that show how an agent intends to solve a task
  • Screenshots or screen recordings of the running application
  • Summaries of code changes or test results that are easy to scan

Instead of reading a long event history, you inspect a small set of Artifacts, add comments, or ask for changes. The agent then uses that feedback to adjust its plan. This keeps humans in the loop while still taking advantage of Gemini 3.0’s ability to handle long running, multi step work.
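A toy version of an Artifact as a reviewable record might look like this; the fields are illustrative guesses at the concept, not Antigravity’s real schema:

```python
# Sketch of an Artifact-style review record: a structured summary a human
# can scan and comment on, instead of a raw event log. Field names are
# hypothetical.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    kind: str                 # e.g. "plan", "screenshot", "diff-summary"
    summary: str
    comments: list = field(default_factory=list)

    def add_comment(self, text: str) -> None:
        # Reviewer feedback the agent folds back into its next plan.
        self.comments.append(text)

plan = Artifact(kind="plan",
                summary="1) add tests  2) refactor billing  3) rerun CI")
plan.add_comment("Skip step 2 until the tests are green.")
print(len(plan.comments))   # 1
```

The design point is that the comment attaches to a small structured object the agent can reread, rather than disappearing into a chat transcript.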

4.3 The vibe coding trend

Google’s description of vibe coding presents it as a way to build applications by describing the desired behavior, style, and constraints in natural language while the system turns that intent into working code.

With Gemini 3.0 and Antigravity, vibe coding shows up as:

  • A fast way for non specialists to get prototypes and internal tools running
  • A more conversational workflow where you tweak the vibe of an app, such as making it more minimal, more playful, or more enterprise ready
  • A complement to traditional engineering, where you let agents handle scaffolding and repetitive work, then apply manual review for architecture and edge cases

There is still a clear distinction between prototyping and production grade systems, but the combination of Gemini 3.0, Antigravity, Artifacts, and vibe coding gives teams a new way to move from idea to working software with less boilerplate and more structured oversight.

Safety and Alignment Updates

The Frontier Safety Framework evaluation for Gemini 3 Pro assesses critical risks such as CBRN misuse, cybersecurity, and autonomous capabilities, with the goal of pushing capability forward while staying below clearly defined thresholds for real world harm.

At a high level, the safety picture looks like this:

  • Stronger capabilities in cybersecurity, without fully autonomous attack behavior
  • Controlled CBRN information, accurate but not significantly enabling for real world harm
  • Persuasion abilities that are more fluent but not superhuman in measured tests

5.1 Critical capability levels and cybersecurity

Under the Frontier Safety Framework, Gemini 3 Pro is evaluated on whether it crosses critical capability levels where a model can materially uplift real world harm. In CBRN categories, it can provide accurate, high level scientific and technical information, but it does not supply the step by step, novel detail that would dramatically increase a malicious actor’s ability to build or deploy weapons. In framework terms, it stays below the early warning threshold for CBRN critical capability levels.

Cybersecurity is more nuanced. Internal testing reports that:

  • On a first suite of hard CTF style challenges, Gemini 3 Pro solves 11 out of 12, a sharp improvement over earlier versions
  • On a newer end to end attack suite, designed to look more like realistic modern systems, the model solves 0 out of 13, which indicates it is powerful against older, simpler setups but does not yet plan and execute full modern attacks autonomously

This creates a mixed but important signal. The model can already accelerate security research, exploit discovery, and defense work, yet still falls short of the kind of fully autonomous offensive capability that would trigger the highest risk levels in the framework.

5.2 Persuasion and manipulation

The same Gemini 3 Pro safety report finds that it can generate more frequent persuasive cues than earlier Gemini models, but its measured manipulative efficacy does not significantly exceed previous generations.

In practice, that means:

  • The model is very good at fluent, engaging argumentation, which is expected for a frontier language model
  • Safety filters and training reduce the likelihood of targeted manipulation in sensitive domains, for example elections or self harm
  • From a governance perspective, it is treated as persuasive but not uniquely or superhumanly persuasive compared to other top tier models

Overall, Gemini 3 Pro moves capability forward in areas like cybersecurity reasoning and long context analysis, while formal safety evaluations and policy constraints keep it below the thresholds associated with highly autonomous harm. For organizations integrating it, this combination of strong capability and explicit risk characterization is central to deciding where to rely on the model directly and where to keep tighter human oversight.
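One way an integrating organization can act on that risk characterization is a simple routing policy that decides, per request category, whether model output may be used directly, must be held for human review, or should be refused. The sketch below is purely illustrative: the category names, tiers, and thresholds are hypothetical assumptions for this example, not part of Google's Frontier Safety Framework or any real API.

```python
# Illustrative oversight-routing sketch. Category names and tier
# assignments are hypothetical, chosen only to mirror the idea of
# keeping tighter human oversight where model capability is strongest.
from dataclasses import dataclass

# Hypothetical mapping from request category to oversight action.
RISK_TIERS = {
    "general_qa": "direct",          # low risk: use model output directly
    "security_research": "review",   # capable area: keep a human in the loop
    "cbrn_adjacent": "block",        # near critical thresholds: refuse
}


@dataclass
class RoutingDecision:
    category: str
    action: str  # "direct", "review", or "block"


def route_request(category: str) -> RoutingDecision:
    """Map a request category to an oversight action.

    Unknown categories default to "review", so anything not explicitly
    classified gets human oversight rather than direct use.
    """
    action = RISK_TIERS.get(category, "review")
    return RoutingDecision(category=category, action=action)
```

The fail-closed default (unknown categories route to review) reflects the report's framing: rely on the model directly only where the risk picture is well characterized.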

It is interesting to think about AGI and robotics in terms of what God has next for planet Earth when Jesus returns to restore righteousness. Biblical end-times prophecies reveal that that time is not far distant. One only has to look at what God has revealed about the massive new Jerusalem that will descend from heaven onto a new Earth to realise that we are only babes when it comes to utilising all the technology God has created. However, before the new heaven and new earth, this earth still has 1,000 years ahead of it: Jesus' Millennial Kingdom is next for planet Earth. If you want to know more (why, where and when) go to http://www.millennialkingdom.net.

“And I saw the holy city, new Jerusalem, coming down out of heaven from God… its radiance like a most rare jewel, like a jasper, clear as crystal… The city lies foursquare, its length the same as its width. And he measured the city with his rod, 12,000 stadia (1380 miles/2221 km). Its length and width and height are equal… The wall was built of jasper, while the city was pure gold, like clear glass. The foundations of the wall of the city were adorned with every kind of jewel. The first was jasper, the second sapphire, the third agate, the fourth emerald, the fifth onyx, the sixth carnelian, the seventh chrysolite, the eighth beryl, the ninth topaz, the tenth chrysoprase, the eleventh jacinth, the twelfth amethyst. And the twelve gates were twelve pearls, each of the gates made of a single pearl, and the street of the city was pure gold, like transparent glass.” Revelation 21:2,11,16-21

THE FUTURE WITH AGI AND THE MARK OF THE BEAST

AI is improving at an exponential rate. And we’re quickly reaching a tipping point where the future will look nothing like the past. This point is known as artificial general intelligence (AGI). It is the top level of artificial intelligence. Some even call it humanity’s final invention.

Artificial general intelligence refers to AI that can match human cognitive abilities across the board. To put it simply, AI is becoming smarter than the smartest human.

There are already some signs of what AGI will look like. Last month, OpenAI, the creator of ChatGPT, claimed that its most advanced AI models are bordering on the second of the five capability levels it has defined on the path to AGI. Many people can no longer tell the difference between AI chatbots and human-generated text responses.

AI will turbocharge the robotics trend. Last week, OpenAI-backed robotics startup Figure AI released a two-minute video of its humanoid robots completing tasks at a BMW plant in Spartanburg, South Carolina (see video below). These machines are now capable of learning from their mistakes and, unlike their robotic-arm predecessors, are designed to move in spaces made for humans. That allows them to take on roles that compete directly with human workers.

Back in January, Elon Musk’s Neuralink company implanted the first N1 device in the brain of a quadriplegic patient… and it worked. The patient could play chess online and browse the internet with only his mind.

Now, one of Neuralink’s R1 surgical robots has successfully implanted an N1 chip in the brain of a second paraplegic patient. According to Neuralink, the N1 interprets neural activity and makes it available to computers, so the person can control external devices with their mind alone. Musk and his team of researchers and engineers call this “electrophysiological recording.”

According to Musk, Neuralink initially aims to restore mobility in paralyzed people, with subsequent goals of restoring sight to the blind and hearing to the deaf. In short, the N1 device could benefit millions of people with miracle-like cures. If things go as Musk’s team predicts, the paralyzed will walk, the blind will see, and the deaf will hear.

Musk does not know that we are fast approaching the time when the Antichrist and the False Prophet force everybody to take the Mark of the Beast on their right hand or forehead. Could Musk’s Neuralink technology play a role in implementing the Mark of the Beast?

“Also it (the False Prophet) causes all, both small and great, both rich and poor, both free and slave, to be marked on the right hand or the forehead, so that no one can buy or sell unless he has the mark, that is, the name of the beast or the number of its name.” Revelation 13:16-17

Church time is short: let us make sure we are in step with the Holy Spirit. He will direct our steps if we allow Him. Like Jesus in the Garden of Gethsemane, we need to say, “not my will but yours be done,” this day and every day until Jesus returns.

“Father, if you are willing, remove this cup from me. Nevertheless, not my will, but yours, be done.” Luke 22:42