
In the three years since ChatGPT launched, a combination of hype and fear has made it hard to think clearly about our new age of artificial intelligence (AI). But AI has the potential to change the world—from energy to geopolitics to the global economy to the very production and application of human knowledge. If ever we needed clear-eyed analysis, it’s now.
At the Atlantic Council, our experts in the Technology Programs spend a lot of their time thinking about how AI will shape our future—and they have the technical literacy essential to the task. So, as part of our annual Global Foresight report on the decade to come, we asked them our most pressing questions: How will AI evolve over the next ten years and beyond? How can we use AI to forecast global affairs? And—let’s be real—will this thing replace us?
Then our experts put AI chatbots through their paces, presenting them with questions from our Global Foresight survey of (human) geostrategists and foresight practitioners about what the world will look like by 2036. Check out the results of this experiment and our experts’ broader insights in the short videos below, along with edited and condensed highlights from our conversations.
How good is AI at predicting the future?
I would not trust today’s AI systems to reliably forecast global affairs. I think that comes down to the fact that, so often, global events don’t follow predictable patterns. That’s because so much of global geopolitics is driven by human decisions.
— Tess deBlanc-Knowles, senior director of Technology Programs
When you’re asking [AI] to predict the future, you’re asking it a big, unbounded question. What large language models (LLMs), the technology on which the current generation of generative artificial intelligence is built, are good at is next-word, or next-token, prediction.
— Trey Herr, senior director of the Atlantic Council’s Cyber Statecraft Initiative
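To make that next-token point concrete, here is a minimal sketch of next-word prediction using simple bigram counts over a toy corpus. It is purely illustrative: real LLMs learn these probabilities with neural networks trained on vast corpora, and every string below is invented.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny
# corpus, then return the most likely continuation. The core task --
# scoring candidate next tokens -- is what LLMs do at vastly larger scale.
corpus = "the model predicts the next word and the next word only".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))    # 'next' -- seen twice, vs. 'model' once
print(predict_next("peace"))  # '<unknown>' -- never seen in training
```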
Right now, I think any policymaker would be very poorly served by, say, pulling up an LLM and asking, “What’s going to happen next?” That’s not really the strength of these modern systems.
— Emerson Brooking, director of strategy and resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab
It is not a crystal ball. In technical terms, AI is probabilistic. It is not predictive or deterministic. A fundamental barrier for artificial intelligence is that it cannot experience the real world. Some of us may be familiar with Plato’s allegory of the cave. AI is kind of like those cave dwellers experiencing the world as shadows and echoes. They’re not living real experiences, and so are limited in that sense. We can, however, envision a world where AI models and human forecasters work together to make better predictions.
— Trisha Ray, associate director and resident fellow at the Atlantic Council’s GeoTech Center
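The “probabilistic, not deterministic” point can be shown in a few lines. In the hypothetical sketch below, a “model” assigns probabilities to candidate outcomes and samples from them, so the same question can yield different answers on different runs; the outcomes and probabilities are invented for illustration.

```python
import random

# A model that samples from a probability distribution is not a crystal
# ball: rerunning the "same" prediction can produce different outputs.
next_token_probs = {"peace": 0.5, "stalemate": 0.3, "escalation": 0.2}

def sample_once() -> str:
    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Ten runs of the same query will rarely agree ten times.
print([sample_once() for _ in range(10)])
```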
If you asked an AI system to predict the outcome of the Super Bowl, you could equip the model with data from past seasons, the teams, the performance of the players, and the trajectory of those teams over the course of the season. Feed it all of the accurate data of today’s teams and players, and it might come out with some kind of approximation of the top contenders to win the Super Bowl. But the system is not going to be able to predict that rogue tackle that creates a season-ending injury for a star player, or the interpersonal dynamics among the team that can either supercharge their pathway to the championship or totally derail it.
— Tess deBlanc-Knowles, senior director of Technology Programs
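A toy version of that Super Bowl predictor makes the limitation visible: the model can only score what is in its data, so a shock that has not yet entered the data simply does not exist for it. All teams and numbers below are invented.

```python
# Invented historical features for three fictional contenders.
past_win_rates = {"Team A": 0.78, "Team B": 0.71, "Team C": 0.55}
star_player_healthy = {"Team A": True, "Team B": True, "Team C": True}

def contender_score(team: str) -> float:
    # The model can react to injuries already recorded in its data,
    # but it cannot foresee the rogue tackle that has not happened yet.
    penalty = 0.0 if star_player_healthy[team] else 0.25
    return past_win_rates[team] - penalty

ranked = sorted(past_win_rates, key=contender_score, reverse=True)
print(ranked)  # ['Team A', 'Team B', 'Team C']
```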
AI’s limitation is that it cannot produce new information. It can’t expand the universe of knowledge it has been trained on. What it can do is surface novel insights and trends that might have taken humans a long time to find manually.
— Graham Brookie, vice president for Technology Programs and Strategy
Today’s AI systems are well-suited to predictive tasks where there are stable patterns and a good amount of historical data to train on. This bears out in near-term weather prediction, traffic patterns, and predicting maintenance needs for an airplane or some other complex machine.
— Tess deBlanc-Knowles, senior director of Technology Programs
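For stable patterns, even a trivially simple forecaster works tolerably well, which is the point. The sketch below predicts a slowly drifting sensor reading with a moving average; the readings are invented, and real predictive-maintenance systems use far richer models built on the same premise of stable historical data.

```python
# Forecast the next reading of a slowly drifting sensor with a
# three-point moving average -- viable only because the pattern is stable.
readings = [20.1, 20.3, 20.2, 20.5, 20.4, 20.6, 20.7]
WINDOW = 3

forecast = sum(readings[-WINDOW:]) / WINDOW
print(f"Forecast for the next reading: {forecast:.2f}")  # ~20.57
```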
How will AI evolve over the next decade?
The growth of AI capability over the past few years has essentially been predictable: It continues to increase exponentially as we devote exponentially more processing power and energy to its needs. But that can’t go on indefinitely. I think soon there will be something that feels like a ceiling.
— Emerson Brooking, director of strategy and resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab
The bubble that is this market is going to pop, and we’re going to see some of these firms fail. You’re going to see others rise up and succeed. Now, this could have some really harmful financial consequences for real people, as well as for markets in the US, in Western Europe, and elsewhere. But a likely side effect is that a lot of infrastructure, a lot of computing resources, and a lot of talent suddenly become available, looking for work and for ways to be useful. And that kind of thing can be a really powerful driver of innovation.
— Trey Herr, senior director of the Atlantic Council’s Cyber Statecraft Initiative
Another very significant risk to the progress of AI is trust. I think this is particularly salient in the United States, where recent polls have shown that 60 percent of American adults don’t trust the output of an AI system to be fair and unbiased. There’s a scenario where that baseline level of distrust is followed by, say, a series of accidents blamed on AI, or a run of damaging news about it; consumers then lose confidence in the technology, businesses [assume] a higher level of risk in adopting it, and investment and markets cool.
— Tess deBlanc-Knowles, senior director of Technology Programs
You could imagine, in ten years, an absolutely fantastical, extremely powerful tool assisting you in every aspect of daily life and essentially knowing what you want at all times. But my greater concern, if that is the future, is who will have access to this tool. For AI [tools] to be this capable, they will be immensely energy intensive and extremely expensive. Right now there’s been a real focus on making AI accessible to as many people as possible; I wonder how much longer that will last, and whether we might create these exquisite systems but have them accessible to only a very few people.
— Emerson Brooking, director of strategy and resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab
We’re seeing a lot of attention today on building what are called “world models.” Instead of predicting the next word, these models are predicting the next action in the world. If we’re able to move in that direction, then we’re really going to see the true impact of AI across society by breaking AI out of this computer interface into robotics that can take on more tasks.
— Tess deBlanc-Knowles, senior director of Technology Programs
What is possible is that in the future, we’re going to see a larger application of small language models that are built for specific purposes and have that contextual knowledge, or are hooked up to a very relevant database, so that when you log in, you’re logging into a geopolitical chatbot as opposed to a general-purpose tool. [There is] a much higher likelihood that it’ll be able to give you good answers. We’re a ways off from that, though.
— Trey Herr, senior director of the Atlantic Council’s Cyber Statecraft Initiative
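One common way to hook a model up to “a very relevant database” is retrieval: find the stored passages most similar to the question and hand them to the model as context. Below is a minimal sketch using toy word-overlap similarity; the database entries are invented, and a real system would use learned embeddings and an actual language model.

```python
# A toy "geopolitical database" and retriever. The best-matching passage
# is stitched into a prompt that a domain-tuned model would then answer.
database = [
    "Country X held elections in 2024 amid economic turmoil.",
    "Semiconductor export controls tightened in 2025.",
    "A new undersea cable links two continents.",
]

def words(text: str) -> set[str]:
    return {w.strip(".,?") for w in text.lower().split()}

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank passages by how many words they share with the question.
    return sorted(database,
                  key=lambda doc: len(words(question) & words(doc)),
                  reverse=True)[:k]

question = "What happened with semiconductor export controls?"
context = retrieve(question)[0]
print(f"Context: {context}\nQuestion: {question}")  # prompt for the model
```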
Any predictions for how AI will change over the next year specifically?
One of the trends I would look out for in 2026 is countries going all in on sovereign AI. The principle driving the trend is quite simple: It’s governments saying, we need to control AI before it controls us. Now, what is sovereign AI? It is a model of AI development driven by four characteristics: first, adherence to national laws; second, national security; third, economic competitiveness, with a desire for the development and deployment of these models to benefit the home economy; and fourth, and most interesting, value alignment, the belief that these models have to adhere to a certain set of ideological and constitutional rules. But here’s a not-so-well-kept secret: It is not possible for a country to build the entire AI stack indigenously.
— Trisha Ray, associate director and resident fellow at the Atlantic Council’s GeoTech Center
There are two trends we should look out for. The first [is] indicators of the continuing sophistication of these tools. In particular, I would focus on the context window—the amount of information that [these tools] have direct access to at any one time. When ChatGPT launched, the context window was about 4,000 tokens, not very much. A year later, it was 100,000. Today some of the most popular consumer-grade models have context windows of up to 2 million tokens. That is still a drop in the bucket next to the information these machines will need to actively hold in order to be truly revolutionary and effective. If we start to see, perhaps through some clever engineering, exponential growth in that context window, then we might actually be on a path to something we would describe as artificial general intelligence.
The other [trend involves] the financing, political conditions, and even the actual energy costs associated with these systems. As these elements begin to shift, if some of the AI companies that have so far been able to raise round after round of financing at ever-higher valuations start to hit some sort of ceiling, that will send shocks through the whole system, which may affect AI development in a very different way.
— Emerson Brooking, director of strategy and resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab
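Context windows are measured in tokens, the units a model actually reads, rather than characters or words. One quick way to see the difference is the tiktoken tokenizer library, assuming it is installed; the encoding name below is one commonly used with OpenAI-style models.

```python
# Count tokens vs. characters for a short sentence.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "How will artificial intelligence evolve over the next decade?"
tokens = enc.encode(text)

print(len(text), "characters ->", len(tokens), "tokens")
# Even a 2-million-token window holds only a few thousand pages of text:
# large, but far from everything a truly general system would need.
```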
How revolutionary is AI?
AI is changing the way that we interact with things we touch every single day. In ten years, I think you will see more AI in commercial landscapes, in security landscapes, and even in warfare. I think it’s highly likely that we achieve artificial general intelligence. That doesn’t necessarily mean that killer robots are going to govern all of us.
— Graham Brookie, vice president for Technology Programs and Strategy
What we’re seeing is one of the most significant changes in digital technology in at least the last fifty years, probably since the creation of the personal computer. Before the PC, you had to go to an institution and ask for time on a computer. Then suddenly, with personal computers, you had them in your office, in your living room. You didn’t need an institution. It inverted the relationship, and with it a huge power dynamic. AI has done the same thing. It has put the ability to do complex research and produce knowledge into every single person’s hands.
— Trey Herr, senior director of the Atlantic Council’s Cyber Statecraft Initiative
Artificial intelligence, the way we use it now, is not transformational yet. I would say AI is more a continuation of the digital revolution. It’s exciting for sure, but not society-shaking as of yet. If we think about the industrial revolution, it changed the way we live, changed the way we work, and even changed our politics. It shifted the nexus of economic growth from farms to cities. And if we just reflect on the role that AI plays in our economy right now, it is not at the stage to be called an AI revolution. Yet.
— Trisha Ray, associate director and resident fellow at the ’s GeoTech Center
Will AI replace humans?
Humans can think. Generative artificial intelligence models can’t think. That’s a really, really crucial distinction. It’s easy to anthropomorphize something that will chat with you.
— Trey Herr, senior director of the Atlantic Council’s Cyber Statecraft Initiative
Humans understand context. They understand cause and effect. Humans also have the creativity to think through scenarios that might not be present in prior events, whereas an AI system is not going to be able to creatively conceive of an event it has never seen.
— Tess deBlanc-Knowles, senior director of Technology Programs
As more and more time passes and the use of these tools becomes normalized, it may be that AI is never all that good at predicting the future, but that it feels good enough at doing it—and predicting the future is so hard anyway—that we turn that task over to AI; that human beings, with our extraordinarily capable and irreplaceable brains, give up on some of that higher-order thinking and try to let these machines do more and more of the job. And no matter how capable AI becomes, I see that as a tragedy.
We could reach a really strange point where people are using these tools and basically relying on them to tell them what to do and how to live their lives, having essentially outsourced a lot of higher-order and critical thinking to these tools, and having forgotten, or never known, that no matter how omniscient these tools seem, they are themselves creations of limited, human-made data sets and human-designed processes from a particular moment in time.
We could find ourselves trapped in some kind of recursive loop where the future and the horizon of possibilities keeps getting narrower and narrower, because that is what the machine is telling us is possible—that machine that was only trained on what humans knew to be possible.
— Emerson Brooking, director of strategy and resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab
