We are all learning pretty quickly that the next big thing in artificial intelligence is less about one model supplanting another and more about innovations that build on their predecessors along a capability spectrum. While the business world prepares this year to implement agentic AI, artificial general intelligence (AGI) looms in the background as the ultimate destination. Touted as the holy grail of intelligent machines, AGI would possess human-level or greater intelligence across virtually any cognitive discipline. Speculative timeframes for achieving AGI range from two to 20 years, so is preparing now even possible?
Understanding the progression of AI along a continuum helps answer this question. Put simply, gen AI specializes in knowing things (parsing data and recognizing patterns), AI agents will do things (autonomously learn and make decisions), and AGI will understand things, at least in a functional sense. That means not just learning and reasoning, but making connections across domains to find novel solutions to old, intractable problems that have resisted human analysis.
It’s a huge leap with implications that, by definition, cannot be accurately predicted. For businesses, it might look like advanced scenario planning that identifies market disruptions and inflection points early enough to change course. But rather than viewing AGI as something that will arrive at a fixed point in the future, we can start mastering the building blocks of AGI that have already come on stream. By recognizing the growth of AI as a continuum, we will be better equipped to adapt however this evolution unfolds.
What Exactly Is AGI?
There is no settled definition of AGI, nor a consensus on how we will know it has been achieved. As New Science has explained, the big players all have different standards: Google DeepMind will know AGI is live when it outperforms all humans on a set of cognitive tests; Huawei, when AI can independently interact with its environment; while Microsoft and OpenAI are targeting a model that will deliver $100 billion in profit.
New York Times tech columnist Kevin Roose admits he was once a skeptic of AGI, reckoning it too pie-in-the-sky to be realized any time soon. We have already hit the point, however, where scaling AI systems yields diminishing returns, and developers are refocusing their energy on training models with better quality data, not more volume. After talking to engineers, researchers, and investors, Roose changed his mind. “I’ve come to believe that what’s happening in AI right now is bigger than most people understand,” he wrote.
Clarity is further obscured by conflicting predictions over the timeframe for “true” AGI (itself a debated concept). AIMultiple Research polled 8,600 experts and found a 50% probability of human-level machine intelligence between 2040 and 2061. Anthropic CEO Dario Amodei has suggested it could happen as early as 2026. Meanwhile, a survey of 475 AI scientists found that 76% believed it was “unlikely” or “very unlikely” that scaling current approaches would achieve AGI.
With all that in mind, Roose says we are past the tipping point: overpreparing for AGI is now low risk compared to the high risk of underpreparing. Unlike today’s AI systems, which excel in narrow specialties but struggle with novel challenges outside their training data, AGI would bring true flexibility, creativity, and cross-domain problem-solving, according to Gartner. It is incumbent on businesses to at least get a sense of the landscape.
The AGI Continuum: Emerging Capabilities
With OpenAI suggesting “superintelligence” is on its way, the gap between the leading edge of AI and enterprise use cases will only grow. For CIOs like Rockwell Automation’s Chris Nardecchia, businesses need to focus on the present by using gen AI for augmentation “rather than replacement—creating tools that help human teams make smarter, faster decisions.” In other words, get the existing technology right before fretting about what’s coming down the pike.
The immediate task is to develop protocols that take advantage of agentic AI’s capacity to make autonomous decisions across domains while keeping humans in the loop. None of this is new. The guardrails that serve businesses well now will form the basis for future iterations, including AGI, even as larger ethical questions remain.
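What such a guardrail looks like in practice will vary by organization. Purely as an illustrative sketch, one simple pattern is an approval gate that routes an agent’s proposed actions by the function being performed and escalates higher-risk ones to a human reviewer; the function categories, risk thresholds, and reviewer callback below are hypothetical, not drawn from any particular vendor or framework.

from enum import Enum

class Function(Enum):
    GENERATE_CONTENT = "generate content"
    MAKE_DECISION = "make decision"

# Illustrative risk thresholds per function (hypothetical values only).
RISK_THRESHOLDS = {Function.GENERATE_CONTENT: 0.8, Function.MAKE_DECISION: 0.5}

def review_action(function, risk_score, approve):
    """Allow low-risk actions automatically; escalate the rest to a human."""
    if risk_score < RISK_THRESHOLDS[function]:
        return True  # within guardrails: the agent proceeds on its own
    return approve(function, risk_score)  # human in the loop for higher risk

# Example usage: a stand-in reviewer that asks a person at the console.
ask_reviewer = lambda f, r: input(f"Approve '{f.value}' (risk {r})? y/n: ").strip() == "y"
print(review_action(Function.MAKE_DECISION, 0.7, ask_reviewer))

The point of the design is the one made above: the gate keys off the function being performed, not the specific model behind it, so the same guardrail can persist as the underlying systems change.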
Insurer Liberty Mutual offers a case in point. The company’s internal version of ChatGPT supports nine use cases that save 200,000 hours of human labor and generate about $100 million in savings, managing director Tony Marron told CIO. But getting there required integrating business and technology, and replacing the old human-to-machine model with a human-AI partnership, over a number of years. The irony of the rapid-fire announcements of new AI capabilities is that earning a reliable ROI on earlier-generation projects cannot be rushed. As ever, first principles still apply.
“Overpreparing for AGI is now low risk compared to the high risk of underpreparing.”
Three Points for Adaptability
Artificial intelligence is not a one-time event. Whether the road to AGI is short or long, there are three things business leaders can do now to build greater adaptability to future change:
First, establish clear AI governance principles that keep humans in the loop as capabilities increase. These guardrails should focus less on specific AI models and more on the functions they perform, whether generating content, making decisions, or eventually developing understanding across domains. Second, commit to a cross-functional approach tied to practical business use cases. With a focus on clean data and clear business objectives, organizations can avoid both overestimating current AI capabilities and underestimating future potential.
This leads to the final point: shift from viewing AI systems as tools to viewing them as partners in decision-making. By seeing AI advancement as a continuum rather than a destination, businesses can create collaborative human-AI workflows that capture value at each step of AI’s evolution. AGI may mean different things to different people, but occupying the practical middle ground will ensure we remain prepared for whatever comes next.