I make a living listening to people who build and fund cloud software. I’m not pitching expertise. I’m offering an intuitive read from repeated observations. Most days I’m with CIOs, CTOs, product leads, founders, and the teams who keep things running. I help them prep AWS talks, write what they know, and say it plainly. If you’re thinking, “who is this guy?”—fair. I’m not a guru; I’m a listener. I sit in reviews, renewals, roadmap calls, and investor meetings, and I connect the dots.
In March 2025, Anthropic CEO Dario Amodei predicted that within a year, AI may be writing “essentially all of the code” in our software.
At the same time, AWS’s growth was 19%: below the boom years and short of investor hopes, though up from roughly 13% in 2023. Taken together, these developments signal a tectonic shift:
Artificial intelligence is on track to replace the human-driven model of cloud computing, potentially bursting the near-monopoly that AWS, Microsoft Azure, and Google Cloud hold over enterprise infrastructure.
In other words, what I am hearing is that AI won’t just generate applications, it will soon provision, deploy, and run them autonomously, undermining the very model that made “Big Cloud” indispensable.
The Inflection Point
The year 2024 marked a clear turning point at the intersection of AI and cloud computing. Generative AI has exploded into mainstream use—from coding assistants to chatbot agents—at the same time that cloud providers are… well, aging. Maturing. Hitting that awkward phase where you wonder if they’ve peaked. Leading AI systems can now build and debug software faster than many human engineers. This shift is fundamentally rewriting workflows. Amodei isn’t alone in his outlook. Many tech leaders are now projecting that: → Most code will be written by AI within a year. That kind of prediction does something weird to your coffee. It also forces the next logical question:
If AI can write and test code… why can’t it deploy and run it too?
That question cuts deep into the traditional DevOps model. For the last decade, companies have leaned on squads of developers and cloud engineers to:
- Write software
- Set up and scale environments
- Keep applications running across AWS, Azure, etc.
But here’s what’s happening now: → AI has taken over part one: code creation → And it’s creeping into part two: infrastructure ops
In mid-2024, longtime DevOps pioneer John Willis put it like this:
“We can issue voice commands like [Star Trek’s] Scotty and automate complex tasks… it’s a glimpse into a future where AI streamlines and simplifies workflows.”
– John Willis
(Yes. The future of ops might talk back.) The reason this is even possible? AI can now:
- Understand natural language
- Parse system context
- Translate vague business requests into concrete infra-level actions
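To make that last capability concrete, here is a toy Python sketch of the "vague request in, infrastructure action out" pattern. Everything in it is invented for illustration: a real agent would use an LLM for intent parsing rather than a keyword heuristic, and would emit real cloud API calls rather than a dataclass.

```python
# Hypothetical sketch: mapping a fuzzy ops request to a structured action.
# The service name, verbs, and keyword rules are all stand-ins; an LLM
# would do the interpretation in a real agent.
from dataclasses import dataclass, field

@dataclass
class InfraAction:
    verb: str                      # e.g. "scale", "deploy", "rollback"
    target: str                    # e.g. "checkout-service"
    params: dict = field(default_factory=dict)

def interpret(request: str) -> InfraAction:
    """Translate a natural-language ops request into a concrete action."""
    text = request.lower()
    if "slow" in text or "scale" in text:
        return InfraAction("scale", "checkout-service", {"replicas": "+2"})
    if "deploy" in text or "release" in text:
        return InfraAction("deploy", "checkout-service", {"version": "latest"})
    if "roll back" in text or "rollback" in text:
        return InfraAction("rollback", "checkout-service", {"to": "previous"})
    return InfraAction("noop", "none")

action = interpret("Checkout feels slow for EU users, can you fix it?")
print(action.verb, action.params)  # "slow" maps to a scale-out action
```

The point is not the heuristic; it is the shape of the interface. Once requests become structured actions, the execution side no longer needs a human.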
We’re no longer dreaming about AI agents that build, deploy, and troubleshoot code. They’re here. They’re working. And they’re doing the jobs cloud operations teams once did—minus the coffee breaks. So here’s the bigger picture. This convergence—AI mastering both the creation and the deployment of software—isn’t just another trend. It’s forming the backbone of a thesis quietly spreading through tech circles:
Cloud computing as we know it is about to be disrupted. Or already has been.
The idea is simple. → Once AI can both write and run software → The logic of cloud platforms as human-centric toolsets starts to break down Yes, cloud contracts still dominate enterprise IT. Yes, hyperscalers continue pushing out new services like clockwork. But behind the scenes, a silent revolution is already underway. The very same AI that builds your app can now increasingly: → Deploy it → Monitor it → Scale it → Fix it No ticket queues. No YAML meltdowns. No “who broke prod?” slacks. That’s the inflection: The center of gravity is shifting… From human-driven cloud tooling to AI-driven infrastructure automation. And if you blink, you might miss the handoff.
Why the Traditional Cloud Model is Vulnerable
If Big Cloud’s dominance was built on providing tools for humans to manage systems, then that advantage is starting to wobble a bit in this new AI-first era.
Already, cracks are showing.
→ Growth in cloud revenue is slowing → Investor confidence? Also slipping
Case in point:
- In Q4 2024, Amazon’s AWS unit grew just 19% year-over-year
- Solid by most standards, but well off the hypergrowth pace the market had priced in
- It missed analyst expectations across the board
Wall Street took one look and panicked. Amazon lost nearly $90 billion in market value—in a single day. (Yes, billion with a “b.”) Why? The cloud numbers were soft. And the outlook? Cautious. Not the vibe investors wanted.
Daniel Morgan, a portfolio manager at Synovus Trust, summed it up:
“After very strong third-quarter numbers, this quarter the growth rates all missed. That’s what the market doesn’t want to hear.” – Daniel Morgan
And he wasn’t just talking about AWS. Across the Big Three, the earnings told a similar story: AWS growth disappointed, Google Cloud stayed strong but margin-pressured, and even Azure’s AI-fueled re-acceleration came at a steep cost. The common threads:
- Slower growth
- Higher costs
- Bigger spending on AI infrastructure (which, let’s be honest, hasn’t quite paid off yet)
So yes, they’re investing heavily in AI to stay relevant. But for now, it’s mostly expense, not upside.
And when the growth story falters—even slightly—people start asking bigger questions. Like: What if AI doesn’t just live in the cloud… what if it competes with it?
Behind those headline numbers is a broader trend: businesses are rethinking the cloud cost model. More leaders are tightening cloud budgets; fully one-third of IT executives say cost control is now their most critical focus, up eight percentage points from the prior year. For years, cloud providers enjoyed surging demand as companies rushed to migrate and scale up. But recently, high cloud bills and mixed ROI have prompted some firms to “repatriate” workloads back on-premise or seek alternatives.
A prominent example is 37signals, maker of Basecamp and Hey. The company’s CTO, David Heinemeier Hansson, publicly detailed how moving off AWS saved them about $1.9 million in 2023 alone. “When we move out [of AWS S3] next summer, … our total projected savings from the combined cloud exit will be well over $10 million over five years,” Hansson wrote in late 2024.
Cloud providers also face the reality that much of their revenue comes from “value-add” services – managed databases, monitoring tools, analytic engines – layered atop basic compute and storage. If AI can handle monitoring, scaling, and healing of systems internally, companies might question why they’re paying extra for cloud monitoring or APM tools. In effect, AI threatens to commoditize the cloud. “DevOps is one of the most compelling use cases of generative AI,” says Jon Jones, AWS’s Global Head for Startups. AWS itself acknowledges that startups can streamline operations with AI, which is exactly what erodes the need to buy many cloud-managed services.
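Here is a deliberately tiny illustration of the kind of monitor-and-heal loop the paragraph above describes, the logic companies currently pay monitoring and APM vendors to surface. The metric names and thresholds are invented; this is a sketch of the idea, not any vendor's product.

```python
# Toy self-healing policy: one observation of a service in, one
# remediation action out. Thresholds are made up for illustration.
def decide(metrics: dict) -> str:
    """Return a remediation action for a single service observation."""
    if metrics["error_rate"] > 0.05:   # >5% errors: likely a bad instance
        return "restart"
    if metrics["cpu"] > 0.80:          # sustained high CPU: add capacity
        return "scale_out"
    if metrics["cpu"] < 0.20:          # idle fleet: shed capacity, cut cost
        return "scale_in"
    return "ok"

observations = [
    {"cpu": 0.92, "error_rate": 0.01},
    {"cpu": 0.45, "error_rate": 0.09},
    {"cpu": 0.10, "error_rate": 0.00},
]
print([decide(m) for m in observations])
```

If an AI operator closes this loop internally, with judgment learned from the system itself rather than hard-coded thresholds, the separate line item for a managed monitoring service gets hard to justify.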
Finally, consider the immense capital expenditures the cloud giants are pouring into AI. According to McKinsey, spending on AI-specific infrastructure could reach a staggering $6.7 trillion by 2030. In 2024, Amazon, Microsoft, and Alphabet together spent roughly $175 billion in capital expenditures, and their guidance plus ongoing AI build-outs imply much higher totals in 2025, much of it on AI chips and data center construction. This bet is raising eyebrows on Wall Street. Investors are impatient for returns on these multibillion-dollar outlays.
In short, Big Cloud’s growth is slowing, its customers are eyeing costs, and AI is changing the calculus of IT.
Add to that the arrival of autonomous agent frameworks like OpenAI’s recently announced “ChatGPT agent.” Unveiled in July 2025, it enables ChatGPT to perform multi-step tasks using tools and plugins.
For example, it can plan an entire project, write the code, and then use a connected cloud API to deploy that code – all without direct human instruction at each step. Large firms from Microsoft to SAP are pouring billions into such AI agents to “boost productivity and make operations more cost efficient,” as Reuters reported. The vision is that an AI agent could serve as an autonomous cloud engineer, provisioning servers, configuring networks, deploying updates, monitoring performance, and troubleshooting issues continuously.
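The plan, write, test, deploy loop described above can be sketched in a few lines. To be clear, the stub functions and the fixed plan below are hypothetical: a real agent generates the plan with a model and calls live cloud APIs, with no guarantee the steps look like this.

```python
# Minimal plan-and-execute skeleton of the autonomous-engineer pattern.
# All three "tools" are stubs standing in for an LLM code generator,
# a CI test run, and a cloud deployment API.
def write_code(spec: str) -> str:
    return f"code for {spec}"

def run_tests(artifact: str) -> str:
    return "tests passed"          # a real run would actually execute tests

def deploy(artifact: str) -> str:
    return f"deployed {artifact}"  # a real run would call a cloud API

def run_agent(goal: str) -> list[str]:
    log = []
    artifact = write_code(goal)    # step 1: generate the code
    log.append(artifact)
    log.append(run_tests(artifact))  # step 2: verify it
    log.append(deploy(artifact))   # step 3: ship it, no human gate per step
    return log

for step in run_agent("billing microservice"):
    print(step)
```

Notice what is absent: a ticket queue, an approval step, a handoff between a dev team and an ops team. That absence is the whole thesis.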
“With agents comes a shift to service-as-a-software. This is a tectonic change in how software is built, deployed and operated.”
– Swami Sivasubramanian
That quote from AWS’s New York Summit underscores how even the cloud leaders see the writing on the wall. In an “AI-native” cloud, many layers of human intervention simply disappear. The AI that builds an application can also deploy it on an optimal infrastructure (possibly designing the infrastructure on the fly), monitor it in real time, and continually improve it.
We are early in this transition, but the pieces are rapidly falling into place. Today it’s a mix of semi-autonomous tools and human oversight. Tomorrow’s picture, however, looks radically different: a self-driving cloud where humans simply define high-level goals and AI systems handle the rest. Companies like Amazon hope you’ll run those AI systems on their cloud, but what if the AI doesn’t need all the proprietary cloud services anymore? That scenario leads to far-reaching consequences.
The Timeline to Collapse
How might this transition play out year by year? Here’s a forecast from 2024 through 2030 based on current signals and the trajectory of AI adoption:
- 2024 – Early Signs: AI is widely used for code generation and testing this year. Major DevOps teams begin integrating AI copilots into their CI/CD pipelines. We see early AI-managed deployments in shadow IT projects – for example, a skunkworks team at a Fortune 500 uses an LLM agent to deploy a microservice without involving the central cloud team. Cloud providers continue healthy growth but note more customers optimizing usage. Forward-thinking companies start pilots of AI ops (often internally termed “Automation 2.0”). By the end of 2024, it’s accepted that “AI + DevOps” is a key enterprise theme, though full autonomy is rare.
- 2025 – Mainstream Experimentation: Generative AI is now commonplace in development. A majority of new software projects involve AI in coding, as predicted. Some companies begin experimental AI-managed environments for specific workloads (e.g. using an AI to run a dev/test environment entirely). Cloud growth continues to slow, especially on ancillary services – there are public reports of companies spending less on things like monitoring tools because their AI does it internally. We also see the first instances of cloud providers lowering prices or offering steep discounts on value-add services to entice customers to stay. By late 2025, stories of AI-run deployments (with only oversight from humans) start appearing at conferences.
- 2026 – Early Adopters Move Pipelines: A notable minority of enterprises (perhaps tech-forward firms, fintechs, etc.) migrate whole development pipelines to AI-native platforms. This might involve using a service that takes application source code and desired outcomes, then automatically handles cloud setup and deployment. Traditional cloud providers begin to feel pricing pressure on high-margin services. For the first time, AWS/Azure/Google face competitive bids not from each other, but from new AI-cloud startups that offer to host and run client applications at a fraction of the cost, leveraging autonomous optimization. We also observe some large cloud customers negotiating contracts that focus on bare metal or raw GPU leasing, as they plan to let their AI handle the software layer. Indicators of the shift: AWS’s annual re:Invent conference in 2026 has an entire track dedicated to AI-managed services (implicitly encouraging customers to use AWS as the base for AI ops, rather than leave).
- 2027–2028 – New Entrants and Cloud as Commodity: By 2027, full-scale AI-managed infrastructure deployments are happening at scale. A few “AI-native cloud competitors” emerge – companies that might not call themselves cloud providers, but effectively offer a one-stop AI-run environment. For instance, an “OpenAI Cloud” could appear, where a business gives OpenAI’s agent their code and data and it runs everything (with Azure or some partner in the background). Meanwhile, big cloud providers are forced into commodity pricing battles on core compute and storage. The profit from things like AWS Lambda, DynamoDB, etc., dwindles because AI platforms package cheaper or open-source equivalents. We likely see market consolidation: perhaps one of the Big Three clouds acquires or merges with a major AI middleware company, acknowledging that the standalone cloud model is waning. Cloud market share begins to shift – those that adapt (offering better AI integration, custom silicon, etc.) might tread water, while others lose out. Human DevOps and cloud support roles see significant layoffs by 2028 as companies trust automated systems more; this makes headlines and stokes debate about AI displacing jobs.
- 2029–2030 – Collapse and Realignment: Toward the end of the decade, the cloud giants in their current form face a steep decline in influence. Legacy cloud platforms start seeing outright drops in revenue or market share as AI-managed solutions overtake them for a growing number of workloads. It’s possible one of the big providers pivots away from cloud – for example, an AWS spin-off focusing solely on AI services, or Google Cloud rebranding around AI & data exclusively. Traditional enterprise IT conferences dwindle as the conversation shifts to AI strategy and foundation models. We might even witness alliances such as AWS partnering with former competitors (telecoms? hardware makers?) to stay relevant as mere infrastructure providers. By 2030, the notion of a “cloud provider” has blurred – many former customers now view them like utilities you plug an AI into, and some have left entirely for on-prem AI hubs. Importantly, a significant realignment occurs: companies that led in foundation models essentially become the new gatekeepers of computing. The stock market recognizes this by awarding higher valuations to AI platform companies (some of which were startups in the early 2020s) than to the erstwhile cloud giants. Cloud isn’t gone, but it’s no longer the center of the tech universe – it’s the plumbing, largely invisible, while the AI layer captures the imagination and the profits.
This timeline is aggressive, to be sure, but not implausible given the exponential progress in AI. Each year builds on the last as technical barriers fall and comfort with AI operations grows. By 2030, calling it the “collapse” of Big Cloud may be an exaggeration – they won’t vanish overnight – but it will certainly be a comprehensive changing of the guard.
Trend: Projected cloud infrastructure spend vs. AI infrastructure spend through 2030. Cloud growth plateaus while investment in AI-native infrastructure skyrockets, reflecting the shift in value.
Resistance and Denial Within the Industry
Despite these rapid shifts, there is plenty of resistance and skepticism within the industry. Many cloud proponents and DevOps professionals currently insist that AI isn’t ready to handle full infrastructure responsibilities. They point out, not unfairly, that today’s AI models can hallucinate or make incorrect decisions, which is dangerous for mission-critical systems. A senior DevOps engineer on one forum argued that AI lacks the advanced reasoning to truly replace human ops, especially when it comes to understanding unique environment quirks or inferring business context for incidents. “It’s not enough to just know how to write a Terraform script – most times these scripts need environment-specific parameters that an AI would have no clue about,” he wrote in mid-2024, echoing a common sentiment among practitioners. There is also a cultural aspect: operations teams take pride in their expertise, and some see the AI narrative as hype or even an affront to their craft.
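The engineer's Terraform point is worth making concrete. Generating infrastructure code is the easy half; the hard half is the environment-specific values nothing in a prompt can supply. The toy sketch below is invented (the parameter names and the rendered resource are illustrative, not real account values), but it shows where a blind AI hits the wall.

```python
# Toy illustration of the skeptic's argument: a deploy template that
# cannot be rendered without environment-specific values no model can
# infer. All names and values here are made up.
REQUIRED = ["vpc_id", "subnet_id", "db_secret_arn"]

def render_terraform(params: dict) -> str:
    """Render a minimal Terraform-style resource, or refuse if blind."""
    missing = [k for k in REQUIRED if k not in params]
    if missing:
        raise ValueError(f"cannot deploy, unknown environment values: {missing}")
    return (
        f'resource "aws_instance" "app" {{\n'
        f'  subnet_id = "{params["subnet_id"]}"  # inside {params["vpc_id"]}\n'
        f'}}'
    )

try:
    render_terraform({"vpc_id": "vpc-123"})  # an AI guessing without context
except ValueError as err:
    print(err)
```

The counterargument, of course, is that this is a context problem, not a reasoning problem: give the agent read access to the environment and the "no clue" objection starts to erode. That is exactly the debate playing out.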
On the vendor side, cloud executives publicly remain upbeat that AI will augment rather than replace their services. They emphasize how AI can help customers use their cloud more efficiently (a bit of a double-edged sword). At AWS re:Invent and Microsoft Ignite events in 2023 and 2024, the messaging was that “cloud and AI go hand-in-hand” – i.e., the cloud is still essential because it provides the massive compute power needed for AI. Satya Nadella, Microsoft’s CEO, often described Azure as “the world’s computer” powering the AI era, implicitly reinforcing that even AI needs a platform to run on. This is true in the short term – large models require big iron – but it sidesteps the question of who orchestrates that power. The cloud incumbents tend to frame AI as just another workload on their clouds (like big data or IoT was), rather than a fundamental shift that could bypass them. This could be a dangerous form of denial if they fail to plan for a scenario where they supply commodity compute while someone else owns the customer relationship through AI.
Internally, there are reports of tension. An engineer at a major cloud provider (speaking anonymously in an Insider piece) said that suggestions to build more autonomous services sometimes met with reluctance from product managers whose KPIs were tied to usage of existing (manual) services. It’s a classic innovator’s dilemma scenario playing out in real time. We saw a hint of this when Google’s Cloud CEO Thomas Kurian in 2024 downplayed the threat of customers leaving for on-prem, even as Google invested heavily in AI-assisted cloud tools. The official line was that AI will drive more cloud consumption (because of the computational demands), which is true in the near term – training GPT-4 or running large inference workloads definitely boosted cloud revenues in 2023–2025. But that short-term conflation of AI’s rise with cloud strength may lead cloud leaders to misjudge the long-term trend. There is a bit of a “Kodak effect”, where they see the new technology but interpret it in a way that doesn’t threaten their core.
We also have outright denial in some developer communities. A segment of engineers simply does not believe an AI could reliably manage systems at the level of a seasoned SRE (Site Reliability Engineer). They point to the many tacit knowledge elements in ops: the war stories, the tribal knowledge of “this one server tends to fail if X happens,” or the judgment calls during an outage. These skeptics have a motto: “AI can’t page itself at 2 AM – and even if it could, would you trust it to fix what’s wrong?” For now, many organizations answer “not yet.” Trust is a huge barrier – one doesn’t hand over the keys to production lightly. Thus, we see resistance in the form of deliberate slow adoption: companies might use AI suggestions for ops but still require a human to approve changes, for example.
However, the resistance might melt faster than anticipated. Notably, younger companies and new developers seem more open to AI tools (they have less legacy process to unlearn). And some industry voices are warning peers not to be complacent. “I think people still aren’t recognizing the effect AI could have on their lives and livelihoods,” Anthropic’s Dario Amodei said in early 2025. “People will wake up to [it] … to a much more extreme extent over the next two years,” he cautioned. In other words, denial can flip to acceptance (or shock) very quickly when results start to outpace skepticism. We saw a parallel in the 2010s with cloud adoption itself – many IT admins once said they’d never trust the public cloud with sensitive workloads, until suddenly they did and it became normal.
Inside the cloud giants, there’s likely a mix of public optimism and private anxiety. A striking data point: AWS’s layoffs in 2025 included hundreds from older enterprise support divisions, even as they hired in AI units. It suggests an internal recognition that roles must change. But don’t expect any cloud CEO to get on stage and herald the end of their own golden goose. If anything, they will double down on messaging that they are the best place to run AI (which in the near term is true for large models). The real question is whether they pivot their business models in time.
To sum up, cultural and organizational inertia are perhaps the last lines of defense for the traditional cloud model. As long as CIOs are wary of “too much automation” or developers feel safer with a human in the loop, the collapse can be delayed. But if the technology keeps proving itself – and cost pressures keep rising – resistance may quickly turn into a rush to the exits, catching the laggards off guard.
What Comes After
If the Big Cloud era as we know it is nearing its end, what comes next? Who are the likely winners and how will the enterprise infrastructure landscape be reshaped? There are a few plausible scenarios (not mutually exclusive):
1. Foundation Model Owners Become the New Cloud: Companies that build and maintain foundation models (the brains of AI) could leverage that position to absorb the value that cloud platforms once had. OpenAI, for instance, in partnership with Microsoft, might evolve into a full-stack solution where businesses entrust them with not only AI insights but the execution of tasks. We see hints of this: OpenAI’s hosting of ChatGPT functions on Azure blurred the line between application and infrastructure. Down the road, OpenAI could offer “AI Platform” deals directly to enterprises – you give us your data and goals, our AI does the rest – effectively bypassing Azure as a distinct brand (even if Azure hardware is underneath). Anthropic, with backing from Amazon and Google, could similarly become an orchestrator. Even Google itself could pivot: Google Cloud might transform into a Gemini AI Cloud, where Google’s own LLM (Gemini) is the star, and the underlying compute is de-emphasized. In these futures, the battle is between AI ecosystems (OpenAI vs Google vs Meta’s open models, etc.), not classical cloud feature checklists. Customers choose an AI partner more than an infrastructure provider. The “cloud” in this context becomes analogous to the electrical grid – it’s there, it’s necessary, but not a differentiator.
2. Decentralized and Edge Infrastructure: It’s possible that instead of a few big data center operators, we move to a more decentralized model of compute provision. If AI agents can coordinate complex tasks, they might be able to knit together resources from many locations. Consider projects like Stanford’s DAO for compute or blockchain-based compute marketplaces (which have existed in nascent form – e.g., Golem network for CPU power). To date, these haven’t threatened AWS in any serious way because managing distributed nodes was too complex for most users. But an AI that can manage distribution and fault tolerance might make it viable. We could see decentralized cloud networks where many small players (even individuals with powerful rigs) contribute capacity that AI dynamically allocates. Similarly, edge computing (devices and on-prem servers at the edge of the network) could get a boost. If an AI can decide to run certain workloads closer to users (for latency or privacy) and others in central zones, it could seamlessly shift work to the edge when beneficial. This scenario might not topple the cloud giants entirely, but it could chip away at certain sectors (for example, IoT or content delivery might move to more local AI-managed clusters rather than central clouds).
3. Hybrid Models & New Hybrids: In all likelihood, the future will be a mix – a hybrid of old and new. The current clouds will try to reinvent themselves, possibly through acquisitions or partnerships. One could imagine AWS evolving to primarily offer AI-friendly bare-metal and specialized silicon (like its Trainium and Inferentia chips) at cut-rate prices, essentially saying “we are the cheapest place for your AI to run, even if your AI is in charge.” Microsoft might double down on an integrated approach (with GitHub, Azure, and OpenAI all tightly woven) to retain customers in its ecosystem. Also expect mergers between AI startups and traditional IT firms: perhaps an enterprise software giant like Oracle or IBM (which have a lot of enterprise data access) merging with an AI leader to form a new kind of full-stack company.
Meanwhile, entirely new types of companies could emerge. Imagine a “MetaCloud AI” company that doesn’t own data centers at all, but acts as a broker – its AI finds the best spot to run your workload at any moment (one hour it’s on Google, the next on AWS, then on a private cluster), optimizing for cost and efficiency, and charging you a subscription for the service. This sort of cloud arbitrage, managed by AI, would turn infrastructure into a true commodity marketplace. The broker AI captures value by saving you money and automating the complexity. We already see rudimentary versions in multi-cloud management platforms; add AI and you have a powerful intermediary that weakens loyalty to any single cloud.
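The broker logic at the core of that "MetaCloud" idea is, at bottom, a constrained cheapest-bid selection. Here is a back-of-envelope sketch; the providers, prices, and constraints are all invented, and a real broker would also weigh egress fees, SLAs, data gravity, and the cost of migration itself.

```python
# Hypothetical cloud-arbitrage sketch: pick the cheapest provider whose
# offering satisfies the workload's constraints. Every quote is made up.
QUOTES = [
    {"provider": "aws",     "usd_per_hour": 3.20, "gpu": True,  "region": "eu"},
    {"provider": "gcp",     "usd_per_hour": 2.90, "gpu": True,  "region": "us"},
    {"provider": "private", "usd_per_hour": 1.10, "gpu": False, "region": "eu"},
]

def place(workload: dict) -> str:
    """Return the cheapest provider satisfying the workload's requirements."""
    candidates = [
        q for q in QUOTES
        if (not workload["needs_gpu"] or q["gpu"])
        and (workload["region"] is None or q["region"] == workload["region"])
    ]
    if not candidates:
        raise RuntimeError("no provider satisfies the constraints")
    return min(candidates, key=lambda q: q["usd_per_hour"])["provider"]

print(place({"needs_gpu": True, "region": None}))   # cheapest GPU anywhere
print(place({"needs_gpu": False, "region": "eu"}))  # cheapest EU capacity
```

Run continuously, with live quotes and automated migration, this is what turns infrastructure into a spot market and loyalty into a rounding error.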
Another aspect of “what comes after” is the enterprise IT stack is redefined. The classic stack (applications -> middleware -> OS -> hardware -> data center) could compress. If AI handles what middleware and OS and even some application logic used to do, the stack might simplify to just “AI layer -> generic hardware.” CIOs of the future might think less in terms of choosing databases, monitoring systems, and so on, and more in terms of training or configuring their AI operator with the right knowledge. The skills companies need will shift too: prompt engineering, AI policy setting, model oversight – these could replace traditional cloud certifications. As one CTO quipped, “Maybe our next ‘cloud architect’ hire will actually be a prompt architect for the AI that runs our cloud.”
It’s also worth considering who might thrive in this new order. Hardware manufacturers like Nvidia, AMD, and emerging AI chip makers have a strong hand – they supply the picks and shovels of the AI gold rush. There’s speculation that Nvidia could even offer more cloud-like services (it already has Nvidia DGX Cloud, which partners with core data centers to rent AI supercomputing). If the likes of Nvidia or even large enterprises themselves (say, Apple or Tesla building internal AI compute farms) decide to enter the cloud market in a new way, they might bypass traditional models and deliver AI-optimized infrastructure directly.
Finally, open-source communities might play a big role. Just as Linux eroded proprietary UNIX and became the standard OS, we might see an open-source AI orchestrator become the default “cloud brain” that everyone uses (instead of each cloud having its own). Projects are already underway to open-source LLMs and even autonomous agent frameworks. If successful, this could democratize the AI-cloud capability and prevent any one entity from totally dominating. In that scenario, the winners are those who can best implement and support these open tools – potentially a new breed of service companies (the Red Hats of AI, so to speak).
In all these possibilities, what’s clear is the center of gravity shifts upward. Those who control the intelligent layer – whether it’s OpenAI, a consortium, or some new player – will call the shots. The current cloud providers will either adapt to serve that layer (becoming more like utilities or specialized providers) or see themselves eclipsed by it. As Tom Krazit reported, “this shift represents perhaps the greatest threat to [our] perch atop the enterprise world since [AWS] grabbed that spot a decade ago.”
In summary, after the dust settles, we’ll likely still have big companies providing computing power, but the basis of competition will be radically different. We might talk about “Which AI platform are you building on?” the way we used to ask “Which cloud are you on?” The faces at the top of the industry could change – tomorrow’s tech titans might be names that are only just emerging now. And the very notion of cloud computing may evolve from thousands of humans clicking in consoles to millions of automated decisions per second made by algorithms, behind the scenes, quietly running the world.
Conclusion (Implications + Outlook)
The coming collapse of Big Cloud is not a literal overnight implosion, but a metaphor for an irreversible power shift. The inevitability of this AI-driven transformation carries profound implications for CIOs, CTOs, and tech leaders everywhere. It means that strategies anchored in yesterday’s assumptions – infinite developer headcount, ever-expanding cloud service menus, and human-paced release cycles – will need a top-to-bottom rethink. In the new paradigm, agility comes from how well you can leverage AI to do heavy lifting, not how many engineers or cloud contracts you have. CIOs should begin investing in AI competency within ops teams, even if it means redefining roles (turning system admins into AI supervisors, for example). There will be tough organizational culture shifts as well, as teams used to doing things manually adapt to a supervisory and training role for AI. Companies that embrace an AI-first infrastructure ethos stand to gain agility and cost advantages that late adopters will find hard to match – we could see widening competitive gaps in every industry, driven by tech efficiency.
For the cloud providers, the outlook demands evolution. It’s likely we’ll see them increasingly tout “co-pilot” features for their platforms – basically acknowledging that every cloud user will have an AI assistant (or several) helping them. We may also see new pricing models as the value of some services diminishes. By 2030, the enterprise tech stack might be unrecognizable: perhaps a slim core of ultra-optimized compute providers (some of today’s clouds among them) and on top, a vibrant ecosystem of AI orchestration services that handle everything else.
One forward-looking signal is the talent market. Job postings for “AI operations” or “Prompt engineer for IT automation” are appearing, and venture capital is flowing into startups that promise to reduce cloud costs via AI automation. In a sense, the free market is already betting on this shift. Enterprise software vendors like ServiceNow and DataDog are rapidly adding AI features to avoid being leapfrogged – an indicator that every layer of the stack is adjusting.
Ultimately, this redefines the enterprise stack from bottom to top. Compute becomes utility; cloud becomes meta-cloud (a resource pool); and AI becomes the new application platform and decision-maker. The enterprise of the 2030s might run on a self-managing “autonomous stack” where business logic, infrastructure logic, and optimization logic are all entwined in an AI that continuously learns and adapts. That raises new challenges – from trust and ethics to compliance (who audits an AI ops system?) – but those will be the new frontier issues, replacing today’s cloud architecture discussions.
For now, tech leaders should view these changes not as a threat but as an opportunity to leapfrog competitors. Just as early cloud adopters in the 2010s outpaced those stuck on-prem, early AI-cloud adopters in the 2020s will outpace those stuck in manual mode. The message is clear: prepare now. Experiment with AI in operations, start small but think big. Because once the tipping point is reached, the shift will happen faster than organizations can react. As one industry CEO quipped, “There’s no going back.” The era of human-led cloud operations is winding down; the era of AI-managed infrastructure is dawning. Those who embrace it will ride the next great wave of productivity – those who don’t may find themselves watching from the sidelines as the cloud’s center of gravity moves beyond them, up the stack and into the realm of intelligent automation.
In the end, the enterprise technology stack – and the giants who dominate it – are being reborn above the cloud, in the mind of the machine. The companies that recognize that shift will lead the future, and those that don’t will, as the thesis says, collapse or realign. The cloud isn’t disappearing; it’s evolving. And it’s up to today’s tech leaders to ensure they evolve with it, not after it.
Quotes & Sourcing
- Dario Amodei (CEO, Anthropic) – “I think we will be there in three to six months, where AI is writing 90% of the code… in 12 months, AI [will be] writing essentially all of the code.” (Council on Foreign Relations event, Mar 2025)
- Daniel Morgan (Sr. Portfolio Manager, Synovus Trust) – “After very strong third-quarter numbers, this quarter the growth rates all missed. That’s what the market doesn’t want to hear.” (on cloud earnings, Feb 2025)
- John Willis (DevOps pioneer) – “I imagine a future where we can issue voice commands like Scotty on Star Trek and automate complex tasks… These tools provide a glimpse into a future where AI streamlines and simplifies workflows.” (reflecting on AI assistant Clio, Aug 2024)
- Venkat Thiruvengadam (CEO, DuploCloud) – “By deepening our collaboration with AWS, we’re taking another major step in helping fast-growing companies deploy cloud workloads securely and efficiently… customers focus on innovating while we handle the complexity of DevOps.” (press release, Jul 2025)
- Jon Jones (VP Startups, AWS) – “DevOps is one of the most compelling use cases of generative AI by startups, helping them streamline security and compliance while building their solutions.” (on AWS–DuploCloud partnership, Jul 2025)
- David Heinemeier Hansson (CTO, 37signals) – “…Our total projected savings from the combined cloud exit [is] well over $10 million over five years! While getting faster computers and much more storage.” (LinkedIn post, Oct 2024)
- Swami Sivasubramanian (VP, AWS AI) – “With agents comes a shift to service as a software. This is a tectonic change in how software is built, deployed and operated.” (AWS Summit New York, Jul 2025)
- Dario Amodei (CEO, Anthropic) – “I think people will wake up to both the risks and the benefits [of AI] to a much more extreme extent… over the next two years.” (NYTimes Hard Fork podcast, Feb 2025)
- Matt Garman (SVP, AWS) – (On AWS being late to generative AI in 2022) “We didn’t have some of the whiz-bangy things that you could get out there quickly.” (Fortune interview, Jul 2025)
- Tom Krazit (Enterprise Tech Journalist) – “This shift represents perhaps the greatest threat to [AWS’s] perch atop the enterprise world since it grabbed that spot around a decade ago.” (Runtime Newsletter, Jul 2025)