[Figure: cloud providers comparison chart showing AWS, Azure, Google Cloud, Oracle, and alternatives]

The cloud market in 2026: $515B annual run rate and growing 35% year over year

Last updated: May 2026. All pricing, market share, and revenue figures reflect Q1 2026 earnings reports and publicly available data. Prices shown are US regions unless noted.

The Cloud Market in 2026

Cloud infrastructure spending hit $128.6 billion in Q1 2026, putting the industry on a $515 billion annual run rate. That is 35% year-over-year growth, accelerating from 22% in 2024. The driver is obvious: every company on earth is racing to deploy AI workloads, and cloud providers are the ones selling the shovels.

The Big Three (AWS, Azure, Google Cloud) still control 63% of the market combined, but the story underneath is shifting. Google Cloud is growing at 63%, more than three times the rate of AWS. Oracle Cloud posted 84% growth. Cloudflare is quietly building a full-stack cloud platform from the edge inward. And a growing number of companies are moving workloads off the cloud entirely.

| Provider | Q1 2026 Revenue | Market Share | YoY Growth | Key Trend |
| --- | --- | --- | --- | --- |
| AWS | $37.6B | 28% | 19% | Custom silicon, Bedrock expansion |
| Microsoft Azure | $34.7B* | 21% | 40% | OpenAI integration, Copilot ecosystem |
| Google Cloud | $20.0B | 14% | 63% | AI/ML leadership, Wiz acquisition |
| Oracle Cloud | $6.2B | 4% | 84% | Multicloud database, AI infra |
| Alibaba Cloud | $4.2B | 3% | 17% | China market dominance |
| IBM Cloud | $3.1B | 2% | 12% | Hybrid cloud, Red Hat |
| Others | $22.8B | 28% | Various | Fragmentation, sovereignty |

*Azure revenue estimated from Microsoft Intelligent Cloud segment. Microsoft does not break out Azure separately.

The $515B annual run rate does not include SaaS revenue (Salesforce, Workday, etc.) or private cloud infrastructure. Including those, total cloud spending exceeds $800B. But this guide focuses on infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) providers where you deploy and run your own workloads.

Why 2026 is different: AI infrastructure spending now accounts for 30-40% of new cloud workloads. Every provider is racing to offer GPU clusters, model hosting, and inference APIs. The cloud wars are now AI wars, and the winners will be determined by who has the best silicon, the best models, and the lowest inference costs.

Amazon Web Services (AWS)

AWS remains the largest cloud provider by revenue and the default choice for most startups and enterprises. With $37.6 billion in Q1 2026 revenue and a 28% market share, AWS is the platform most developers learn first and most companies deploy to first.

But the narrative is shifting. AWS market share has eroded from 33% in 2022 to 28% in 2026. Azure and Google Cloud are growing faster. And the AI narrative, where the real growth is happening, has been dominated by Microsoft (via OpenAI) and Google (via Gemini and TPUs). AWS has responded aggressively with custom silicon and the Bedrock multi-model platform, but it is playing catch-up in the AI mindshare war.

AWS by the Numbers

| Metric | Value |
| --- | --- |
| Q1 2026 Revenue | $37.6B |
| Market Share | 28% |
| Total Services | 402 |
| Regions | 39 |
| Availability Zones | 123 |
| Planned Capex (2026) | $200B |
| Custom Silicon Investment | $20B+ |

Strengths

Broadest service catalog. 402 services is not a vanity metric. It means that whatever you need, from IoT device management to satellite ground stations to quantum computing simulators, AWS probably has a managed service for it. This breadth reduces the need for third-party tools and keeps everything under one billing umbrella.

Custom silicon is paying off. AWS has invested over $20 billion in custom chip development, and it shows. Graviton4 processors deliver 30% better price-performance than equivalent x86 instances. Trainium2 chips for AI training are 4x more energy-efficient than previous generations. Inferentia2 chips cut inference costs by up to 50% compared to GPU-based alternatives for supported models. This vertical integration gives AWS a structural cost advantage that competitors cannot easily replicate.

Serverless leadership. Lambda, Fargate, Aurora Serverless, DynamoDB, S3, SQS, EventBridge, Step Functions. AWS has the most complete serverless ecosystem. You can build entire production applications without managing a single server. The Lambda SnapStart feature has largely solved the cold start problem for Java workloads, and Lambda now supports response streaming for real-time AI inference.

Bedrock multi-model platform. Rather than betting on a single AI model, AWS built Bedrock as a marketplace. You get access to Claude (Anthropic), Llama (Meta), Mistral, Cohere, Stability AI, and Amazon's own Nova models through a single API. This lets you switch models without changing your application code, a significant advantage as the model landscape evolves rapidly.

Weaknesses

Market share erosion. Losing 5 percentage points of share in 4 years is significant. AWS is still growing in absolute terms, but Azure and Google Cloud are growing faster. Enterprise deals that would have defaulted to AWS in 2020 now go through competitive evaluation, and Microsoft's enterprise relationships give Azure an edge in those conversations.

AI narrative lagged. When ChatGPT launched in late 2022, Microsoft had the partnership. When Gemini launched, Google had the model. AWS had... Bedrock, which is excellent but does not generate the same excitement. Amazon's own Nova models are competitive on price but trail GPT-5.5 and Gemini 2.5 on benchmarks. In a market where AI is the primary growth driver, perception matters.

Pricing complexity. 402 services means 402 pricing pages. AWS billing is notoriously complex, with per-request charges, data transfer fees, cross-AZ costs, and NAT Gateway taxes that surprise even experienced teams. See our AWS cost optimization guide for specific strategies to manage this.

Watch out: AWS data transfer costs remain the highest among major providers. Egress from EC2 to the internet costs $0.09/GB after the first 100GB. At scale, this adds up fast. A workload transferring 100TB/month pays $9,000 in egress alone. Compare that to Oracle Cloud (10TB free) or Cloudflare R2 ($0 egress).
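To see how these numbers scale, here is a minimal sketch of the egress math, using the rates quoted in this guide and decimal units (1TB = 1,000GB). The `egress_cost` helper is illustrative, not a provider API:

```python
def egress_cost(gb_out: float, price_per_gb: float, free_gb: float = 0) -> float:
    """Monthly egress bill: billable gigabytes times the per-GB rate."""
    return max(gb_out - free_gb, 0) * price_per_gb

# 100TB/month out of AWS (first 100GB free, $0.09/GB after): ~$9,000
aws = egress_cost(100_000, 0.09, free_gb=100)
# Same traffic out of OCI (first 10TB free, $0.0085/GB after): $765
oci = egress_cost(100_000, 0.0085, free_gb=10_000)
# Cloudflare R2 charges nothing for egress
r2 = egress_cost(100_000, 0.00)
print(f"AWS ${aws:,.0f}  OCI ${oci:,.0f}  R2 ${r2:,.0f}")
```

The same function reproduces every row of the egress comparison table later in this guide, which is a useful check that the published rates are the whole story.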

Key Services to Know

  • Graviton4 instances (M7g, C7g, R7g) - 30% better price-performance than x86
  • Trainium2 - Custom AI training chips, 4x energy efficiency improvement
  • Bedrock - Multi-model AI platform with Claude, Llama, Mistral, Nova
  • Lambda SnapStart - Sub-second cold starts for Java, now expanding to other runtimes
  • Aurora I/O-Optimized - Eliminates per-I/O charges for heavy database workloads
  • S3 Express One Zone - Single-digit millisecond latency for hot data

```shell
# Quick cost check: look up on-demand pricing for a Graviton4 instance
aws pricing get-products \
  --service-code AmazonEC2 \
  --filters "Type=TERM_MATCH,Field=instanceType,Value=m7g.xlarge" \
            "Type=TERM_MATCH,Field=location,Value=US East (N. Virginia)" \
  --region us-east-1 # the Pricing API is served from a limited set of regions
```

Microsoft Azure

Azure is the fastest-growing of the two largest clouds and the platform that enterprises choose when they already live in the Microsoft ecosystem. With $34.7 billion in Intelligent Cloud revenue for Q1 2026 and Azure-specific growth of 40%, Microsoft is closing the gap with AWS faster than anyone predicted.

The OpenAI partnership has been transformative. Microsoft invested $13 billion in OpenAI and integrated GPT models across every product: GitHub Copilot, Microsoft 365 Copilot, Azure OpenAI Service, Bing, and Dynamics 365. This created a flywheel where enterprises adopting Copilot tools naturally expand their Azure footprint. The AI run rate hit $37 billion annualized, with over 20 million Copilot seats sold across enterprise customers.

Azure by the Numbers

| Metric | Value |
| --- | --- |
| Q1 2026 Intelligent Cloud Revenue | $34.7B |
| Market Share | 21% |
| Azure Growth Rate | 40% YoY |
| AI Annual Run Rate | $37B |
| Copilot Seats | 20M+ |
| Regions | 68 |

Strengths

Enterprise integration is unmatched. If your company uses Active Directory, Microsoft 365, Teams, Dynamics 365, or Power Platform, Azure is the path of least resistance. Single sign-on, unified billing, compliance inheritance, and deep integration between Azure services and Microsoft productivity tools create genuine switching costs that benefit existing Microsoft customers.

The OpenAI partnership. Azure OpenAI Service gives enterprises access to GPT-5.5, GPT-4o, DALL-E, and Whisper with enterprise-grade security, compliance, and SLAs that you cannot get from the OpenAI API directly. For regulated industries (healthcare, finance, government), this is the only way to use OpenAI models while meeting compliance requirements.

Copilot ecosystem. GitHub Copilot for code, Microsoft 365 Copilot for documents and email, Dynamics 365 Copilot for CRM, Power Platform Copilot for low-code apps. With 20M+ seats, Copilot is the largest enterprise AI deployment in the world. Each seat deepens the Azure relationship.

Hybrid cloud via Azure Arc. Arc lets you manage on-premises servers, Kubernetes clusters, and even AWS/GCP resources through the Azure control plane. For enterprises with significant on-premises infrastructure, this is a compelling bridge strategy that does not require a full cloud migration.

Weaknesses

Reliability concerns. Azure has had more high-profile outages than AWS or GCP in recent years. The July 2024 CrowdStrike incident (which was not Azure's fault but disproportionately affected Azure customers) and several Azure AD outages have made some enterprises nervous about single-vendor dependency on Microsoft for both productivity and infrastructure.

OpenAI exclusivity is eroding. OpenAI models are now available on Oracle Cloud, and OpenAI has signaled interest in broader distribution. If GPT models become available everywhere, Azure loses a key differentiator. Microsoft is hedging by investing in its own Phi models and partnering with Mistral, but the OpenAI moat is not as deep as it was in 2024.

Pricing transparency. Azure pricing is arguably even more complex than AWS. The Intelligent Cloud segment bundles Azure, Windows Server, SQL Server, GitHub, and other products, making it difficult to determine actual Azure IaaS revenue. Instance naming conventions (D-series, E-series, L-series) are less intuitive than AWS equivalents.

Azure's secret weapon: Enterprise Agreement (EA) discounts. Large enterprises with existing Microsoft EAs can negotiate Azure consumption commitments at 20-40% below list price. If your company already spends $1M+/year on Microsoft licenses, ask your Microsoft account team about Azure consumption credits bundled into your EA renewal.

Google Cloud Platform (GCP)

Google Cloud is the fastest-growing hyperscaler and the AI/ML leader. Q1 2026 revenue crossed $20 billion for the first time, representing 63% year-over-year growth. That is more than triple the growth rate of AWS. The $462 billion remaining performance obligation (backlog) signals that the acceleration is not slowing down.

The story is AI. Google Cloud's AI revenue grew 800% year-over-year, driven by Vertex AI, Gemini model hosting, and TPU (Tensor Processing Unit) infrastructure. The $32 billion acquisition of Wiz, the cloud security startup, signals Google's intent to compete seriously for enterprise security budgets. GCP is no longer just the platform for data engineers and ML researchers. It is becoming a full enterprise cloud.

GCP by the Numbers

| Metric | Value |
| --- | --- |
| Q1 2026 Revenue | $20.0B |
| Market Share | 14% |
| YoY Growth | 63% |
| AI Revenue Growth | 800% YoY |
| Remaining Backlog | $462B |
| Wiz Acquisition | $32B |
| Regions | 41 |

Strengths

AI/ML leadership. Google invented the Transformer architecture that powers every modern LLM. TPU v5p chips deliver the best price-performance for training large models. Gemini 2.5 Pro is competitive with GPT-5.5 on most benchmarks and significantly cheaper. Vertex AI provides a unified platform for training, fine-tuning, deploying, and monitoring ML models. If AI is your primary workload, GCP is the strongest platform.

BigQuery remains unmatched. BigQuery is the best serverless data warehouse in the cloud. It handles petabyte-scale analytics with zero infrastructure management, automatic scaling, and a pricing model (on-demand or flat-rate) that works for both startups and enterprises. BigQuery ML lets you train models directly in SQL. BigQuery Omni runs queries across AWS and Azure data without moving it.

GKE is the gold standard for Kubernetes. Google created Kubernetes, and GKE shows it. Autopilot mode eliminates node management entirely. GKE costs less than EKS ($0/month for the control plane in Autopilot vs $73/month for EKS). Multi-cluster management, Anthos for hybrid deployments, and the best Kubernetes networking stack in the industry make GKE the choice for container-native teams.

The Wiz acquisition. Paying $32 billion for a cloud security company signals that Google is serious about enterprise sales. Wiz's cloud-native application protection platform (CNAPP) will be integrated into GCP's security stack, giving Google a competitive answer to AWS Security Hub and Microsoft Defender for Cloud.

Weaknesses

Smaller enterprise sales force. Google has historically been an engineering-led company that struggles with enterprise sales motions. AWS and Azure have armies of solution architects, partner networks, and enterprise account managers. Google has improved significantly, but the gap remains. If you need hand-holding through a complex migration, AWS and Azure offer more support resources.

Fewer services. GCP offers roughly 200 services compared to AWS's 402. For most workloads this does not matter, but if you need niche services (IoT device management, satellite ground stations, mainframe migration tools), AWS has more options. GCP tends to offer fewer but more opinionated services.

Shutdown reputation. Google's history of killing products (Google Reader, Google+, Stadia, etc.) makes some enterprises nervous about long-term commitment. GCP has never shut down a core infrastructure service, but the perception persists. Google has addressed this with explicit deprecation policies and long support windows, but the "Google kills everything" meme is hard to shake.

Best for: AI/ML workloads, data analytics (BigQuery), Kubernetes-native architectures, and teams that want the most advanced AI infrastructure. If you are building products powered by large language models, GCP's combination of TPUs, Vertex AI, and Gemini is hard to beat.

Oracle Cloud Infrastructure (OCI)

Oracle Cloud is the comeback story of the cloud wars. With 84% year-over-year growth and a staggering $553 billion backlog, OCI has gone from afterthought to serious contender. The strategy is simple and effective: meet customers where they are with multicloud database deployments, undercut hyperscalers on AI infrastructure pricing, and offer the most generous free tier in the industry.

Multicloud Database Strategy

Oracle's killer move was Oracle Database@AWS, Oracle Database@Azure, and Oracle Database@Google Cloud. These are not just database connections. They are Oracle Database instances running inside AWS, Azure, and GCP data centers with sub-millisecond latency to your other cloud services. You get Oracle's database engine with your preferred cloud's compute, networking, and tooling.

This is brilliant because it removes the biggest barrier to Oracle Cloud adoption: nobody wants to migrate their entire stack to a new cloud just to use Oracle Database. Now you do not have to. Run Oracle Autonomous Database inside your existing AWS VPC, and Oracle gets the database revenue without needing to win the full infrastructure deal.

AI Infrastructure

OCI's GPU instances with NVIDIA H100s are 58% cheaper than equivalent instances on AWS. Oracle achieved this by building bare-metal GPU clusters with RDMA networking optimized for AI training workloads. For companies training large models or running inference at scale, the cost difference is significant. A training job that costs $100K on AWS costs $42K on OCI.

Always Free Tier

OCI's Always Free tier is the most generous in the industry: 2 AMD compute instances, 4 ARM Ampere A1 cores with 24GB RAM, 200GB block storage, 10TB/month outbound data transfer, and an Autonomous Database instance. This is not a 12-month trial. It is free forever. For personal projects, development environments, and small production workloads, OCI's free tier is unbeatable.

OCI's egress advantage: 10TB of free outbound data transfer per month. AWS gives you 100GB. At $0.09/GB, that 10TB would cost $900/month on AWS. For data-heavy workloads, OCI's egress policy alone can justify the switch.

IBM Cloud

IBM is not trying to compete with AWS on breadth or Google on AI. Instead, IBM has carved out a defensible position in hybrid cloud for regulated industries. The $34 billion Red Hat acquisition in 2019 is finally paying off, with Red Hat OpenShift generating $2B+ in annual recurring revenue and growing steadily.

The Hybrid Cloud Pivot

IBM's strategy is built on the reality that most large enterprises will never go 100% public cloud. Banks, healthcare systems, government agencies, and manufacturers have workloads that must stay on-premises for regulatory, latency, or data sovereignty reasons. IBM Cloud Pak and OpenShift provide a consistent platform that runs identically on-premises, on IBM Cloud, and on AWS/Azure/GCP.

watsonx AI Platform

watsonx is IBM's enterprise AI platform, focused on governance, explainability, and compliance rather than raw model performance. For a bank that needs to explain every AI decision to regulators, watsonx's built-in bias detection, model lineage tracking, and audit trails are more valuable than having the fastest inference speed. The Granite model family is trained on curated, licensed data to reduce legal risk.

Mainframes Are Not Dead

IBM's mainframe revenue grew 48% year-over-year in the latest quarter. The z16 and upcoming z17 processors handle more transactions per second than any cloud instance. For banks processing millions of transactions per second, mainframes remain the most cost-effective option. IBM Cloud's hybrid approach lets these customers keep mainframe workloads while extending to cloud for new applications.

IBM Cloud is not for everyone. If you are a startup or a cloud-native company, IBM Cloud offers little advantage over AWS, Azure, or GCP. The platform has fewer services, a smaller community, and less third-party tooling. IBM Cloud makes sense specifically for enterprises with existing IBM infrastructure, regulated industry requirements, or hybrid cloud mandates.

Alibaba Cloud, Tencent Cloud, and Huawei Cloud

The Chinese cloud market is a parallel universe. Alibaba Cloud, Tencent Cloud, and Huawei Cloud control 61% of the Chinese market combined, and Alibaba Cloud is the 4th largest cloud provider globally by revenue. But geopolitical barriers make these platforms largely irrelevant for Western companies, and vice versa.

Market Dynamics

| Provider | China Market Share | Global Rank | Key Strength |
| --- | --- | --- | --- |
| Alibaba Cloud | 33% | #4 | E-commerce, Qwen AI models |
| Huawei Cloud | 15% | #6 | Telecom, government, Ascend AI chips |
| Tencent Cloud | 13% | #7 | Gaming, social, WeChat ecosystem |

Alibaba's Qwen models are competitive with Western LLMs and are open-source, making them popular for on-premises deployments globally. Huawei has developed its own Ascend AI chips to reduce dependency on NVIDIA after US export restrictions. Tencent Cloud dominates gaming infrastructure in Asia.

Geopolitical Barriers

US export controls on advanced semiconductors, data localization requirements in China, and mutual distrust between governments make cross-border cloud adoption difficult. Western companies operating in China typically use a Chinese cloud provider for in-country workloads and a Western provider for everything else. Chinese companies face similar constraints expanding globally.

If you need cloud infrastructure in mainland China, Alibaba Cloud or Tencent Cloud are your best options. AWS and Azure have China regions, but they are operated by local partners (Sinnet and 21Vianet respectively) with separate accounts, billing, and service availability.

Developer-Friendly Alternatives

Not every workload needs a hyperscaler. For startups, indie developers, and cost-conscious teams, a growing ecosystem of developer-friendly cloud providers offers simpler pricing, better developer experience, and significant cost savings.

DigitalOcean

DigitalOcean generated $901 million in revenue in the trailing twelve months, proving there is a massive market for "cloud without the complexity." The platform is beloved by developers for its clean UI, predictable pricing, and excellent documentation.

The big news is the Agentic Inference Cloud, DigitalOcean's play for the AI workload market. It provides managed GPU instances and inference endpoints optimized for deploying AI agents, targeting the growing market of developers building AI-powered applications who do not want to navigate AWS Bedrock's complexity.

  • Droplets (VMs) start at $4/month with predictable pricing
  • App Platform for PaaS deployments (similar to Heroku)
  • Managed Kubernetes with free control plane
  • Managed databases (PostgreSQL, MySQL, Redis, MongoDB)
  • Flat-rate bandwidth included with every Droplet

Linode (Akamai Cloud)

Akamai acquired Linode in 2022 and is building a distributed cloud platform that combines Akamai's CDN edge network (4,200+ locations) with Linode's compute infrastructure. The result is a cloud platform where your compute runs close to your users by default.

  • Compute instances in 30+ global locations
  • Integrated CDN and DDoS protection from Akamai's network
  • Managed Kubernetes (LKE) with free control plane
  • Competitive pricing: 4GB instance at $24/month vs $35+ on AWS

Vultr

Vultr operates in 33 regions worldwide and positions itself as the high-performance alternative to hyperscalers at 50-90% lower cost. Bare-metal servers, cloud compute, managed Kubernetes, and GPU instances are all available with simple, transparent pricing.

  • Cloud compute from $2.50/month
  • Bare-metal servers for performance-sensitive workloads
  • NVIDIA GPU instances for AI/ML at fraction of hyperscaler cost
  • 33 locations across 6 continents
  • Free bandwidth allowance included with every instance

When to choose alternatives over hyperscalers: If your workload is straightforward (web apps, APIs, databases), you do not need 400+ managed services, and you want predictable pricing without surprise bills, these platforms deliver excellent value. You lose access to niche managed services but gain simplicity and cost savings.

European Sovereignty Clouds

Data sovereignty is no longer a niche concern. GDPR enforcement is intensifying, the EU Data Act is taking effect, and European governments are actively funding alternatives to American cloud providers. The sovereign cloud market is growing faster than the overall cloud market in Europe.

Hetzner

Hetzner is the open secret of the European tech scene. Based in Germany, Hetzner offers unbeatable pricing for dedicated servers and cloud instances. A dedicated server with 64GB RAM, 2x 512GB NVMe SSDs, and unlimited bandwidth costs around EUR 40/month. The equivalent on AWS would cost 5-10x more.

Hetzner's cloud instances (CX series) start at EUR 3.29/month for 2 vCPUs and 4GB RAM. The trade-off is fewer managed services (no managed Kubernetes, no serverless functions, limited managed databases), but for teams comfortable managing their own infrastructure, the cost savings are dramatic.

OVHcloud

OVHcloud is the largest European-born cloud provider, headquartered in France. The company won a landmark EUR 180 million contract to host the European Central Bank's digital euro infrastructure, a major validation of European sovereign cloud capabilities.

  • Data centers exclusively in Europe (France, Germany, UK, Poland)
  • GDPR-compliant by design with no US data transfer
  • Bare-metal, public cloud, private cloud, and hosted private cloud
  • Competitive pricing with predictable billing
  • S3-compatible object storage with included egress bandwidth

The Sovereign Cloud Trend

Multiple European governments are mandating that sensitive data stay on European-owned infrastructure. France's "Cloud de Confiance" certification, Germany's Gaia-X initiative, and the EU's European Alliance for Industrial Data, Edge and Cloud are all pushing toward European cloud sovereignty. AWS, Azure, and Google have responded with "sovereign cloud" offerings (dedicated regions operated by European partners), but purists argue these still ultimately answer to American parent companies subject to the US CLOUD Act.

Cloudflare - The Edge-First Cloud

Cloudflare is building the cloud platform that the hyperscalers should have built: edge-first, zero egress fees, and developer-friendly by default. What started as a CDN and DDoS protection service has evolved into a full-stack cloud platform with compute (Workers), storage (R2, KV, Durable Objects), databases (D1), AI inference (Workers AI), and queues.

AI workloads now account for 25-32% of Cloudflare's revenue, driven by companies using Workers AI for inference at the edge and R2 for storing AI training data without egress fees.

Key Services

Workers - Serverless functions that run on Cloudflare's edge network in 300+ cities worldwide. Cold starts are measured in single-digit milliseconds (compared to hundreds of milliseconds for Lambda). Workers use the V8 isolate model rather than containers, which means near-instant startup but a different programming model than traditional serverless.

R2 - S3-compatible object storage with zero egress fees. This is Cloudflare's most disruptive product. Store data in R2 and serve it globally without paying a cent for bandwidth. At 100TB of monthly egress, R2 saves $9,000/month compared to S3. We cover this in detail in the storage and egress section.

D1 - SQLite-based serverless database that runs at the edge. Not a replacement for PostgreSQL or MySQL for complex workloads, but excellent for read-heavy applications that benefit from global distribution.

Workers AI - Run AI inference on Cloudflare's edge network using open-source models (Llama, Mistral, Stable Diffusion). No GPU management, no cold starts, pay per request. Ideal for applications that need low-latency inference close to users.

Cloudflare's edge: The zero-egress model is not just a pricing gimmick. It fundamentally changes architecture decisions. When egress is free, you can serve assets directly from object storage without a CDN layer. You can replicate data across regions without worrying about transfer costs. You can build multi-region architectures that would be prohibitively expensive on hyperscalers.

Compute Pricing Comparison

One of the most surprising facts about cloud pricing in 2026: the major providers have largely converged on compute costs. A general-purpose 4-vCPU, 16GB RAM instance costs roughly the same everywhere. The real differences show up in commitment discounts, ARM processor availability, and the hidden costs (egress, cross-AZ traffic, managed service fees) that surround the compute.

On-Demand Pricing (4 vCPU, 16GB RAM, US Region)

| Provider | Instance Type | Hourly | Monthly (730h) | Notes |
| --- | --- | --- | --- | --- |
| AWS | m7i.xlarge | $0.2016 | $147 | x86 Intel |
| AWS | m7g.xlarge | $0.1632 | $119 | Graviton4 ARM |
| Azure | D4s v5 | $0.192 | $140 | x86 Intel |
| Azure | D4ps v5 | $0.154 | $112 | Ampere ARM |
| GCP | n2-standard-4 | $0.194 | $142 | x86 Intel |
| GCP | t2a-standard-4 | $0.155 | $113 | Axion ARM |
| OCI | VM.Standard3.Flex | $0.128 | $93 | x86 Intel |
| OCI | VM.Standard.A2.Flex | $0.10 | $73 | Ampere ARM |
| DigitalOcean | Premium 4vCPU/16GB | $0.131 | $96 | Intel/AMD |
| Vultr | Cloud Compute 4vCPU/16GB | $0.107 | $78 | AMD |
| Hetzner | CX41 | $0.024 | $17.49 | Shared vCPU |

The hyperscalers (AWS, Azure, GCP) are within 5% of each other on x86 pricing. The real savings come from three strategies: ARM processors, commitment models, and alternative providers.

ARM Processors: Graviton4 vs Ampere vs Axion

ARM-based instances are 19-25% cheaper than x86 equivalents across all three hyperscalers, and they deliver equal or better performance for most workloads. The ARM ecosystem has matured to the point where compatibility is rarely an issue.

| Processor | Provider | Savings vs x86 | Performance | Compatibility |
| --- | --- | --- | --- | --- |
| Graviton4 | AWS | 19-20% | 30% better price-perf | Excellent (Node, Python, Go, Java 17+, .NET 6+) |
| Ampere Altra | Azure, OCI | 20-22% | Comparable to x86 | Excellent (same as Graviton) |
| Axion | GCP | 20-25% | Google's custom ARM | Excellent (same as Graviton) |
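As a sanity check, the ARM discounts can be derived directly from the on-demand pricing table earlier in this section. The dictionary below simply restates those monthly figures:

```python
monthly_cost = {  # 4 vCPU / 16GB on-demand, US region, from the compute table
    "aws":   {"x86": 147, "arm": 119},  # m7i.xlarge vs m7g.xlarge
    "azure": {"x86": 140, "arm": 112},  # D4s v5 vs D4ps v5
    "gcp":   {"x86": 142, "arm": 113},  # n2-standard-4 vs t2a-standard-4
}

def arm_savings_pct(provider: str) -> float:
    """Percent saved by choosing the ARM instance over its x86 sibling."""
    c = monthly_cost[provider]
    return round(100 * (c["x86"] - c["arm"]) / c["x86"], 1)

for p in monthly_cost:
    print(p, arm_savings_pct(p))  # roughly 19-20% on each hyperscaler
```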

Commitment Models

Every hyperscaler offers discounts for committing to usage. The models differ significantly:

| Model | Provider | 1-Year Savings | 3-Year Savings | Flexibility |
| --- | --- | --- | --- | --- |
| Compute Savings Plans | AWS | 30-37% | 50-60% | Any instance family, region, OS |
| Reserved Instances | AWS | 30-40% | 55-65% | Locked to instance type + region |
| Reserved VM Instances | Azure | 30-40% | 55-65% | Locked to VM size + region |
| Committed Use Discounts | GCP | 28-37% | 52-60% | Locked to machine type + region |
| Sustained Use Discounts | GCP | Up to 30% | N/A | Automatic, no commitment |

GCP's Sustained Use Discounts are unique: you get automatic discounts (up to 30%) just for running instances consistently, with no upfront commitment. This makes GCP the most forgiving platform for teams that cannot accurately predict their usage.
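A useful rule of thumb for commitments: a discount of X% pays off once the instance actually runs more than (100 - X)% of the committed term, since below that point on-demand is cheaper overall. A quick sketch with a hypothetical helper, using the headline rates from the table above:

```python
def breakeven_utilization(discount_pct: float) -> float:
    """Fraction of the committed term an instance must actually run
    for the commitment to beat pure on-demand at the same hourly rate."""
    return 1 - discount_pct / 100

# AWS 1-year Compute Savings Plan at 37% off: pays for itself
# once the instance runs more than ~63% of the year
print(breakeven_utilization(37))
```

This is why savings plans suit steady baseline load, while bursty or experimental workloads are often better left on demand (or on GCP, where sustained-use discounts apply automatically).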

```hcl
# Terraform: deploy a Graviton4 (ARM64) instance on AWS
resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # Amazon Linux 2023 ARM64 (placeholder ID)
  instance_type = "m7g.xlarge"            # Graviton4, 4 vCPU, 16GB

  tags = {
    Name = "app-server"
    Arch = "arm64"
  }
}

# Terraform: deploy an Ampere ARM instance on Azure
# (admin_username, network_interface_ids, admin_ssh_key, and os_disk are
# required by the azurerm provider; the values here are illustrative)
resource "azurerm_linux_virtual_machine" "app" {
  name                  = "app-server"
  resource_group_name   = azurerm_resource_group.main.name
  location              = "eastus"
  size                  = "Standard_D4ps_v5" # Ampere ARM, 4 vCPU, 16GB
  admin_username        = "azureuser"
  network_interface_ids = [azurerm_network_interface.app.id]

  admin_ssh_key {
    username   = "azureuser"
    public_key = file("~/.ssh/id_ed25519.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "ubuntu-24_04-lts"
    sku       = "server-arm64"
    version   = "latest"
  }
}
```

Storage and Egress - The Hidden Cost

Storage pricing looks similar across providers until you factor in egress (outbound data transfer). Egress fees are the cloud's most controversial pricing mechanism and the single biggest reason companies consider multi-cloud or cloud repatriation. This section contains the most important pricing tables in this entire guide.

Object Storage Pricing

| Provider | Service | Storage (per GB/mo) | PUT (per 1K) | GET (per 1K) | Egress (per GB) |
| --- | --- | --- | --- | --- | --- |
| AWS | S3 Standard | $0.023 | $0.005 | $0.0004 | $0.09 |
| Azure | Blob Hot | $0.018 | $0.0065 | $0.0004 | $0.087 |
| GCP | Cloud Storage Standard | $0.020 | $0.005 | $0.0004 | $0.12 |
| Cloudflare | R2 | $0.015 | $0.0036 | Free | $0.00 |
| OCI | Object Storage | $0.0255 | Free | Free | $0.0085* |
| Backblaze | B2 | $0.006 | Free | $0.004/10K | $0.01 |

*OCI: First 10TB/month egress is free. Price shown is for usage beyond 10TB.

Egress Fee Comparison - The Real Cost

This is where the differences become dramatic. Here is what you pay to transfer data out of each provider:

| Provider | Free Tier | Price per GB | Cost at 10TB/mo | Cost at 100TB/mo |
| --- | --- | --- | --- | --- |
| Cloudflare R2 | Unlimited | $0.00 | $0 | $0 |
| OCI | 10TB/mo | $0.0085 | $0 | $765 |
| Backblaze B2 | 3x storage | $0.01 | $100 | $1,000 |
| Azure | 100GB | $0.087 | $870 | $8,700 |
| AWS | 100GB | $0.09 | $900 | $9,000 |
| GCP | 200GB | $0.12 | $1,200 | $12,000 |

The $9,000 question: At 100TB of monthly egress, AWS charges $9,000 while Cloudflare R2 charges $0. That is $108,000 per year in egress fees alone. If your workload is egress-heavy (media streaming, CDN origin, API responses, data distribution), the choice of storage provider can be the single largest line item on your cloud bill.

This is why Cloudflare R2 has been so disruptive. It is not just cheaper storage. It is a fundamentally different economic model that eliminates the "data gravity" lock-in that hyperscalers depend on. Once your data is in S3, the cost of moving it out (or serving it to users) creates a powerful incentive to keep everything on AWS. R2 removes that incentive entirely.
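The economics are easy to model. Here is a rough monthly cost sketch using the object storage table above (request charges omitted for simplicity; the function names are illustrative, not APIs):

```python
def s3_monthly(storage_gb: float, egress_gb: float) -> float:
    # S3 Standard: $0.023/GB stored, $0.09/GB egress after 100GB free
    return storage_gb * 0.023 + max(egress_gb - 100, 0) * 0.09

def r2_monthly(storage_gb: float, egress_gb: float) -> float:
    # R2: $0.015/GB stored, zero egress fees regardless of volume
    return storage_gb * 0.015

# 50TB stored, 100TB served per month (decimal units: 1TB = 1,000GB)
print(f"S3: ${s3_monthly(50_000, 100_000):,.0f}")  # storage plus egress
print(f"R2: ${r2_monthly(50_000, 100_000):,.0f}")  # storage only
```

For this serving-heavy profile, egress dominates the S3 bill by a wide margin, which is exactly the "data gravity" effect described above.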

Storage Tiering Strategies

Every provider offers cheaper storage tiers for infrequently accessed data:

| Tier | AWS | Azure | GCP | Use Case |
| --- | --- | --- | --- | --- |
| Hot/Standard | $0.023/GB | $0.018/GB | $0.020/GB | Frequently accessed data |
| Infrequent Access | $0.0125/GB | $0.010/GB | $0.010/GB | Monthly access patterns |
| Archive | $0.004/GB | $0.002/GB | $0.004/GB | Yearly access, compliance |
| Deep Archive | $0.00099/GB | $0.00099/GB | $0.0012/GB | Rarely accessed, 12h retrieval |

AWS S3 Intelligent-Tiering automatically moves objects between tiers based on access patterns, with no retrieval fees. If you are unsure which tier to use, Intelligent-Tiering is the safe default. See our AWS cost optimization guide for a detailed breakdown.
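
Whether an infrequent-access tier actually saves money depends on how much of the data you read back each month, because IA tiers add a per-GB retrieval fee (roughly $0.01/GB for S3 Standard-IA; treat that figure, and the omission of request fees and minimum storage duration, as assumptions to verify). A rough break-even sketch:

```python
# Rough S3 Standard vs Standard-IA comparison. Ignores request fees and
# the 30-day minimum storage duration; retrieval fee is an assumed $0.01/GB.
STANDARD_PER_GB = 0.023
IA_PER_GB = 0.0125
IA_RETRIEVAL_PER_GB = 0.01  # assumption; verify against current S3 pricing

def monthly_cost_standard(stored_gb: float) -> float:
    return stored_gb * STANDARD_PER_GB

def monthly_cost_ia(stored_gb: float, retrieved_gb: float) -> float:
    return stored_gb * IA_PER_GB + retrieved_gb * IA_RETRIEVAL_PER_GB

# Break-even: IA wins while monthly retrieval stays under ~105% of stored data
stored = 10_000  # 10 TB
for pct in (0.1, 0.5, 1.0, 1.5):
    ia = monthly_cost_ia(stored, stored * pct)
    std = monthly_cost_standard(stored)
    winner = "IA" if ia < std else "Standard"
    print(f"read {pct:.0%}/mo: IA ${ia:,.0f} vs Standard ${std:,.0f} -> {winner}")
```

Under these assumptions, IA is cheaper until you re-read more than about 105% of your stored data per month, which is why it fits "monthly access patterns" and not hot data.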

Serverless Comparison

Serverless computing eliminates server management entirely. You deploy code, the platform runs it, and you pay only for execution time. But the four major serverless platforms differ significantly in pricing, cold start behavior, execution limits, and programming models.

Serverless Pricing Comparison

| Feature | AWS Lambda | Azure Functions | GCP Cloud Run | Cloudflare Workers |
| --- | --- | --- | --- | --- |
| Free Tier | 1M requests + 400K GB-s/mo | 1M requests + 400K GB-s/mo | 2M requests + 180K vCPU-s/mo | 100K requests/day |
| Price per 1M requests | $0.20 | $0.20 | $0.40 | $0.30 |
| Price per GB-s | $0.0000166 | $0.000016 | $0.0000240 | Included |
| Max Execution Time | 15 min | 10 min (Consumption) | 60 min | 30s (free) / 15 min (paid) |
| Max Memory | 10 GB | 1.5 GB (Consumption) | 32 GB | 128 MB |
| Cold Start (typical) | 100-500ms | 200-1000ms | 0-300ms (min instances) | 0-5ms |
| Container Support | Yes (10GB images) | Yes | Yes (native) | No (V8 isolates) |
| Languages | Node, Python, Java, Go, .NET, Ruby, Rust | Node, Python, Java, C#, PowerShell | Any (container) | JS/TS, Python, Rust (WASM) |
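
To make the rate differences concrete, here is a sketch of what a hypothetical workload (10 million invocations at 512 MB and 200 ms each, an example of our choosing) costs at the table's per-request and per-GB-second rates, free tiers ignored. Cloud Run is omitted because it bills vCPU-seconds and memory separately rather than GB-seconds:

```python
# Cost of N invocations at a given memory size and duration, using the
# per-request and per-GB-second rates from the table (free tiers ignored).
def invocation_cost(requests: int, per_million_req: float,
                    per_gb_second: float, memory_gb: float,
                    duration_s: float) -> float:
    request_cost = requests / 1_000_000 * per_million_req
    compute_cost = requests * memory_gb * duration_s * per_gb_second
    return request_cost + compute_cost

N = 10_000_000  # 10M invocations, 512 MB, 200 ms each (illustrative workload)
lambda_cost = invocation_cost(N, 0.20, 0.0000166, 0.5, 0.2)
azure_cost = invocation_cost(N, 0.20, 0.000016, 0.5, 0.2)
workers_cost = invocation_cost(N, 0.30, 0.0, 0.5, 0.2)  # duration included

print(f"Lambda:  ${lambda_cost:.2f}")
print(f"Azure:   ${azure_cost:.2f}")
print(f"Workers: ${workers_cost:.2f}")
```

At this profile, Workers' flat per-request pricing undercuts the GB-second platforms by roughly 6x, which matters for short, memory-light handlers.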

Cold Start Reality

Cold starts remain the biggest serverless pain point. Here is what to expect in practice:

  • Cloudflare Workers: 0-5ms. V8 isolates start almost instantly. This is the clear winner for latency-sensitive workloads.
  • GCP Cloud Run: 0ms with minimum instances configured (you pay for idle time). Without min instances, 100-300ms for container startup.
  • AWS Lambda: 100-500ms for most runtimes. SnapStart reduces Java cold starts to under 200ms. Provisioned concurrency eliminates cold starts entirely (at a cost).
  • Azure Functions: 200-1000ms on the Consumption plan. Premium plan with pre-warmed instances reduces this to near-zero.

Cloud Run's advantage: Cloud Run is the most flexible serverless platform because it runs standard containers. Any language, any framework, any binary. You are not locked into a specific runtime or SDK. If you can put it in a Docker container, Cloud Run can run it. This makes it the easiest migration path from traditional server-based architectures to serverless.

# Deploy a container to Cloud Run in one command
gcloud run deploy my-service \
  --image gcr.io/my-project/my-app:latest \
  --region us-central1 \
  --allow-unauthenticated \
  --min-instances 1 \
  --max-instances 100 \
  --memory 512Mi \
  --cpu 1

Kubernetes Comparison

Kubernetes has become the default container orchestration platform, but the managed Kubernetes offerings from each cloud provider differ significantly in pricing, features, and operational overhead.

| Feature | AWS EKS | Azure AKS | GCP GKE | OCI OKE |
| --- | --- | --- | --- | --- |
| Control Plane Cost | $73/mo | Free | Free (Autopilot) / $73 (Standard) | Free |
| Auto-scaling | Karpenter | KEDA + Cluster Autoscaler | Autopilot (fully managed) | Cluster Autoscaler |
| Max Nodes | 5,000 | 5,000 | 15,000 | 1,000 |
| GPU Support | Yes (NVIDIA, Trainium) | Yes (NVIDIA) | Yes (NVIDIA, TPU) | Yes (NVIDIA) |
| Service Mesh | App Mesh / Istio | Istio-based | Anthos Service Mesh | Istio |
| Multi-cluster | Manual / Rancher | Azure Arc | GKE Enterprise (fleet) | Manual |
| Serverless Pods | Fargate | Virtual Nodes (ACI) | Autopilot | Virtual Nodes |
| Uptime SLA | 99.95% | 99.95% | 99.95% | 99.95% |

The Cost Difference is Real

EKS charges $73/month per cluster just for the control plane. If you run 5 clusters (dev, staging, prod, data, ML), that is $365/month before a single pod runs. AKS, GKE Autopilot, and OKE all offer free control planes. Over a year, the EKS control plane tax on 5 clusters is $4,380.

That said, EKS has the largest ecosystem of third-party tools, the most documentation, and the deepest integration with AWS services (ALB Ingress Controller, IAM Roles for Service Accounts, EBS CSI driver). If you are already on AWS, the $73/month is a minor cost relative to your total compute spend.

GKE Autopilot - The Future of Managed Kubernetes

GKE Autopilot is the most opinionated and the most hands-off managed Kubernetes offering. Google manages the nodes, the node pools, the scaling, and the security hardening. You just deploy pods. You pay per pod resource request rather than per node, which means no wasted capacity from over-provisioned nodes.

The trade-off is less control. You cannot SSH into nodes, run privileged containers, or use DaemonSets (Autopilot manages system-level concerns). For most application workloads, these restrictions do not matter. For specialized workloads (GPU training, custom kernel modules, host networking), Standard mode gives you full control.

# Create a GKE Autopilot cluster
gcloud container clusters create-auto my-cluster \
  --region us-central1 \
  --release-channel regular

# Create an EKS cluster with eksctl
eksctl create cluster \
  --name my-cluster \
  --region us-east-1 \
  --nodegroup-name workers \
  --node-type m7g.large \
  --nodes 3 \
  --managed
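
Autopilot's per-pod billing model described above can be sketched with some arithmetic. All rates below are illustrative assumptions, not quoted list prices: roughly $0.0445/vCPU-hour and $0.0049/GB-hour for Autopilot pod requests, and $0.15/hour for a 4 vCPU / 16 GB Standard-mode node. The point is the shape of the comparison, not the exact figures:

```python
# Autopilot bills the sum of pod resource requests; Standard bills whole
# nodes whether pods fill them or not. All rates are assumptions.
AUTOPILOT_VCPU_HR = 0.0445   # $/vCPU-hour requested (assumed)
AUTOPILOT_GB_HR = 0.0049     # $/GB-hour requested (assumed)
NODE_HR = 0.15               # 4 vCPU / 16 GB node (assumed)
HOURS = 730                  # one month

def autopilot_monthly(pods: list) -> float:
    """pods = [(vcpu_request, memory_gb_request), ...]"""
    return sum(v * AUTOPILOT_VCPU_HR + m * AUTOPILOT_GB_HR for v, m in pods) * HOURS

def standard_monthly(nodes: int) -> float:
    return nodes * NODE_HR * HOURS

# 20 small pods (250m CPU, 512Mi each) spread loosely across 3 nodes
pods = [(0.25, 0.5)] * 20
print(f"Autopilot (pay per request): ${autopilot_monthly(pods):,.2f}")
print(f"Standard (3 whole nodes):    ${standard_monthly(3):,.2f}")
```

When pods pack nodes loosely, per-request billing wins; when you can bin-pack nodes to high utilization, Standard mode narrows or reverses the gap.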

Database Comparison

Managed databases are one of the strongest reasons to use a cloud provider. The operational burden of running production databases (backups, patching, failover, scaling, monitoring) is significant, and managed services handle all of it. Here is how the major offerings compare.

Relational Database Comparison

| Feature | AWS RDS/Aurora | Azure SQL | GCP Cloud SQL | GCP AlloyDB | OCI Autonomous DB |
| --- | --- | --- | --- | --- | --- |
| Engines | MySQL, PostgreSQL, Oracle, SQL Server, MariaDB | SQL Server, MySQL, PostgreSQL | MySQL, PostgreSQL, SQL Server | PostgreSQL | Oracle |
| Serverless | Aurora Serverless v2 | Azure SQL Serverless | No | No | Yes (always) |
| Auto-scaling | Aurora: read replicas + serverless | Elastic pools, Hyperscale | Read replicas | Columnar engine auto-scales | Fully automatic |
| Max Storage | 128 TB (Aurora) | 100 TB (Hyperscale) | 64 TB | 64 TB | Unlimited |
| Multi-region | Aurora Global Database | Active geo-replication | Cross-region replicas | Cross-region replicas | Autonomous Data Guard |
| Starting Price | ~$29/mo (db.t4g.micro) | ~$5/mo (Basic) | ~$7/mo (db-f1-micro) | ~$100/mo (2 vCPU) | Free tier available |

Aurora vs AlloyDB vs Autonomous DB

Aurora is the default choice for most AWS workloads. It is MySQL and PostgreSQL compatible, scales to 128TB, supports serverless auto-scaling, and offers global databases for multi-region deployments. Aurora I/O-Optimized eliminates per-I/O charges for write-heavy workloads.

AlloyDB is Google's PostgreSQL-compatible database that uses a columnar engine for analytics queries. It delivers up to 4x faster analytical queries and 4x faster transactional throughput compared to standard PostgreSQL. If you run mixed OLTP/OLAP workloads on PostgreSQL, AlloyDB is worth evaluating.

Oracle Autonomous Database is fully self-managing: it patches itself, tunes itself, scales itself, and backs itself up. The Always Free tier includes one Autonomous Database instance with 20GB storage. For Oracle Database workloads, the multicloud deployment options (Oracle DB@AWS, @Azure, @GCP) let you run Oracle's engine inside your preferred cloud.

Cost tip: Azure SQL Database Basic tier starts at $5/month, making it the cheapest managed relational database from a hyperscaler. For development environments and low-traffic applications, this is hard to beat. GCP's db-f1-micro at $7/month is the next cheapest option.

AI/ML Platform Comparison

AI/ML is the fastest-growing segment of cloud spending, and every provider is racing to offer the best platform for training, fine-tuning, and deploying models. The three hyperscaler platforms (Bedrock, Azure OpenAI, Vertex AI) take fundamentally different approaches.

Platform Comparison

| Feature | AWS Bedrock | Azure OpenAI | GCP Vertex AI |
| --- | --- | --- | --- |
| Approach | Multi-model marketplace | OpenAI models + partners | Google models + open source |
| Top Models | Claude, Llama, Mistral, Nova | GPT-5.5, GPT-4o, Phi-4 | Gemini 2.5, Gemma 3 |
| Custom Training | Fine-tuning, continued pre-training | Fine-tuning | Full training + fine-tuning |
| Custom Silicon | Trainium2, Inferentia2 | Maia 100 (preview) | TPU v5p, v6e |
| RAG Support | Knowledge Bases for Bedrock | Azure AI Search + OpenAI | Vertex AI Search |
| Agent Framework | Bedrock Agents | Azure AI Agent Service | Vertex AI Agent Builder |
| Guardrails | Bedrock Guardrails | Content Safety API | Vertex AI Safety |

Token Pricing Comparison (per 1M tokens)

Model pricing changes frequently, but here is a snapshot of the major models available across platforms as of May 2026:

| Model | Platform | Input (per 1M) | Output (per 1M) | Best For |
| --- | --- | --- | --- | --- |
| GPT-5.5 | Azure OpenAI | $5.00 | $15.00 | Complex reasoning, code generation |
| Claude Opus 4.7 | AWS Bedrock | $15.00 | $75.00 | Long-form analysis, nuanced writing |
| Gemini 2.5 Pro | Vertex AI | $1.25 | $5.00 | Multimodal, long context (1M tokens) |
| Claude Sonnet 4 | AWS Bedrock | $3.00 | $15.00 | Balanced performance and cost |
| GPT-4o | Azure OpenAI | $2.50 | $10.00 | General purpose, multimodal |
| Llama 4 Maverick | AWS Bedrock | $0.20 | $0.60 | Cost-effective open source |
| Amazon Nova Micro | AWS Bedrock | $0.035 | $0.14 | Ultra-low cost, simple tasks |
| DeepSeek V3 | Various | $0.27 | $1.10 | Cost-effective reasoning |
| Gemini 2.5 Flash | Vertex AI | $0.15 | $0.60 | Fast, cheap, good enough |

Pricing changes fast. Model pricing drops 30-50% every 6 months as competition intensifies. The prices above are accurate as of May 2026 but will likely be lower by the time you read this. Always check the provider's current pricing page before making decisions.
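
Token math is simple but worth automating, because the same workload can differ by two orders of magnitude across models. A sketch using a subset of the snapshot prices above, applied to an example workload of our choosing (500M input and 100M output tokens per month):

```python
# Cost of a monthly LLM workload at the snapshot prices above (USD per 1M tokens).
PRICES = {  # model: (input_per_1m, output_per_1m)
    "GPT-5.5": (5.00, 15.00),
    "Claude Opus 4.7": (15.00, 75.00),
    "Gemini 2.5 Pro": (1.25, 5.00),
    "Gemini 2.5 Flash": (0.15, 0.60),
    "Amazon Nova Micro": (0.035, 0.14),
}

def monthly_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    inp, out = PRICES[model]
    return input_tokens / 1e6 * inp + output_tokens / 1e6 * out

# Example workload: 500M input + 100M output tokens per month
for model in PRICES:
    print(f"{model:18s} ${monthly_cost(model, 500_000_000, 100_000_000):>10,.2f}")
```

At this volume the spread runs from roughly $30/month (Nova Micro) to $15,000/month (Opus), which is why routing simple tasks to cheap models is the highest-leverage inference optimization.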

Which Platform to Choose

  • AWS Bedrock if you want model flexibility. Access Claude, Llama, Mistral, and Nova through one API. Switch models without changing code. Best for teams that want to avoid model vendor lock-in.
  • Azure OpenAI if you need GPT models with enterprise compliance. The only way to use GPT-5.5 with SOC 2, HIPAA, and FedRAMP compliance. Best for regulated industries already on Azure.
  • Vertex AI if you want the best price-performance. Gemini 2.5 Pro offers GPT-5.5-class performance at 75% lower cost. TPUs provide the cheapest training infrastructure. Best for cost-conscious teams and heavy ML workloads.

Migration Strategies

The 6 Rs framework remains the standard approach for cloud migration planning. Each strategy has different cost, timeline, and risk profiles. The right choice depends on the application, not a blanket policy.

| Strategy | Description | Cost | Timeline | Risk | Best For |
| --- | --- | --- | --- | --- | --- |
| Rehost (Lift and Shift) | Move as-is to cloud VMs | Low | 1-3 months | Low | Quick wins, legacy apps |
| Replatform (Lift and Optimize) | Minor changes (e.g., swap to managed DB) | Low-Medium | 2-6 months | Low-Medium | Database migrations, container adoption |
| Refactor (Re-architect) | Rebuild for cloud-native | High | 6-18 months | Medium-High | Core business apps, scaling needs |
| Repurchase | Switch to SaaS (e.g., on-prem CRM to Salesforce) | Medium | 1-6 months | Medium | Commodity software |
| Retain | Keep on-premises | None | N/A | None | Regulated data, low-latency needs |
| Retire | Decommission | Negative (saves money) | 1-2 months | Low | Unused or redundant apps |

Migration Cost Estimates

Real-world migration costs vary enormously, but here are typical ranges based on application complexity:

| Application Type | Rehost Cost | Replatform Cost | Refactor Cost |
| --- | --- | --- | --- |
| Simple web app (1-2 servers) | $5K-$15K | $10K-$30K | $50K-$150K |
| Mid-size app (5-10 servers, DB) | $20K-$50K | $50K-$100K | $200K-$500K |
| Enterprise app (50+ servers, complex) | $100K-$300K | $200K-$500K | $500K-$2M |
| Data warehouse (petabyte-scale) | $50K-$200K | $100K-$500K | $300K-$1M |

These costs include engineering time, tooling, testing, and cutover. They do not include the ongoing cloud infrastructure costs. The most common mistake is underestimating the refactor timeline. A "6-month refactor" almost always takes 12-18 months in practice.

Start with rehost, optimize later. The fastest path to cloud value is rehosting everything first, then optimizing the workloads that matter most. Trying to refactor during migration doubles the timeline and risk. Get to cloud first, then iterate. This is the approach AWS, Azure, and GCP all recommend in their migration frameworks.

# Terraform: Multi-cloud provider configuration
# Use this pattern to manage resources across providers

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "~> 6.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

provider "azurerm" {
  features {}
  subscription_id = var.azure_subscription_id
}

provider "google" {
  project = var.gcp_project_id
  region  = "us-central1"
}

For a complete guide to managing infrastructure across multiple clouds, see our Terraform multi-cloud guide.

Multi-Cloud Patterns

93% of enterprises use multiple cloud providers. But most do it by accident (acquisitions, team preferences, shadow IT) rather than by design. Intentional multi-cloud is expensive and complex. Here is when it makes sense and when it does not.

When Multi-Cloud Makes Sense

  • Best-of-breed services. Run your AI workloads on GCP (Vertex AI, TPUs), your enterprise apps on Azure (Active Directory, Microsoft 365 integration), and your serverless APIs on AWS (Lambda, DynamoDB). Each cloud has genuine strengths that the others cannot match.
  • Regulatory compliance. Some regulations require data to be stored in specific jurisdictions or on specific provider types. European data sovereignty requirements may mandate a European cloud provider for certain workloads while you use a hyperscaler for others.
  • Vendor negotiation leverage. Running workloads on two providers gives you credible leverage in pricing negotiations. "We can move this workload to Azure" is more convincing when you already have Azure infrastructure running.
  • Disaster recovery. For truly critical workloads where a single provider outage is unacceptable, multi-cloud DR provides the highest level of resilience. This is expensive and complex but necessary for some financial services and healthcare workloads.
  • M&A integration. When you acquire a company running on a different cloud, forcing an immediate migration is risky and expensive. Multi-cloud lets you integrate gradually.

When Multi-Cloud Does Not Make Sense

  • Avoiding lock-in for its own sake. The cost of running on two clouds (duplicate tooling, duplicate expertise, duplicate networking) almost always exceeds the cost of any theoretical lock-in. Use cloud-agnostic tools (Terraform, Kubernetes, PostgreSQL) within a single cloud instead.
  • Startups and small teams. You do not have the engineering bandwidth to operate two clouds well. Pick one and go deep. You can always migrate later if needed.
  • Identical workloads on multiple clouds. Running the same application on AWS and Azure "for redundancy" doubles your operational cost without proportional benefit. Use multi-region within a single cloud instead.

The hidden cost of multi-cloud: Cross-cloud data transfer is expensive ($0.02-0.09/GB depending on the path), and every team member needs expertise in multiple platforms. A 10-person engineering team running on two clouds effectively has 5 people per cloud. That is rarely enough to operate either one well.
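
The cross-cloud transfer tax compounds quickly with volume. A quick sketch using the $0.02-$0.09/GB range quoted above (actual rates depend on regions, paths, and any interconnect discounts):

```python
# Cross-cloud data transfer cost for a monthly sync volume, using the
# $0.02-$0.09/GB range quoted above. Actual rates vary by region and path.
def cross_cloud_cost(gb_per_month: float, per_gb: float) -> float:
    return gb_per_month * per_gb

for tb in (1, 10, 50):
    lo = cross_cloud_cost(tb * 1_000, 0.02)
    hi = cross_cloud_cost(tb * 1_000, 0.09)
    print(f"{tb:>3} TB/mo cross-cloud: ${lo:,.0f} - ${hi:,.0f}")
```

Keeping chatty services on the same provider, and syncing only aggregates across clouds, is the usual way teams keep this line item bounded.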

Cloud Repatriation

Cloud repatriation (moving workloads from public cloud back to on-premises or colocation) is no longer a fringe movement. 86% of CIOs report planning some form of cloud repatriation in 2026. The economics have shifted: cloud costs have increased, on-premises hardware has gotten cheaper and more efficient, and many companies have realized that not every workload benefits from cloud elasticity.

The 37signals Case Study

The most famous repatriation story is 37signals (makers of Basecamp and HEY). They moved off AWS in 2023-2024 and documented the results publicly. The numbers are compelling: $7 million saved over 5 years by purchasing their own servers and colocating them. Their cloud bill was $3.2 million/year. Their on-premises infrastructure costs (hardware amortized over 5 years + colocation + staff) came to $1.8 million/year.

But 37signals is a specific case: predictable traffic, no global distribution requirements, experienced ops team, and workloads that do not benefit from cloud-managed services. Not every company fits this profile.
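
The 37signals numbers reduce to a simple break-even model. A sketch using only the figures quoted above (real repatriation math also needs hardware refresh cycles, staffing deltas, and migration cost, so treat this as illustrative):

```python
# Repatriation savings model using the 37signals figures quoted above.
CLOUD_ANNUAL = 3_200_000   # prior AWS bill per year
ONPREM_ANNUAL = 1_800_000  # amortized hardware + colocation + staff per year

def savings_over(years: int) -> int:
    return (CLOUD_ANNUAL - ONPREM_ANNUAL) * years

print(f"Annual savings: ${CLOUD_ANNUAL - ONPREM_ANNUAL:,}")
print(f"5-year savings: ${savings_over(5):,}")  # the $7 million figure
```

Plug in your own annual run rates; if the delta is small or negative, or your workload profile matches the "stays in the cloud" list below, repatriation is not worth the migration risk.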

What Stays in the Cloud

  • Bursty workloads - If your traffic spikes 10x during events, buying hardware for peak capacity is wasteful
  • AI/ML training - GPU clusters are expensive to own and depreciate quickly as new chips launch
  • Global distribution - Serving users on 6 continents requires cloud regions, not a single data center
  • Managed services - DynamoDB, BigQuery, Aurora Serverless, and similar services have no on-premises equivalent
  • Startups - You need to move fast and cannot afford a 6-month hardware procurement cycle

What Goes Back On-Premises

  • Steady-state compute - Servers running at 60-80% utilization 24/7 are cheaper to own
  • Large data stores - Petabytes of data with heavy egress are dramatically cheaper on-premises
  • Compliance-sensitive workloads - Some regulations are easier to satisfy with physical control
  • Predictable databases - A PostgreSQL server that has not changed size in 2 years does not need cloud elasticity

The hybrid answer: Most companies will end up with a hybrid approach. Repatriate the commodity compute and large data stores where cloud adds no value. Keep the cloud for elastic workloads, managed AI/ML services, global distribution, and rapid experimentation. The goal is not "cloud vs. on-prem" but "right workload, right platform."

How to Choose Your Cloud Provider

After 8,000+ words of comparisons, here is the decision framework. Your choice depends on three factors: company size, primary workload type, and budget constraints.

By Company Size

| Company Size | Recommended | Why |
| --- | --- | --- |
| Solo developer / Side project | Cloudflare (Workers + R2 + D1) or OCI Free Tier | Generous free tiers, zero egress, simple pricing |
| Startup (1-20 people) | AWS or GCP | AWS for breadth and hiring pool. GCP for AI-first startups and better free credits ($300K via startup programs) |
| Mid-size (20-500 people) | AWS, Azure, or GCP based on existing stack | If Microsoft shop, Azure. If data/ML heavy, GCP. Default to AWS for broadest ecosystem |
| Enterprise (500+ people) | Multi-cloud (primary + secondary) | Negotiation leverage, best-of-breed, regulatory compliance across regions |

By Workload Type

| Workload | Best Provider | Why |
| --- | --- | --- |
| AI/ML Training | GCP (TPUs) or OCI (cheap GPUs) | TPUs for Google models, OCI 58% cheaper for NVIDIA GPUs |
| AI Inference | AWS Bedrock or Cloudflare Workers AI | Bedrock for model variety, Workers AI for edge latency |
| Web Applications | AWS, Cloudflare, or DigitalOcean | AWS for complex apps, Cloudflare for edge-first, DO for simplicity |
| Data Analytics | GCP (BigQuery) | BigQuery is the best serverless data warehouse, period |
| Enterprise SaaS | Azure | Active Directory integration, compliance certifications, EA discounts |
| Kubernetes | GCP (GKE) or AWS (EKS) | GKE for best K8s experience, EKS for AWS ecosystem integration |
| Serverless | AWS (Lambda) or Cloudflare (Workers) | Lambda for ecosystem, Workers for cold start performance |
| Oracle Database | OCI or Oracle DB@AWS/Azure/GCP | Best Oracle performance and pricing, multicloud flexibility |
| Regulated Industries | Azure or IBM Cloud | Azure for FedRAMP/HIPAA, IBM for mainframe hybrid |
| Media/CDN | Cloudflare or Akamai/Linode | Zero egress (Cloudflare) or integrated CDN (Akamai) |

By Budget

| Monthly Budget | Recommended | Strategy |
| --- | --- | --- |
| $0 (free tier) | OCI Always Free or Cloudflare Free | OCI: 4 ARM cores + 24GB RAM free forever. Cloudflare: 100K Workers requests/day free |
| $5-$50 | Hetzner, Vultr, or DigitalOcean | Best price-performance for simple workloads |
| $50-$500 | AWS, GCP, or DigitalOcean | Startup credits available. Use ARM instances and serverless to stretch budget |
| $500-$5,000 | AWS or GCP with Savings Plans | Commit to 1-year plans for 30-40% savings. Use Graviton/Axion ARM instances |
| $5,000-$50,000 | Negotiate with your provider | At this spend level, you qualify for enterprise discounts. Play providers against each other |
| $50,000+ | Multi-cloud + repatriation analysis | Evaluate which workloads should stay in cloud vs. move to colo. Negotiate aggressively |

The Decision Checklist

Before committing to a provider, answer these questions:

  1. What is your team's existing expertise? The cloud your team already knows is usually the right choice. Retraining costs are real.
  2. What managed services do you actually need? If the answer is "VMs and a database," you do not need a hyperscaler. If the answer includes DynamoDB, BigQuery, or Bedrock, you need the specific provider that offers it.
  3. How much data egress do you have? If the answer is "a lot," Cloudflare R2 or OCI should be in the conversation.
  4. What are your compliance requirements? FedRAMP, HIPAA, SOC 2, GDPR, and data sovereignty requirements narrow your options significantly.
  5. What is your growth trajectory? If you expect 10x growth in 2 years, cloud elasticity is valuable. If your workload is stable, on-premises may be cheaper.
The bottom line: There is no single "best" cloud provider. AWS has the broadest catalog. Azure has the best enterprise integration. GCP has the best AI/ML platform. OCI has the best database and egress pricing. Cloudflare has the best edge platform. The right choice depends on your specific workload, team, and budget. Use this guide's comparison tables to make a data-driven decision rather than defaulting to whatever your last company used.