Splitting central from regional without duplicating everything.
Multi-region isn't "one stack, copy-pasted N times." Three layers, each with a clear job, and a single rule about who depends on whom.
The naive view of multi-region is to copy your stack N times and load-balance. It works for stateless apps with no shared identity. It fails the moment you have users that exist across regions, billing state that needs a single source of truth, or a brand catalog that's the same everywhere.
The split that holds up is three layers, not one stack repeated.
The three layers
Global, exists once, no concept of region. Things like the DNS zone, container registries, the OIDC provider for CI deploys, IAM roles shared across the fleet, the wildcard TLS cert for the main domain. These resources have no regional flavor; they're the connective tissue of the account itself.
Central, exists once, in the primary region. The control plane. Authentication, billing, registry data that needs a single source of truth. Anything where "two of these is a contradiction" lives here. Users, sessions, the subscription state, the registry of which regions a tenant lives in.
Regional, exists per region, replicated by composition. The data plane. App servers, regional database, storage bucket, queues, CDN distribution. Each region has its own version of all of these, isolated from the others. A user's content lives in their region; it doesn't fan out.
The rule that makes this work: regional code never reaches across to another region's regional resources. It can talk up to central. Central can talk down to any region. But region-A's app never reads region-B's database directly.
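One way to make that rule mechanical rather than aspirational is to scope each region's IAM role to its own region's resources, so a cross-region read fails at the credential fetch instead of relying on code review. A minimal Terraform sketch, where var.region, var.account_id, and the "app-data" secret prefix are illustrative assumptions, not names from the original:

```hcl
# Hypothetical policy for a region's app role: it may read database
# credentials only for its own region. var.region and var.account_id
# are module inputs; the secret naming convention is an assumption.
data "aws_iam_policy_document" "regional_app" {
  statement {
    effect  = "Allow"
    actions = ["secretsmanager:GetSecretValue"]
    resources = [
      "arn:aws:secretsmanager:${var.region}:${var.account_id}:secret:app-data-*"
    ]
  }
}
```

Region-B's role gets the same document with a different var.region, so region-A's app can't even fetch the credentials for region-B's database.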
Why this beats "copy-paste the stack"
Start with the cost. If you make every resource regional, you provision N copies of identity and billing, and the moment a user signs up in region A and pays in region B, you have a coordination problem you didn't ask for. Two billing tables mean two truths about what they owe. Pull authentication and billing out into central and the question disappears: there's one place to ask "who is this user," and it lives in one region with one schema.
Then there's data residency. Some regions have legal requirements about where customer data physically sits. Regional buckets and regional databases let you say "EU customer data lives in the EU" and prove it with the IAM and the topology. If everything is central, that's harder: you'd have to partition rows inside one DB by region tag and trust the predicate. Compliance reviewers don't love trust-the-predicate.
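In Terraform terms, residency falls out of which provider alias creates the resource. A sketch, with an illustrative bucket name:

```hcl
# The EU content bucket is created through an EU-pinned provider
# alias, so the objects physically sit in eu-west-1.
provider "aws" {
  alias  = "eu"
  region = "eu-west-1"
}

resource "aws_s3_bucket" "eu_content" {
  provider = aws.eu
  bucket   = "example-app-content-eu" # placeholder name
}
```

The reviewer reads the region off the topology instead of auditing a WHERE clause.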
And blast radius. A central outage stops cross-region operations: sign-up, payment, brand creation. That's bad. A regional outage stops only that region. That's worse for that region's users, but it doesn't take the rest of the world with it. The split bounds the worst case.
Concretely, how the regional database works
Same database name in every regional cluster, different physical clusters, different data. Region-A's app_data cluster has region-A's tables; region-B's app_data cluster has region-B's. The application code connects via a DATABASE_URL_REGIONAL environment variable that resolves at boot; the binary is identical across regions, only the env points somewhere else.
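One way to wire that up is a per-region SSM parameter under the same path shape, so the identical binary resolves its own region's URL at startup. A sketch; the parameter path and variables are assumptions, not from the original:

```hcl
# Each regional composition publishes its own connection string.
# The app reads /app/<region>/DATABASE_URL_REGIONAL at boot; only
# the value differs between regions.
resource "aws_ssm_parameter" "database_url_regional" {
  name  = "/app/${var.region}/DATABASE_URL_REGIONAL"
  type  = "SecureString"
  value = var.regional_database_url # e.g. postgres://...:5432/app_data
}
```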
For a transitional period, the regional database might live as a logical database inside the central cluster. The app doesn't know. Same name, same schema, same connection string shape. When traffic justifies a dedicated regional cluster, you provision it, change the env var, and the application binary doesn't change.
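A sketch of that transitional wiring, with hostname variables that are assumptions: a flag decides which cluster the URL points at, and nothing else about the string changes.

```hcl
locals {
  # Same database name, same URL shape; only the host flips when the
  # region graduates from a logical DB in central to its own cluster.
  db_host = var.has_dedicated_cluster ? var.regional_cluster_host : var.central_cluster_host

  regional_database_url = "postgres://app@${local.db_host}:5432/app_data"
}
```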
That's the leverage. The split lives in the topology and the IAM, not in the application code. Splitting later doesn't require a refactor.
What this costs
You write Terraform compositions that instantiate the same regional module multiple times, each pointing at a different AWS provider alias. The module itself doesn't care which region it's in; region is an input variable. The composition decides.
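A minimal sketch of such a composition; the module path and alias names are illustrative:

```hcl
provider "aws" {
  alias  = "us"
  region = "us-east-1"
}

# The regional module takes the region as a plain input and creates
# all its AWS resources through whichever provider the composition binds.
module "regional_us" {
  source    = "./modules/regional"
  region    = "us-east-1"
  providers = { aws = aws.us }
}
```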
You think harder up front about what's central and what's regional. Some calls are obvious (auth = central, media = regional). Some are not (analytics? caching? tagging?). The default for ambiguous cases is central: moving from regional to central later is harder than the reverse, because regional means data has region affinity, and unifying region-affined data into a central table is destructive in a way that regionalizing central data is not.
You accept that central is a single point of failure for cross-region operations. You either run it at high availability (multi-AZ within the primary region, automatic failover) or you accept that some product flows pause briefly during central incidents.
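For the high-availability option, a sketch of what multi-AZ looks like on an RDS-backed central database; the sizing and credentials are placeholders:

```hcl
resource "aws_db_instance" "central" {
  identifier        = "central"
  engine            = "postgres"
  instance_class    = "db.r6g.large" # placeholder sizing
  allocated_storage = 100
  multi_az          = true # synchronous standby in a second AZ, automatic failover
  username          = "app"
  password          = var.central_db_password
}
```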
What it gives you
A multi-region deployment that adds regions cheaply. Adding eu is a Terraform composition entry, an SSM parameter set, a CDN distribution per the regional pattern. Not a fork of the entire stack with all its variables to duplicate.
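Continuing the composition sketch from earlier, the eu entry is one more provider alias and one more module block, with the same illustrative names:

```hcl
provider "aws" {
  alias  = "eu"
  region = "eu-west-1"
}

module "regional_eu" {
  source    = "./modules/regional"
  region    = "eu-west-1"
  providers = { aws = aws.eu }
}
```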
A clear answer to "where does this data live." If it's central, in the primary region. If it's regional, in the customer's region. If it's global, it's the same everywhere.
A control plane that's not gated by regional load. Identity stays fast even if a regional database is under pressure. Billing keeps reconciling.
When this is overkill
Single-region products. If you're not multi-region, you have one regional layer and nothing to split. Don't pre-build the central/regional split for a future you might never need. The split is worth it when your topology is forcing the question. Until then, monolith. The migration is finite when the question lands; the maintenance cost of a premature split is forever.
What survives the migration
When you do split, whether you're going from one region to two, or from a monolith to layers, the move that matters is naming the boundary. "This data is regional. This data is central. This data is global." Once that's named, the rest of the work is plumbing.
The mistake I see most often is teams who multi-region the topology but not the schema. They run two regional databases that contain all the data, including the global parts. Then they spend a year reconciling between them. The split has to land at the data layer, not just the deployment layer.
Pick the boundary, encode it in the schema, and let the topology follow.