VPC Network Design Patterns
Design GCP VPC networks with Shared VPC, peering, and Private Google Access patterns.
Prerequisites
- GCP project with networking permissions
- Understanding of CIDR notation and IP addressing
- Familiarity with GCP resource hierarchy
VPC Fundamentals in GCP
A Virtual Private Cloud (VPC) in GCP is a global resource, unlike AWS where VPCs are regional. A single GCP VPC can span every region simultaneously, with subnets being the regional component. This architecture means inter-region communication within the same VPC uses Google's private backbone by default, without needing peering or transit gateways between regions. This is one of the most significant architectural differences between GCP and other major cloud providers, and it profoundly affects how you design multi-region applications.
GCP offers two VPC modes: auto mode and custom mode. Auto mode creates one subnet per region with predefined CIDR ranges from the 10.128.0.0/9 block. Custom mode gives you full control over subnet creation and IP ranges. For any environment beyond casual experimentation, custom mode is the only appropriate choice.
Every VPC comes with two implied firewall rules, both at the lowest priority (65535): a deny-all ingress rule and an allow-all egress rule. Auto mode VPCs (including the default network created with new projects) are also typically pre-populated with default rules (default-allow-internal, default-allow-ssh, default-allow-rdp, default-allow-icmp); note that the SSH and RDP rules allow traffic from any source (0.0.0.0/0) and should be deleted or tightened. Custom mode VPCs have only the implied deny/allow rules, giving you a clean slate for firewall configuration.
Always Use Custom Mode for Production
Auto mode VPCs use overlapping CIDR ranges that conflict when you try to set up VPC peering. They also create subnets in every region, which increases your attack surface and makes network management unwieldy. Additionally, auto mode subnet ranges are well-known and predictable, which creates collisions when connecting to other organizations. Always start with custom mode and explicitly create subnets only in the regions you need.
Subnet Design and IP Planning
Proper IP address planning prevents painful re-architectures later. Every VPC gets a set of subnets, and each subnet has a primary CIDR range and optional secondary ranges (used for GKE Pods and Services). Plan your ranges with future growth in mind. Expanding a subnet's CIDR range is supported in GCP (unlike some other providers), but it can only be expanded, never shrunk, and the expansion must not overlap with other subnets.
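The expand-only rule can be validated ahead of time. Here is a sketch using Python's standard ipaddress module; valid_expansion is an illustrative helper for planning, not a GCP API:

```python
import ipaddress

def valid_expansion(current: str, proposed: str, others: list[str]) -> bool:
    """Check whether a subnet CIDR expansion would be accepted."""
    cur = ipaddress.ip_network(current)
    new = ipaddress.ip_network(proposed)
    # Expansion only: the proposed range must fully contain the current one.
    if not new.supernet_of(cur):
        return False
    # The expanded range must not overlap any other subnet in the VPC.
    return not any(new.overlaps(ipaddress.ip_network(o)) for o in others)

print(valid_expansion("10.10.0.0/24", "10.10.0.0/20", ["10.10.16.0/20"]))  # True
print(valid_expansion("10.10.0.0/20", "10.10.0.0/24", []))                 # False (shrink)
```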
A good IP planning strategy allocates non-overlapping CIDR blocks for each environment and region, with room for growth. Consider using a hierarchical allocation scheme where you carve out large blocks for each environment and then subdivide them by region and purpose.
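One way to sketch such a hierarchical carve-up with Python's ipaddress module. The block sizes and environment names here are illustrative, not a recommendation:

```python
import ipaddress

# Hypothetical top-level allocation: carve 10.0.0.0/8 into /12 blocks,
# one per environment.
org_block = ipaddress.ip_network("10.0.0.0/8")
environments = ["prod", "staging", "dev"]
env_blocks = dict(zip(environments, org_block.subnets(new_prefix=12)))

# Within an environment, carve one /16 per region for primary subnets.
regions = ["us-central1", "us-east1"]
prod_regional = dict(zip(regions, env_blocks["prod"].subnets(new_prefix=16)))

for region, block in prod_regional.items():
    print(region, block)
# us-central1 10.0.0.0/16
# us-east1 10.1.0.0/16
```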
| Component | Recommended CIDR Size | Usable IPs | Notes |
|---|---|---|---|
| Primary subnet (VMs) | /20 to /24 | 4,094 to 254 | Size based on expected VM count |
| GKE Pod range (secondary) | /14 to /17 | 262K to 32K | Each node uses a /24 for Pods by default |
| GKE Service range (secondary) | /20 | 4,094 | One IP per Kubernetes Service |
| Private Service Connect | /24 | 254 | For managed services like Cloud SQL, Memorystore |
| Serverless VPC connector | /28 | 14 | For Cloud Functions and Cloud Run VPC access |
| Proxy-only subnet (for ILB) | /23 to /26 | 510 to 62 | Required for Envoy-based internal load balancers |
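The GKE Pod-range row deserves a quick sanity check: with the default /24 per node, the size of the Pod secondary range caps the node count. A sketch of that arithmetic:

```python
import ipaddress

def max_gke_nodes(pod_range: str, per_node_prefix: int = 24) -> int:
    # Each GKE node claims one /24 from the Pod secondary range by default,
    # so node capacity equals the number of /24 blocks in the range.
    net = ipaddress.ip_network(pod_range)
    return 2 ** (per_node_prefix - net.prefixlen)

print(max_gke_nodes("10.20.0.0/14"))  # 1024 nodes
print(max_gke_nodes("10.20.0.0/17"))  # 128 nodes
```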
IP Address Planning Example
Here is a concrete IP address plan for a typical production environment spanning two regions. The plan carves non-overlapping ranges out of the 10.0.0.0/8 block, with clear boundaries between environments and regions:
| Environment | Region | Primary Subnet | GKE Pods (Secondary) | GKE Services (Secondary) |
|---|---|---|---|---|
| Production | us-central1 | 10.10.0.0/20 | 10.20.0.0/14 | 10.24.0.0/20 |
| Production | us-east1 | 10.10.16.0/20 | 10.28.0.0/14 | 10.32.0.0/20 |
| Staging | us-central1 | 10.40.0.0/20 | 10.48.0.0/14 | 10.52.0.0/20 |
| Development | us-central1 | 10.60.0.0/20 | 10.68.0.0/14 | 10.72.0.0/20 |
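Before applying a plan like this, it is worth machine-checking it for overlaps. A sketch with Python's ipaddress module, using the ranges from the table above (the dictionary keys are illustrative labels):

```python
import ipaddress

plan = {
    "prod-us-central1-primary": "10.10.0.0/20",
    "prod-us-east1-primary": "10.10.16.0/20",
    "prod-us-central1-pods": "10.20.0.0/14",
    "prod-us-central1-svcs": "10.24.0.0/20",
    "prod-us-east1-pods": "10.28.0.0/14",
    "prod-us-east1-svcs": "10.32.0.0/20",
    "staging-primary": "10.40.0.0/20",
    "staging-pods": "10.48.0.0/14",
    "staging-svcs": "10.52.0.0/20",
    "dev-primary": "10.60.0.0/20",
    "dev-pods": "10.68.0.0/14",
    "dev-svcs": "10.72.0.0/20",
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in plan.items()}
names = list(nets)
# Every pairwise overlap is a planning error.
conflicts = [
    (a, b)
    for i, a in enumerate(names)
    for b in names[i + 1:]
    if nets[a].overlaps(nets[b])
]
print(conflicts)  # [] -- the plan is overlap-free
```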
# Create custom mode VPC
gcloud compute networks create prod-vpc \
--subnet-mode=custom \
--bgp-routing-mode=global
# Primary workload subnet in us-central1
gcloud compute networks subnets create prod-us-central1 \
--network=prod-vpc \
--region=us-central1 \
--range=10.10.0.0/20 \
--secondary-range=gke-pods=10.20.0.0/14,gke-services=10.24.0.0/20 \
--enable-private-ip-google-access \
--enable-flow-logs \
--logging-flow-sampling=0.5
# Primary workload subnet in us-east1
gcloud compute networks subnets create prod-us-east1 \
--network=prod-vpc \
--region=us-east1 \
--range=10.10.16.0/20 \
--secondary-range=gke-pods=10.28.0.0/14,gke-services=10.32.0.0/20 \
--enable-private-ip-google-access \
--enable-flow-logs \
--logging-flow-sampling=0.5
# Database / managed services subnet
gcloud compute networks subnets create prod-db-us-central1 \
--network=prod-vpc \
--region=us-central1 \
--range=10.10.32.0/24 \
--purpose=PRIVATE \
--enable-private-ip-google-access
# Proxy-only subnet for internal Application Load Balancer
gcloud compute networks subnets create proxy-only-us-central1 \
--network=prod-vpc \
--region=us-central1 \
--range=10.10.33.0/24 \
--purpose=REGIONAL_MANAGED_PROXY \
--role=ACTIVE
Enable Private Google Access on Every Subnet
Enable Private Google Access (the --enable-private-ip-google-access flag) on every subnet. It lets VMs without external IPs reach Google APIs and services (Cloud Storage, BigQuery, Artifact Registry) over Google's internal network rather than the public internet. There is no additional cost, and it eliminates the need for external IPs or NAT just to access GCP services.
Network Architecture Patterns
GCP supports several well-established patterns for multi-VPC architectures. Choosing the right one depends on your organizational structure, compliance requirements, and traffic flow needs. The pattern you choose at the beginning is difficult to change later, so invest time in understanding the tradeoffs.
Pattern 1: Shared VPC (Hub-Spoke)
Shared VPC is the most common pattern for enterprise GCP deployments. A host project owns the VPC, and service projects attach their resources to subnets in the host project. This centralizes network management while allowing teams to manage their own compute resources. Google recommends Shared VPC as the default pattern for most organizations.
Key advantages of Shared VPC:
- The network team manages firewalls, subnets, and routes centrally.
- Application teams deploy VMs, GKE clusters, and Cloud Run services into designated subnets without needing network-level permissions.
- All resources share the same VPC, so there are no peering limits or transitive routing issues.
- Firewall rules apply uniformly across all service projects, ensuring consistent security posture.
- IP address management is centralized, preventing CIDR conflicts between teams.
# Enable Shared VPC on the host project
gcloud compute shared-vpc enable host-project-id
# Attach a service project
gcloud compute shared-vpc associated-projects add service-project-id \
--host-project=host-project-id
# Grant a service project's service account access to a specific subnet
gcloud projects add-iam-policy-binding host-project-id \
--member="serviceAccount:service-12345@compute-system.iam.gserviceaccount.com" \
--role="roles/compute.networkUser" \
--condition='expression=resource.name.endsWith("prod-us-central1"),title=subnet-restriction'
# Grant GKE service account access for cluster creation in service project
gcloud projects add-iam-policy-binding host-project-id \
--member="serviceAccount:service-12345@container-engine-robot.iam.gserviceaccount.com" \
--role="roles/container.hostServiceAgentUser"
# List all service projects attached to a host project
gcloud compute shared-vpc associated-projects list host-project-id
Shared VPC Limits
A single host project can have up to 1,000 service projects attached. Each service project can only be attached to one host project at a time. Plan your Shared VPC hierarchy carefully. For very large organizations, you may need multiple Shared VPCs (e.g., one per business unit or region), connected via VPC Peering or Network Connectivity Center.
Pattern 2: VPC Peering
VPC Peering connects two VPCs so resources can communicate using private IPs. It works across projects and even across organizations. However, VPC Peering is non-transitive: if VPC-A peers with VPC-B and VPC-B peers with VPC-C, VPC-A cannot reach VPC-C through VPC-B. Routes are exchanged between peered VPCs, but custom routes are not transitive either unless explicitly exported and imported.
Each VPC network supports a limited number of peering connections (25 by default; the quota can be raised). If you need full mesh connectivity across many VPCs, consider Network Connectivity Center (NCC) instead. VPC Peering also exchanges subnet routes automatically, which means overlapping CIDR ranges between peered VPCs cause a conflict and the peering setup will fail.
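The non-transitivity rule is easy to internalize with a toy reachability check (the VPC names are hypothetical):

```python
# Peering reachability is direct-only: A<->B plus B<->C does NOT give A<->C.
peerings = {("vpc-a", "vpc-b"), ("vpc-b", "vpc-c")}

def can_reach(src: str, dst: str) -> bool:
    # Only a direct peering (in either direction) allows private-IP traffic;
    # there is no hop through an intermediate VPC.
    return (src, dst) in peerings or (dst, src) in peerings

print(can_reach("vpc-a", "vpc-b"))  # True
print(can_reach("vpc-a", "vpc-c"))  # False: no transitive routing
```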
# Create peering from VPC-A to VPC-B
gcloud compute networks peerings create vpc-a-to-vpc-b \
--network=vpc-a \
--peer-network=vpc-b \
--peer-project=project-b \
--export-custom-routes \
--import-custom-routes
# Create the reciprocal peering from VPC-B to VPC-A
gcloud compute networks peerings create vpc-b-to-vpc-a \
--network=vpc-b \
--peer-network=vpc-a \
--peer-project=project-a \
--export-custom-routes \
--import-custom-routes \
--project=project-b
# Verify peering status
gcloud compute networks peerings list --network=vpc-a
| Aspect | Shared VPC | VPC Peering | Network Connectivity Center |
|---|---|---|---|
| Network isolation | Same VPC, shared | Separate VPCs, connected | Separate VPCs, hub-managed |
| Transitive routing | Yes (same VPC) | No | Yes |
| Max connections | 1,000 service projects | 25 peers per VPC (default quota) | Quota-based (hundreds of spokes) |
| Central management | Yes (host project) | No (each VPC manages own) | Yes (hub) |
| Cross-org support | No (same org only) | Yes | Yes |
| Best for | Enterprise single-org | Cross-org, simple topologies | Complex multi-cloud/hybrid |
Pattern 3: Network Connectivity Center (NCC)
NCC acts as a central hub for managing connectivity across VPCs, on-premises networks, and other clouds. It supports transitive routing, making it the right choice when you have many VPCs that all need to communicate or when you need a single point of control for hybrid connectivity. NCC uses the concept of a hub with spokes, where each spoke can be a VPC, a VPN tunnel, an Interconnect attachment, or a Router Appliance instance.
# Create an NCC hub
gcloud network-connectivity hubs create my-hub \
--description="Central connectivity hub"
# Add a VPC spoke
gcloud network-connectivity spokes linked-vpc-network create vpc-spoke-1 \
--hub=my-hub \
--vpc-network=projects/project-a/global/networks/vpc-a \
--location=global
# Add another VPC spoke (these VPCs can now communicate transitively)
gcloud network-connectivity spokes linked-vpc-network create vpc-spoke-2 \
--hub=my-hub \
--vpc-network=projects/project-b/global/networks/vpc-b \
--location=global
# Add an Interconnect spoke for hybrid connectivity
gcloud network-connectivity spokes linked-interconnect-attachments create hybrid-spoke \
--hub=my-hub \
--interconnect-attachments=projects/project-a/regions/us-central1/interconnectAttachments/my-attachment \
--location=us-central1
Firewall Rules and Policies
GCP firewall rules are stateful. They are defined at the VPC network level but enforced at each VM instance, targeting VMs by network tag, service account, or IP range. Functionally they are closer to AWS security groups (which attach to ENIs) than to subnet-level network ACLs. There are three layers of firewall controls, evaluated in order:
- Hierarchical Firewall Policies: defined at the organization or folder level. These take precedence and can enforce baseline rules across all projects. Hierarchical policies are the best way to implement organization-wide security requirements.
- Network Firewall Policies: attached to a VPC network. Good for rules that apply across the entire VPC. These are newer than VPC firewall rules and support features like FQDN-based rules and geo-location filtering.
- VPC Firewall Rules: the classic per-network rules. Applied per-VPC and evaluated by priority number (0-65535, lower number = higher priority).
Firewall Rule Evaluation Order
When a packet arrives at a VM, GCP evaluates firewall rules in this order: hierarchical policies (org level, then folder level), then network firewall policies, then VPC firewall rules. Within each layer, rules are evaluated by priority (lowest number first). The first matching rule determines the action (allow, deny, or delegate to next layer via “goto_next”).
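A toy model of this first-match evaluation may help. It is illustrative only: real rules match on far more than a port, and the priorities below echo the examples in this guide:

```python
def evaluate(layers, packet):
    # Layers arrive in evaluation order: hierarchical (org, then folder),
    # network firewall policies, then classic VPC rules.
    for rules in layers:
        # Within a layer, rules are checked by ascending priority number.
        for priority, match, action in sorted(rules, key=lambda r: r[0]):
            if match(packet):
                if action in ("allow", "deny"):
                    return action  # first terminal match wins
                break  # goto_next: delegate to the next layer
    return "deny"  # implied deny for ingress if nothing matched

hierarchical = [
    (100, lambda p: p["port"] == 22, "allow"),   # e.g. IAP SSH allow
    (1000, lambda p: True, "goto_next"),         # delegate everything else
]
vpc_rules = [
    (900, lambda p: p["port"] == 443, "allow"),  # e.g. HTTPS from LB ranges
]

print(evaluate([hierarchical, vpc_rules], {"port": 22}))    # allow
print(evaluate([hierarchical, vpc_rules], {"port": 443}))   # allow
print(evaluate([hierarchical, vpc_rules], {"port": 3389}))  # deny
```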
# Create a hierarchical firewall policy at the organization level
gcloud compute firewall-policies create \
--organization=123456789012 \
--short-name="org-baseline" \
--description="Organization baseline firewall rules"
# Allow IAP for SSH access (org-wide)
gcloud compute firewall-policies rules create 100 \
--firewall-policy="org-baseline" \
--organization=123456789012 \
--action=allow \
--direction=INGRESS \
--src-ip-ranges=35.235.240.0/20 \
--layer4-configs=tcp:22 \
--description="Allow IAP SSH access"
# Deny all ingress from known bad IP ranges
gcloud compute firewall-policies rules create 200 \
--firewall-policy="org-baseline" \
--organization=123456789012 \
--action=deny \
--direction=INGRESS \
--src-ip-ranges=192.0.2.0/24,198.51.100.0/24 \
--layer4-configs=all \
--description="Block known malicious ranges"
# Associate the policy with the organization
gcloud compute firewall-policies associations create \
--firewall-policy="org-baseline" \
--organization=123456789012
# Allow internal traffic within the VPC
gcloud compute firewall-rules create allow-internal \
--network=prod-vpc \
--action=ALLOW \
--rules=tcp,udp,icmp \
--source-ranges=10.10.0.0/16 \
--priority=1000
# Allow HTTPS ingress only to tagged instances
gcloud compute firewall-rules create allow-https-lb \
--network=prod-vpc \
--action=ALLOW \
--rules=tcp:443 \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=https-server \
--priority=900
# Allow health check traffic from Google's health check ranges
gcloud compute firewall-rules create allow-health-checks \
--network=prod-vpc \
--action=ALLOW \
--rules=tcp \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=health-check \
--priority=900
# Default-deny egress (pair with higher-priority allow rules, e.g., to the
# Private Google Access ranges, before enforcing this)
gcloud compute firewall-rules create deny-all-egress \
--network=prod-vpc \
--action=DENY \
--rules=all \
--direction=EGRESS \
--destination-ranges=0.0.0.0/0 \
--priority=65534
Prefer Service Account Targeting Over Tags
Network tags can be modified by any user with the compute.instances.setTags permission, so a tag-targeted firewall rule can be sidestepped by simply re-tagging an instance. For production workloads, target firewall rules using service accounts instead: changing a VM's service account requires IAM permissions, providing a stronger security boundary. Use --target-service-accounts instead of --target-tags for all production firewall rules.
Network Firewall Policies with Advanced Features
Network firewall policies support features not available in classic VPC firewall rules, including FQDN-based rules, geo-location filtering, and threat intelligence integration. These features make it possible to create more sophisticated security policies.
# Create a network firewall policy
gcloud compute network-firewall-policies create prod-fw-policy \
--global \
--description="Production network firewall policy"
# Allow egress only to specific FQDNs
gcloud compute network-firewall-policies rules create 100 \
--firewall-policy=prod-fw-policy \
--global-firewall-policy \
--action=allow \
--direction=EGRESS \
--dest-fqdns="*.googleapis.com,*.gcr.io,*.pkg.dev" \
--layer4-configs=tcp:443
# Block ingress from specific countries
gcloud compute network-firewall-policies rules create 200 \
--firewall-policy=prod-fw-policy \
--global-firewall-policy \
--action=deny \
--direction=INGRESS \
--src-region-codes=CN,RU,KP \
--layer4-configs=all
# Associate the policy with the VPC
gcloud compute network-firewall-policies associations create \
--firewall-policy=prod-fw-policy \
--network=prod-vpc \
--global-firewall-policy
Cloud NAT and Private Connectivity
In a well-architected GCP network, VMs should not have external IP addresses. Instead, outbound internet access is provided by Cloud NAT, a managed, software-defined NAT gateway that operates per-region. Cloud NAT is not an appliance. It is implemented in the network fabric itself, which means it does not create a single point of failure or bottleneck.
Cloud NAT provides high-availability egress without exposing instances to inbound traffic from the internet. Key sizing considerations include:
- Each NAT IP provides 64,512 usable source ports (65,536 minus the first 1,024). For high-throughput workloads, enable dynamic port allocation or provision additional NAT IPs.
- NAT logs can be enabled to track which instances are making outbound connections, useful for security auditing and identifying shadow IT.
- For accessing Google APIs without going through NAT, use Private Google Access (for standard APIs) or Private Service Connect (for a dedicated endpoint in your VPC).
- Cloud NAT supports endpoint-independent mapping, which is important for protocols like STUN and TURN used in real-time communications.
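As a rough sizing sketch based on the port numbers above (with static allocation; dynamic port allocation relaxes the up-front reservation, and 64 is Cloud NAT's default minimum ports per VM):

```python
# Usable source ports per NAT IP: 65,536 minus the reserved first 1,024.
PORTS_PER_NAT_IP = 64512

def max_vms(nat_ips: int, min_ports_per_vm: int) -> int:
    # With static allocation, each VM reserves min_ports_per_vm ports up
    # front, so capacity is total ports divided by the per-VM reservation.
    return (nat_ips * PORTS_PER_NAT_IP) // min_ports_per_vm

print(max_vms(nat_ips=1, min_ports_per_vm=64))    # 1008 VMs at the default
print(max_vms(nat_ips=2, min_ports_per_vm=2048))  # 63 VMs at 2048 ports each
```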
# Create a Cloud Router (required for Cloud NAT)
gcloud compute routers create prod-router \
--network=prod-vpc \
--region=us-central1
# Create Cloud NAT with automatic IP allocation
gcloud compute routers nats create prod-nat \
--router=prod-router \
--region=us-central1 \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--enable-logging \
--min-ports-per-vm=2048 \
--enable-dynamic-port-allocation \
--max-ports-per-vm=65536 \
--log-filter=ERRORS_ONLY
# For static outbound IPs (required for allowlisting at third parties)
gcloud compute addresses create nat-ip-1 nat-ip-2 \
--region=us-central1
gcloud compute routers nats update prod-nat \
--router=prod-router \
--region=us-central1 \
--nat-external-ip-pool=nat-ip-1,nat-ip-2
NAT Port Exhaustion
Cloud NAT port exhaustion is one of the most common networking issues in GCP. Symptoms include intermittent connection timeouts from VMs or GKE pods. Monitor the nat_allocation_failed metric in Cloud Monitoring and set up alerts. If you see port exhaustion, enable dynamic port allocation, increase --min-ports-per-vm, or add more NAT IPs. For GKE clusters, the high pod density can quickly exhaust NAT ports.
Private Google Access and Private Service Connect
GCP provides multiple mechanisms for accessing Google-managed services privately, without traversing the public internet. Understanding the differences between these options is essential for designing secure, compliant networks.
Private Google Access (PGA)
PGA allows VMs without external IPs to access Google APIs and services. It is enabled per-subnet and requires no additional infrastructure. PGA routes traffic to Google APIs through Google's internal network. There are three variants:
- Private Google Access: Reaches most Google APIs through the default domains (e.g., storage.googleapis.com). Enabled per-subnet.
- Private Google Access for on-premises hosts: Extends PGA to on-premises networks connected via VPN or Interconnect. Requires Cloud DNS configuration to route API traffic over the private connection.
- Private Service Connect (PSC): Creates dedicated private endpoints in your VPC with IP addresses from your own CIDR range. Provides the most control and offers a vpc-sc API bundle for VPC Service Controls environments.
# Enable Private Google Access on a subnet (simplest option)
gcloud compute networks subnets update prod-us-central1 \
--region=us-central1 \
--enable-private-ip-google-access
# Create a Private Service Connect endpoint for Google APIs
gcloud compute addresses create google-apis-psc \
--global \
--purpose=PRIVATE_SERVICE_CONNECT \
--addresses=10.10.34.1 \
--network=prod-vpc
gcloud compute forwarding-rules create psc-google-apis \
--global \
--network=prod-vpc \
--address=google-apis-psc \
--target-google-apis-bundle=all-apis
# Create DNS entries to route Google API traffic through PSC
gcloud dns managed-zones create google-apis-zone \
--dns-name="googleapis.com." \
--visibility=private \
--networks=prod-vpc \
--description="Route Google APIs through PSC"
gcloud dns record-sets create "*.googleapis.com." \
--zone=google-apis-zone \
--type=CNAME \
--ttl=300 \
--rrdatas="googleapis.com."
gcloud dns record-sets create "googleapis.com." \
--zone=google-apis-zone \
--type=A \
--ttl=300 \
--rrdatas="10.10.34.1"
VPC Flow Logs and Network Monitoring
VPC Flow Logs capture a sample of network flows sent from and received by VM instances, including instances used as GKE nodes. These logs are essential for network troubleshooting, security forensics, and cost optimization (identifying unexpected egress patterns).
Flow logs are configured per-subnet and include metadata about each flow: source and destination IPs, ports, protocol, bytes transferred, and the flow direction. You control the sampling rate (from 0 to 1.0) and the aggregation interval (5-second to 15-minute buckets).
# Enable flow logs on an existing subnet with recommended settings
gcloud compute networks subnets update prod-us-central1 \
--region=us-central1 \
--enable-flow-logs \
--logging-aggregation-interval=interval-5-sec \
--logging-flow-sampling=0.5 \
--logging-metadata=include-all-metadata \
--logging-filter-expr='!(inIpRange(connection.dest_ip, "10.0.0.0/8") && inIpRange(connection.src_ip, "10.0.0.0/8"))'
# Create a log sink to BigQuery for analysis
gcloud logging sinks create flow-logs-sink \
bigquery.googleapis.com/projects/my-project/datasets/network_logs \
--log-filter='resource.type="gce_subnetwork" AND logName:"vpc_flows"'
-- Find top external destinations by bytes transferred
SELECT
jsonPayload.connection.dest_ip AS destination_ip,
jsonPayload.dest_location.country AS country,
SUM(CAST(jsonPayload.bytes_sent AS INT64)) AS total_bytes,
COUNT(*) AS flow_count
FROM `my-project.network_logs.compute_googleapis_com_vpc_flows`
WHERE
timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 24 HOUR)
AND NOT NET.IP_IN_NET(
NET.IP_FROM_STRING(jsonPayload.connection.dest_ip),
NET.IP_FROM_STRING('10.0.0.0'),
8
)
GROUP BY destination_ip, country
ORDER BY total_bytes DESC
LIMIT 50
Flow Log Sampling Trade-offs
A sampling rate of 0.5 (50%) captures enough data for most monitoring and troubleshooting needs while keeping log volume manageable. For security-sensitive subnets handling PII or financial data, consider 1.0 (100%) sampling. For high-traffic subnets, consider 0.1 (10%) to control costs. Flow log costs scale linearly with the sampling rate and the volume of flows.
Hybrid Connectivity Patterns
Most enterprise networks require connectivity between GCP and on-premises data centers, other cloud providers, or colocation facilities. GCP provides three primary connectivity options, each with different latency, bandwidth, and cost characteristics.
| Option | Bandwidth | Latency | SLA | Setup Time | Cost |
|---|---|---|---|---|---|
| HA VPN | Up to 3 Gbps per tunnel | Depends on internet path | 99.99% (4-tunnel config) | Hours | Low (tunnel + egress) |
| Partner Interconnect | 50 Mbps to 50 Gbps | Low (provider backbone) | 99.9% - 99.99% | Days to weeks | Medium |
| Dedicated Interconnect | 10 or 100 Gbps per link | Lowest (direct fiber) | 99.99% (redundant config) | Weeks to months | High (port + egress) |
# Create HA VPN gateway
gcloud compute vpn-gateways create my-ha-vpn \
--network=prod-vpc \
--region=us-central1
# Create external VPN gateway (on-prem side)
gcloud compute external-vpn-gateways create on-prem-gw \
--interfaces=0=203.0.113.1,1=203.0.113.2
# Create Cloud Router with BGP ASN
gcloud compute routers create vpn-router \
--network=prod-vpc \
--region=us-central1 \
--asn=65001
# Create VPN tunnels (4 tunnels for full HA)
for i in 0 1; do
for j in 0 1; do
gcloud compute vpn-tunnels create tunnel-${i}-${j} \
--vpn-gateway=my-ha-vpn \
--vpn-gateway-interface=${i} \
--peer-external-gateway=on-prem-gw \
--peer-external-gateway-interface=${j} \
--shared-secret=MY_SHARED_SECRET \
--router=vpn-router \
--region=us-central1 \
--ike-version=2
done
done
# Add BGP peers for each tunnel (assumes the matching Cloud Router interfaces
# were already created with gcloud compute routers add-interface)
gcloud compute routers add-bgp-peer vpn-router \
--peer-name=on-prem-peer-0 \
--interface=tunnel-0-0-interface \
--peer-ip-address=169.254.0.2 \
--peer-asn=65002 \
--region=us-central1
Terraform VPC Module
Managing VPC configurations manually through gcloud or the console is error-prone and unauditable. For production environments, define your entire network infrastructure in Terraform. Here is a reusable VPC module that incorporates the best practices discussed in this guide.
variable "network_name" {
type = string
}
variable "project_id" {
type = string
}
variable "subnets" {
type = map(object({
cidr = string
region = string
secondary_ranges = optional(map(string), {})
flow_logs = optional(bool, true)
}))
}
resource "google_compute_network" "vpc" {
name = var.network_name
auto_create_subnetworks = false
routing_mode = "GLOBAL"
project = var.project_id
}
resource "google_compute_subnetwork" "subnets" {
for_each = var.subnets
name = each.key
ip_cidr_range = each.value.cidr
region = each.value.region
network = google_compute_network.vpc.id
private_ip_google_access = true
dynamic "secondary_ip_range" {
for_each = each.value.secondary_ranges
content {
range_name = secondary_ip_range.key
ip_cidr_range = secondary_ip_range.value
}
}
dynamic "log_config" {
for_each = each.value.flow_logs ? [1] : []
content {
aggregation_interval = "INTERVAL_5_SEC"
flow_sampling = 0.5
metadata = "INCLUDE_ALL_METADATA"
}
}
}
resource "google_compute_router" "router" {
for_each = toset(distinct([for s in var.subnets : s.region]))
name = "${var.network_name}-router-${each.key}"
network = google_compute_network.vpc.id
region = each.key
}
resource "google_compute_router_nat" "nat" {
for_each = google_compute_router.router
name = "${each.value.name}-nat"
router = each.value.name
region = each.value.region
nat_ip_allocate_option = "AUTO_ONLY"
source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
enable_dynamic_port_allocation = true
min_ports_per_vm = 2048
max_ports_per_vm = 65536
log_config {
enable = true
filter = "ERRORS_ONLY"
}
}
Network Design Checklist
Before going to production with your VPC design, verify each item on this checklist. Missing any of these can lead to security incidents, connectivity issues, or costly re-architectures later.
| Category | Check | Priority |
|---|---|---|
| VPC Mode | Using custom-mode VPC (not auto mode) | Critical |
| IP Planning | CIDR ranges planned for growth with no overlaps | Critical |
| Private Access | Private Google Access enabled on all subnets | High |
| Flow Logs | VPC Flow Logs enabled for visibility | High |
| NAT | Cloud NAT deployed with dynamic port allocation | High |
| Firewalls | Hierarchical firewall policies for org-wide rules | High |
| DNS | Cloud DNS configured for internal resolution | Medium |
| No External IPs | Org policy blocking external IPs on VMs | High |
| IaC | All network config managed via Terraform | High |
Network Design Is Hard to Change
VPC architecture decisions are among the hardest to change after deployment. CIDR ranges cannot be shrunk, Shared VPC host projects cannot be easily swapped, and peering topologies affect routing throughout your organization. Invest time upfront in planning, document your decisions, and get sign-off from security and operations teams before deploying production networks. A well-designed network will serve you for years; a poorly designed one will create technical debt from day one.
Key Takeaways
1. GCP VPC networks are global; subnets are regional, but the VPC spans all regions automatically.
2. Shared VPC centralizes network management while letting teams manage their own projects.
3. VPC Network Peering connects networks without gateways but is non-transitive.
4. Private Google Access enables resources without external IPs to reach Google APIs.
5. Firewall rules are global and use tags or service accounts for targeting.
6. Use Cloud NAT for outbound internet access without assigning external IPs to VMs.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.