GCP Shared VPC Design
Guide to GCP Shared VPC covering host/service project architecture, subnet delegation, IAM configuration, firewall policies, Cloud NAT, GKE integration, and troubleshooting.
Prerequisites
- Understanding of GCP VPC networking concepts
- Experience with GCP IAM and service accounts
- Familiarity with GCP project and organization structure
What Is GCP Shared VPC?
Shared VPC is a GCP networking feature that allows an organization to connect resources from multiple projects to a common VPC network. A centralized network team manages the VPC, subnets, firewall rules, and routes in a host project, while application teams deploy their resources (VMs, GKE clusters, Cloud SQL instances) in service projects using subnets from the shared VPC.
This separation of concerns is fundamental to enterprise GCP architecture. Without Shared VPC, each project creates its own VPC, leading to IP address conflicts, redundant firewall rules, no centralized network visibility, and difficulty establishing consistent connectivity to on-premises networks. With Shared VPC, the network team maintains a single, well-managed network fabric that all projects use.
This guide covers Shared VPC design from the ground up: host and service project configuration, subnet delegation strategies, IAM permissions, firewall policies, Private Google Access, Cloud NAT integration, GKE on Shared VPC, and common troubleshooting patterns.
Shared VPC vs. VPC Peering
Shared VPC and VPC Peering serve different purposes. Shared VPC shares a single VPC across multiple projects within the same organization. VPC Peering connects two separate VPCs (which can be in different organizations). Use Shared VPC for your primary network architecture and VPC Peering for connecting to partner networks or third-party services. Neither feature carries a setup charge, but traffic crossing zones or regions is billed at standard egress rates in both cases.
Architecture and Project Structure
A Shared VPC deployment consists of one host project and one or more service projects, all within the same GCP organization. The host project owns the VPC network, subnets, firewall rules, Cloud NAT, Cloud VPN/Interconnect, and Cloud Router. Service projects contain the compute resources (VMs, GKE, Cloud Run with VPC connector, Cloud SQL) that use subnets from the shared VPC.
Recommended Project Layout
| Project | Type | Contains |
|---|---|---|
| network-host-prod | Host Project | VPC, subnets, firewall rules, Cloud NAT, VPN/Interconnect |
| app-frontend-prod | Service Project | GKE cluster, load balancers |
| app-backend-prod | Service Project | Compute Engine VMs, Cloud SQL |
| data-platform-prod | Service Project | Dataflow, Dataproc, BigQuery |
| shared-services-prod | Service Project | CI/CD runners, monitoring, logging |
# Step 1: Enable Shared VPC on the host project
# Requires Organization Admin or Shared VPC Admin role
gcloud compute shared-vpc enable network-host-prod
# Step 2: Associate service projects with the host project
gcloud compute shared-vpc associated-projects add app-frontend-prod \
--host-project network-host-prod
gcloud compute shared-vpc associated-projects add app-backend-prod \
--host-project network-host-prod
gcloud compute shared-vpc associated-projects add data-platform-prod \
--host-project network-host-prod
# Verify the configuration: list all associated service projects
gcloud compute shared-vpc list-associated-resources network-host-prod
# Look up the host project for a given service project
gcloud compute shared-vpc get-host-project app-frontend-prod
Subnet Design and Delegation
Subnet design is the most critical decision in a Shared VPC deployment. Each subnet exists in a specific region and has a primary IP range for VM NICs, plus optional secondary ranges for GKE Pods and Services. Subnets are the unit of delegation: you grant service project users access to specific subnets, not the entire VPC.
Plan your IP address space carefully because changing subnet ranges later requires downtime. Use a hierarchical IP addressing scheme that accommodates growth. A common pattern is to allocate a /16 per region, divided into /24 subnets for different teams and environments.
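To make the arithmetic concrete, here is a minimal pure-shell sketch of that carving scheme (no gcloud required). The 10.1.0.0/16 base and the team names are illustrative assumptions, not part of any real deployment:

```shell
#!/usr/bin/env bash
# Illustrative IP plan: carve one region's /16 into /24 team subnets.
# The 10.1.0.0/16 base and team names are assumptions for this example.

region_base="10.1"                  # e.g. 10.1.0.0/16 reserved for us-central1
teams=(frontend backend data shared)

# A /16 contains 2^(24-16) = 256 possible /24 child subnets.
count=$(( 1 << (24 - 16) ))
echo "Capacity: ${count} x /24 subnets per /16"

octet=0
for team in "${teams[@]}"; do
  echo "snet-${team}: ${region_base}.${octet}.0/24"   # 10.1.0.0/24, 10.1.1.0/24, ...
  octet=$(( octet + 1 ))
done
```

Note that GCP reserves four addresses in every primary subnet range (network, gateway, second-to-last, and broadcast), so a /24 yields 252 usable host IPs, not 254.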
# Create the shared VPC network (custom mode, not auto mode)
gcloud compute networks create shared-vpc-network \
--project network-host-prod \
--subnet-mode custom \
--bgp-routing-mode global
# Create subnets for different teams/workloads
# Frontend team subnet (us-central1)
gcloud compute networks subnets create snet-frontend-usc1 \
--project network-host-prod \
--network shared-vpc-network \
--region us-central1 \
--range 10.1.0.0/24 \
--secondary-range pods=10.100.0.0/16,services=10.200.0.0/20 \
--enable-private-ip-google-access \
--enable-flow-logs \
--logging-aggregation-interval interval-5-sec \
--logging-flow-sampling 0.5 \
--logging-metadata include-all
# Backend team subnet (us-central1)
gcloud compute networks subnets create snet-backend-usc1 \
--project network-host-prod \
--network shared-vpc-network \
--region us-central1 \
--range 10.2.0.0/24 \
--enable-private-ip-google-access \
--enable-flow-logs
# Data platform subnet (us-central1)
gcloud compute networks subnets create snet-data-usc1 \
--project network-host-prod \
--network shared-vpc-network \
--region us-central1 \
--range 10.3.0.0/24 \
--enable-private-ip-google-access
# Shared services subnet (us-central1)
gcloud compute networks subnets create snet-shared-usc1 \
--project network-host-prod \
--network shared-vpc-network \
--region us-central1 \
--range 10.10.0.0/24 \
--enable-private-ip-google-access
# US East region subnets (for multi-region)
gcloud compute networks subnets create snet-frontend-use4 \
--project network-host-prod \
--network shared-vpc-network \
--region us-east4 \
--range 10.11.0.0/24 \
--secondary-range pods=10.101.0.0/16,services=10.201.0.0/20 \
--enable-private-ip-google-access
Size GKE Secondary Ranges Carefully
GKE requires large secondary IP ranges. With the default maximum of 110 Pods per node, each node is assigned a /24 from the Pod range (256 IPs), so a 100-node cluster consumes more than 25,000 Pod IPs and a /17 Pod range tops out at 128 such nodes. The Services range needs enough IPs for all Kubernetes Services in the cluster. Use --max-pods-per-node=32 to shrink the per-node allocation to a /26 (64 IPs), which lets a /16 Pod range support 1,024 nodes. Plan these ranges before creating the subnet, because secondary ranges in use by a cluster cannot be changed later.
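The sizing rule can be expressed as shell arithmetic. This sketch assumes GKE's documented behavior of sizing each node's Pod CIDR to the smallest power of two that holds twice max-pods-per-node; treat it as a planning aid, not an official calculator:

```shell
#!/usr/bin/env bash
# Planning sketch: how many nodes a Pod secondary range supports.
# GKE sizes each node's Pod CIDR to the smallest power of two holding
# 2 x max-pods-per-node addresses (110 pods -> /24, 32 pods -> /26).

pod_range_prefix=16      # e.g. pods=10.100.0.0/16
max_pods_per_node=32

# Round 2 * max_pods up to the next power of two.
need=$(( 2 * max_pods_per_node ))
per_node=1
while [ "$per_node" -lt "$need" ]; do per_node=$(( per_node * 2 )); done

total_ips=$(( 1 << (32 - pod_range_prefix) ))
nodes=$(( total_ips / per_node ))
echo "/${pod_range_prefix} Pod range at max-pods=${max_pods_per_node}: up to ${nodes} nodes"
# -> /16 Pod range at max-pods=32: up to 1024 nodes
```

Rerunning with the defaults (max-pods 110, per-node /24) shows why the document recommends lowering max-pods-per-node: the same /16 then supports only 256 nodes.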
IAM Configuration
IAM is the access control mechanism for Shared VPC. Service project users need the compute.networkUser role on specific subnets (not the entire host project) to deploy resources into those subnets. This subnet-level delegation is the key security feature: teams can only deploy into their designated subnets.
# Grant subnet-level access to the frontend team
# This allows the service project's default service account and team members
# to create resources in the frontend subnet
# For the team's group
gcloud compute networks subnets add-iam-policy-binding snet-frontend-usc1 \
--project network-host-prod \
--region us-central1 \
--member "group:frontend-team@company.com" \
--role "roles/compute.networkUser"
# For the service project's default compute service account
gcloud compute networks subnets add-iam-policy-binding snet-frontend-usc1 \
--project network-host-prod \
--region us-central1 \
--member "serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
--role "roles/compute.networkUser"
# For GKE: the GKE service account needs additional permissions
# in the host project
gcloud projects add-iam-policy-binding network-host-prod \
--member "serviceAccount:service-FRONTEND_PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" \
--role "roles/container.hostServiceAgentUser"
# Grant network user role at subnet level for GKE service account
gcloud compute networks subnets add-iam-policy-binding snet-frontend-usc1 \
--project network-host-prod \
--region us-central1 \
--member "serviceAccount:service-FRONTEND_PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" \
--role "roles/compute.networkUser"
# For Cloud SQL: the service networking service account needs permissions
gcloud projects add-iam-policy-binding network-host-prod \
--member "serviceAccount:service-BACKEND_PROJECT_NUMBER@service-networking.iam.gserviceaccount.com" \
--role "roles/servicenetworking.serviceAgent"
# Verify IAM bindings on a subnet
gcloud compute networks subnets get-iam-policy snet-frontend-usc1 \
--project network-host-prod \
--region us-central1
IAM Roles Summary
| Role | Assigned To | Scope | Purpose |
|---|---|---|---|
| Shared VPC Admin | Network team | Organization or folder | Enable/disable Shared VPC, manage associations |
| Network Admin | Network team | Host project | Manage VPC, subnets, firewall rules, routes |
| Compute Network User | Service project users/SAs | Specific subnets | Deploy resources into shared subnets |
| Host Service Agent User | GKE service agent | Host project | GKE manages network resources in host project |
| Security Admin | Security team | Host project | Manage firewall rules and security policies |
Firewall Policies
Firewall rules in a Shared VPC are managed centrally in the host project. GCP supports three types of firewall controls: hierarchical firewall policies (at organization or folder level), VPC firewall rules (at network level), and network firewall policies (at network level, newer and recommended). Use hierarchical policies for organization-wide baselines and VPC-level rules for network-specific controls.
# Create a hierarchical firewall policy at the organization level
gcloud compute firewall-policies create \
--organization ORG_ID \
--short-name "org-baseline" \
--description "Organization-wide baseline firewall policy"
# Add rules to the policy
# Allow internal traffic between all shared VPC subnets
gcloud compute firewall-policies rules create 1000 \
--firewall-policy "org-baseline" \
--organization ORG_ID \
--action allow \
--direction INGRESS \
--src-ip-ranges "10.0.0.0/8" \
--dest-ip-ranges "10.0.0.0/8" \
--layer4-configs all \
--description "Allow internal RFC1918 traffic"
# Allow IAP for SSH/RDP (Identity-Aware Proxy)
gcloud compute firewall-policies rules create 2000 \
--firewall-policy "org-baseline" \
--organization ORG_ID \
--action allow \
--direction INGRESS \
--src-ip-ranges "35.235.240.0/20" \
--layer4-configs tcp:22,tcp:3389 \
--description "Allow IAP SSH and RDP"
# Allow Google health check ranges (for load balancers)
gcloud compute firewall-policies rules create 3000 \
--firewall-policy "org-baseline" \
--organization ORG_ID \
--action allow \
--direction INGRESS \
--src-ip-ranges "35.191.0.0/16,130.211.0.0/22" \
--layer4-configs tcp:80,tcp:443,tcp:8080 \
--description "Allow GCP health checks"
# Deny all other ingress (catch-all)
gcloud compute firewall-policies rules create 65534 \
--firewall-policy "org-baseline" \
--organization ORG_ID \
--action deny \
--direction INGRESS \
--src-ip-ranges "0.0.0.0/0" \
--layer4-configs all \
--description "Default deny all ingress"
# Associate the policy with the organization
gcloud compute firewall-policies associations create \
--firewall-policy "org-baseline" \
--organization ORG_ID
# VPC-level firewall rules for specific use cases
gcloud compute firewall-rules create allow-frontend-http \
--project network-host-prod \
--network shared-vpc-network \
--direction INGRESS \
--action allow \
--rules tcp:80,tcp:443 \
--source-ranges 0.0.0.0/0 \
--target-tags frontend-web \
--description "Allow HTTP/HTTPS to frontend VMs"
# Use network tags or service accounts for targeting
gcloud compute firewall-rules create allow-backend-from-frontend \
--project network-host-prod \
--network shared-vpc-network \
--direction INGRESS \
--action allow \
--rules tcp:8080,tcp:8443 \
--source-service-accounts "frontend-sa@app-frontend-prod.iam.gserviceaccount.com" \
--target-service-accounts "backend-sa@app-backend-prod.iam.gserviceaccount.com" \
--description "Allow backend access from frontend service account"
Use Service Account-Based Firewall Rules
Prefer service account-based firewall rules over network tag-based rules. Network tags can be set by anyone with the compute.instanceAdmin role, potentially allowing unauthorized access. Service account-based rules use IAM identity, which is harder to spoof and provides better audit trails. Any VM running as a specific service account automatically gets the corresponding firewall rules.
Cloud NAT and Internet Access
In a Shared VPC, Cloud NAT is configured in the host project and provides outbound internet access for resources in private subnets. You create one Cloud NAT gateway per region, and it applies to all subnets in that region (or a specific set of subnets).
# Create a Cloud Router (required for Cloud NAT)
gcloud compute routers create cloud-router-usc1 \
--project network-host-prod \
--network shared-vpc-network \
--region us-central1
# Create Cloud NAT with automatic IP allocation
# Note: --max-ports-per-vm requires dynamic port allocation to be enabled
gcloud compute routers nats create cloud-nat-usc1 \
--project network-host-prod \
--router cloud-router-usc1 \
--region us-central1 \
--nat-all-subnet-ip-ranges \
--auto-allocate-nat-external-ips \
--enable-dynamic-port-allocation \
--min-ports-per-vm 2048 \
--max-ports-per-vm 4096 \
--enable-logging \
--log-filter ERRORS_ONLY
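A quick capacity check for those port settings. The figure of 64,512 usable source ports per NAT IP comes from Cloud NAT's port model (65,536 ports minus the first 1,024, which are reserved); this is a back-of-the-envelope sketch:

```shell
#!/usr/bin/env bash
# Cloud NAT capacity sketch: VMs supported per NAT IP at a given port minimum.
ports_per_ip=64512        # 65,536 total minus 1,024 reserved low ports
min_ports_per_vm=2048

vms_per_ip=$(( ports_per_ip / min_ports_per_vm ))
echo "One NAT IP supports ${vms_per_ip} VMs at ${min_ports_per_vm} ports each"
# -> One NAT IP supports 31 VMs at 2048 ports each
```

With --auto-allocate-nat-external-ips, Cloud NAT adds external IPs as the VM count grows; still, monitor Cloud NAT's allocation-failure metrics in Cloud Monitoring to catch port exhaustion early.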
# Or configure NAT for specific subnets only
gcloud compute routers nats create cloud-nat-usc1 \
--project network-host-prod \
--router cloud-router-usc1 \
--region us-central1 \
--nat-custom-subnet-ip-ranges snet-backend-usc1,snet-data-usc1 \
--auto-allocate-nat-external-ips
GKE on Shared VPC
Running GKE clusters in service projects on Shared VPC subnets is one of the most common patterns and requires careful configuration. The GKE cluster uses subnets and secondary ranges from the host project, but the cluster itself and its workloads live in the service project.
# Create a private GKE cluster on Shared VPC
gcloud container clusters create frontend-cluster \
--project app-frontend-prod \
--region us-central1 \
--network "projects/network-host-prod/global/networks/shared-vpc-network" \
--subnetwork "projects/network-host-prod/regions/us-central1/subnetworks/snet-frontend-usc1" \
--cluster-secondary-range-name pods \
--services-secondary-range-name services \
--enable-private-nodes \
--enable-private-endpoint \
--master-ipv4-cidr 172.16.0.0/28 \
--enable-master-authorized-networks \
--master-authorized-networks 10.10.0.0/24 \
--enable-ip-alias \
--workload-pool "app-frontend-prod.svc.id.goog" \
--max-pods-per-node 32 \
--num-nodes 3 \
--machine-type e2-standard-4 \
--release-channel regular
# Important: Create a firewall rule for GKE master to node communication
# The master CIDR needs to reach nodes on ports 443 (API) and 10250 (kubelet)
gcloud compute firewall-rules create allow-gke-master-to-nodes \
--project network-host-prod \
--network shared-vpc-network \
--direction INGRESS \
--action allow \
--rules tcp:443,tcp:10250 \
--source-ranges 172.16.0.0/28 \
--target-tags GKE_NODE_TAG \
--description "Allow GKE master to communicate with nodes"
# Note: --target-tags does not accept wildcards. Use the exact node tag,
# which has the form gke-CLUSTER_NAME-CLUSTER_HASH-node
GKE Shared VPC Requirements
GKE on Shared VPC has specific IAM requirements that are easy to miss. The GKE service account (service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com) needs roles/container.hostServiceAgentUser on the host project AND roles/compute.networkUser on the subnet. If you use Private Service Connect for the GKE master, the service networking agent also needs permissions. Missing any of these causes cluster creation to fail with cryptic permission errors.
Private Service Connect and Private Google Access
Private Google Access allows VMs without external IPs to reach Google APIs (Cloud Storage, BigQuery, Pub/Sub) over Google's internal network. It should be enabled on every subnet in a Shared VPC. Private Service Connect takes this further by providing private endpoints for Google APIs and your own published services.
# Enable Private Google Access on all subnets
for SUBNET in snet-frontend-usc1 snet-backend-usc1 snet-data-usc1 snet-shared-usc1; do
gcloud compute networks subnets update $SUBNET \
--project network-host-prod \
--region us-central1 \
--enable-private-ip-google-access
done
# Create a Private Service Connect endpoint for Google APIs
gcloud compute addresses create psc-google-apis \
--project network-host-prod \
--global \
--purpose PRIVATE_SERVICE_CONNECT \
--addresses 10.255.255.1 \
--network shared-vpc-network
# Note: PSC endpoint names for Google APIs bundles must be short and may only
# contain lowercase letters and numbers
gcloud compute forwarding-rules create pscgoogleapis \
--project network-host-prod \
--global \
--network shared-vpc-network \
--address psc-google-apis \
--target-google-apis-bundle all-apis
# Configure Cloud DNS to resolve *.googleapis.com to the PSC endpoint
gcloud dns managed-zones create googleapis-psc \
--project network-host-prod \
--dns-name "googleapis.com." \
--visibility private \
--networks shared-vpc-network \
--description "Private DNS for Google APIs via PSC"
gcloud dns record-sets create "*.googleapis.com." \
--project network-host-prod \
--zone googleapis-psc \
--type A \
--ttl 300 \
--rrdatas "10.255.255.1"
# Set up Private Service Access for managed services (Cloud SQL, Memorystore)
gcloud compute addresses create private-service-range \
--project network-host-prod \
--global \
--purpose VPC_PEERING \
--prefix-length 20 \
--network shared-vpc-network
gcloud services vpc-peerings connect \
--project network-host-prod \
--service servicenetworking.googleapis.com \
--ranges private-service-range \
--network shared-vpc-network
Monitoring and Troubleshooting
Shared VPC troubleshooting requires understanding that network resources are in the host project while compute resources are in service projects. The most common issues are missing IAM permissions, missing firewall rules, and incorrect subnet references.
# Verify connectivity between two VMs across service projects
gcloud compute ssh vm-frontend --project app-frontend-prod \
-- ping -c 3 BACKEND_VM_INTERNAL_IP
# Check if a firewall rule allows specific traffic
gcloud compute firewall-rules list \
--project network-host-prod \
--filter="network=shared-vpc-network" \
--format="table(name, direction, sourceRanges.list():label=SRC, allowed[].map().firewall_rule().list():label=ALLOW)"
# View effective firewall rules for the network (includes hierarchical policies)
gcloud compute networks get-effective-firewalls shared-vpc-network \
--project network-host-prod
# VPC Flow Logs analysis (in Cloud Logging)
# resource.type="gce_subnetwork"
# AND logName="projects/network-host-prod/logs/compute.googleapis.com%2Fvpc_flows"
# AND jsonPayload.connection.dest_ip="10.2.0.5"
# Check Shared VPC associations
gcloud compute shared-vpc list-associated-resources network-host-prod \
--format="table(id, type)"
# Verify subnet IAM bindings
gcloud compute networks subnets get-iam-policy snet-frontend-usc1 \
--project network-host-prod \
--region us-central1 \
--format=json
Use Connectivity Tests
GCP Network Intelligence Center provides Connectivity Tests that analyze your network configuration and determine if traffic between two endpoints will be allowed or denied. Run gcloud network-management connectivity-tests create to test paths between VMs, checking firewall rules, routes, and forwarding rules automatically. This saves hours of manual troubleshooting.
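For example, a hypothetical test between a frontend VM and a backend VM across service projects might look like this. The project, zone, and instance names are placeholders, not values from a real deployment:

```shell
# Create a Connectivity Test from a frontend VM to a backend VM on port 8080.
# Instance names, zones, and projects below are illustrative placeholders.
gcloud network-management connectivity-tests create frontend-to-backend \
    --project network-host-prod \
    --source-instance "projects/app-frontend-prod/zones/us-central1-a/instances/vm-frontend" \
    --destination-instance "projects/app-backend-prod/zones/us-central1-a/instances/vm-backend" \
    --destination-port 8080 \
    --protocol TCP

# Inspect the verdict (REACHABLE/UNREACHABLE) and the step-by-step trace,
# which names the specific firewall rule or route that blocks traffic
gcloud network-management connectivity-tests describe frontend-to-backend \
    --project network-host-prod
```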
Best Practices
One host project per environment. Use separate host projects for production and non-production environments. This prevents network changes in development from affecting production traffic.
Enable VPC Flow Logs on all subnets. Flow logs provide visibility into traffic patterns, help with troubleshooting, and are required for compliance in many frameworks. Set sampling to 50% for cost-effective monitoring.
Use hierarchical firewall policies. Define organization-wide security baselines (allow IAP, deny known malicious IPs, allow health checks) at the organization or folder level. Use VPC-level rules for network-specific controls.
Document IP address allocation. Maintain a central IPAM document or use GCP's built-in IP Address Management. Overlapping ranges cause routing conflicts that are difficult to resolve.
Limit network admin access. Only the network team should have compute.networkAdmin on the host project. Service project teams should only have compute.networkUser on their specific subnets.
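The IP allocation document mentioned above can also be checked mechanically for overlaps before a new range is provisioned. A minimal pure-shell sketch (the sample CIDRs are illustrative):

```shell
#!/usr/bin/env bash
# Detect overlapping IPv4 CIDR allocations using shell arithmetic only.

cidr_bounds() {            # print "start end" integers for a CIDR like 10.1.0.0/24
  local ip=${1%/*} prefix=${1#*/} a b c d
  IFS=. read -r a b c d <<< "$ip"
  local start=$(( (a << 24) + (b << 16) + (c << 8) + d ))
  echo "$start $(( start + (1 << (32 - prefix)) - 1 ))"
}

overlaps() {               # exit 0 if the two CIDRs share any address
  local s1 e1 s2 e2
  read -r s1 e1 <<< "$(cidr_bounds "$1")"
  read -r s2 e2 <<< "$(cidr_bounds "$2")"
  [ "$s1" -le "$e2" ] && [ "$s2" -le "$e1" ]
}

overlaps 10.1.0.0/24 10.1.0.128/25 && echo "overlap"    # prints "overlap"
overlaps 10.1.0.0/24 10.2.0.0/24   || echo "disjoint"   # prints "disjoint"
```

Running a check like this in CI against the IPAM file catches conflicts before they reach gcloud, where an overlapping range is rejected (or, worse, silently shadows a route).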
Key Takeaways
1. Shared VPC centralizes network management in a host project while service projects deploy resources into shared subnets.
2. Subnet-level IAM delegation controls which teams can deploy into which subnets.
3. GKE on Shared VPC requires specific IAM roles for the GKE service agent on the host project.
4. Hierarchical firewall policies at the organization level provide baseline security across all projects.
5. Service account-based firewall rules are more secure than network tag-based rules.
6. Enable Private Google Access on every subnet, and use Private Service Connect for private access to Google APIs and managed services.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.