
GCP Networking Deep Dive

Advanced networking with Cloud Interconnect, Private Service Connect, and Cloud NAT.

CloudToolStack Team · 30 min read · Published Feb 22, 2026


GCP Networking Architecture

Google Cloud's networking stack is fundamentally different from traditional cloud providers. It is built on the same infrastructure that powers Google Search, YouTube, and Gmail: a global software-defined network spanning more than 180 points of presence and connected by a private fiber backbone. Understanding this architecture is critical for designing low-latency, high-throughput systems.

Key architectural differences from other clouds: VPCs are global (not regional), load balancers use Anycast IPs for global distribution, and GCP offers two distinct network tiers (Premium and Standard) that control how traffic traverses between users and your infrastructure. These differences mean that networking patterns that work well on AWS or Azure may not be optimal on GCP, and vice versa.

GCP's networking is entirely software-defined. There are no physical appliances for load balancers, NAT gateways, or firewalls; everything is implemented in the network fabric. This means there are no single points of failure, no manual capacity planning for network components, and no throughput ceilings from hardware limitations.


Network Service Tiers

GCP is the only major cloud provider that offers a choice of network tiers. This decision significantly affects latency, throughput, and cost. The tier is selected per-resource (per external IP address or forwarding rule), not per-project, so you can mix tiers for different workloads.

| Aspect | Premium Tier | Standard Tier |
| --- | --- | --- |
| Routing | Traffic enters Google's network at the nearest edge POP | Traffic routes over the public internet to the VM's region |
| Latency | Lower (Google backbone from edge to data center) | Higher (public internet routing) |
| Global LB support | Yes (Anycast IP, single IP for all regions) | No (regional LBs only) |
| SLA | 99.99% for Global LB | 99.9% for Regional LB |
| Cost (egress) | Higher (~$0.085-0.12/GB to internet) | Lower (~$0.045-0.08/GB to internet) |
| Best for | Production, global audiences | Dev/test, regional-only audiences |

How Premium Tier Works

When a user in Tokyo accesses your service in us-central1 on Premium Tier, the traffic enters Google's network at the Tokyo POP (point of presence) and traverses Google's private fiber backbone all the way to the data center. This is significantly faster and more reliable than Standard Tier, where the traffic would traverse the public internet across multiple ISP networks before reaching Google's network in the us-central1 region.

The latency difference depends on the user's location relative to the nearest Google POP and on the quality of the intervening internet paths; for intercontinental traffic it can reach tens to hundreds of milliseconds. For global applications serving users across continents, this difference is significant.

Premium Tier Is the Default

Premium Tier is the default for all resources. To use Standard Tier, you must explicitly select it when creating external IPs or forwarding rules. For latency-sensitive production workloads serving global users, Premium Tier typically justifies the additional cost. Standard Tier is best for non-production environments or workloads serving a single geographic region.
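As a sketch of per-resource tier selection (the resource names here are hypothetical), Standard Tier is chosen when reserving the external IP, and a project-wide default tier can also be set:

```shell
# Reserve a regional external IP on Standard Tier (names are illustrative)
gcloud compute addresses create dev-ip \
  --region=us-central1 \
  --network-tier=STANDARD

# Optionally change the project-wide default tier for new resources
gcloud compute project-info update \
  --default-network-tier=STANDARD
```

Note that global forwarding rules and global load balancers require Premium Tier, which is why the tier comparison above lists global LB support as Premium-only.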

Load Balancing Deep Dive

GCP offers a sophisticated load balancing system with multiple proxy and passthrough modes. Unlike traditional hardware load balancers, GCP LBs are software-defined and scale automatically. They handle millions of requests per second without pre-warming or capacity planning. Choosing the right type depends on your protocol, scope, and backend type.

| Load Balancer Type | Scope | Protocol | Proxy/Passthrough | Use Case |
| --- | --- | --- | --- | --- |
| External Application LB | Global | HTTP/S, HTTP/2, gRPC | Proxy (Envoy-based) | Web apps, APIs, microservices |
| External Network LB (proxy) | Global | TCP, SSL | Proxy | Non-HTTP TCP services, SSL offloading |
| External Network LB (passthrough) | Regional | TCP, UDP | Passthrough | Game servers, UDP services, source IP preservation |
| Internal Application LB | Regional / Cross-region | HTTP/S, HTTP/2, gRPC | Proxy (Envoy-based) | Internal microservices, service mesh |
| Internal Network LB (passthrough) | Regional | TCP, UDP | Passthrough | Internal TCP/UDP backends, NFS |

External Application Load Balancer

The External Application Load Balancer (formerly HTTP(S) LB) is the most commonly used load balancer in GCP. It is built on the open source Envoy proxy and provides Layer 7 features including:

  • URL-based routing (route different paths to different backends)
  • Host-based routing (route different domains to different backends)
  • Managed SSL certificates (automatic provisioning and renewal)
  • Cloud CDN integration for content caching
  • Cloud Armor integration for DDoS protection and WAF
  • WebSocket and gRPC support
  • Connection draining for graceful backend removal
  • Custom request/response headers
Create a global external Application Load Balancer
# 1. Create a health check
gcloud compute health-checks create http my-hc \
  --port=8080 \
  --request-path=/healthz \
  --check-interval=10s \
  --healthy-threshold=2 \
  --unhealthy-threshold=3

# 2. Create a backend service
gcloud compute backend-services create my-backend \
  --protocol=HTTP \
  --port-name=http \
  --health-checks=my-hc \
  --global \
  --enable-cdn \
  --connection-draining-timeout=300

# 3. Add serverless NEG (for Cloud Run backend)
gcloud compute network-endpoint-groups create my-serverless-neg \
  --region=us-central1 \
  --network-endpoint-type=serverless \
  --cloud-run-service=my-api

gcloud compute backend-services add-backend my-backend \
  --global \
  --network-endpoint-group=my-serverless-neg \
  --network-endpoint-group-region=us-central1

# 4. Create URL map with path-based routing
gcloud compute url-maps create my-url-map \
  --default-service=my-backend

gcloud compute url-maps add-path-matcher my-url-map \
  --path-matcher-name=api-paths \
  --default-service=my-backend \
  --path-rules="/api/*=my-api-backend,/static/*=my-cdn-backend"

# 5. Create managed SSL certificate
gcloud compute ssl-certificates create my-cert \
  --domains=api.example.com,www.example.com \
  --global

# 6. Create target HTTPS proxy
gcloud compute target-https-proxies create my-https-proxy \
  --url-map=my-url-map \
  --ssl-certificates=my-cert

# 7. Create forwarding rule (Anycast IP)
gcloud compute forwarding-rules create my-https-fr \
  --global \
  --target-https-proxy=my-https-proxy \
  --ports=443

# 8. Create HTTP-to-HTTPS redirect
gcloud compute url-maps import http-redirect \
  --source=/dev/stdin <<EOF
name: http-redirect
defaultUrlRedirect:
  httpsRedirect: true
  redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
EOF

gcloud compute target-http-proxies create http-redirect-proxy \
  --url-map=http-redirect

gcloud compute forwarding-rules create http-redirect-fr \
  --global \
  --target-http-proxy=http-redirect-proxy \
  --ports=80

Cloud Armor: DDoS Protection and WAF

Cloud Armor provides DDoS protection and Web Application Firewall (WAF) capabilities for applications behind the External Application Load Balancer. It is the GCP equivalent of AWS WAF + Shield.

Configure Cloud Armor security policies
# Create a Cloud Armor security policy
gcloud compute security-policies create my-security-policy \
  --description="Production WAF policy"

# Block traffic from specific countries
gcloud compute security-policies rules create 1000 \
  --security-policy=my-security-policy \
  --expression="origin.region_code == 'CN' || origin.region_code == 'RU'" \
  --action=deny-403 \
  --description="Block traffic from specific regions"

# Rate limiting: max 100 requests per minute per IP
gcloud compute security-policies rules create 2000 \
  --security-policy=my-security-policy \
  --expression="true" \
  --action=throttle \
  --rate-limit-threshold-count=100 \
  --rate-limit-threshold-interval-sec=60 \
  --conform-action=allow \
  --exceed-action=deny-429 \
  --enforce-on-key=IP

# OWASP Top 10 protection (preconfigured WAF rules)
gcloud compute security-policies rules create 3000 \
  --security-policy=my-security-policy \
  --expression="evaluatePreconfiguredExpr('sqli-v33-stable')" \
  --action=deny-403 \
  --description="Block SQL injection attacks"

gcloud compute security-policies rules create 3001 \
  --security-policy=my-security-policy \
  --expression="evaluatePreconfiguredExpr('xss-v33-stable')" \
  --action=deny-403 \
  --description="Block XSS attacks"

# Attach the policy to a backend service
gcloud compute backend-services update my-backend \
  --security-policy=my-security-policy \
  --global

Cloud Armor Adaptive Protection

Cloud Armor offers Adaptive Protection, which uses machine learning to detect and mitigate Layer 7 DDoS attacks automatically. It analyzes traffic patterns, identifies anomalies, and suggests protection rules. Enable it on production security policies with --enable-layer7-ddos-defense. Adaptive Protection is included with Cloud Armor Enterprise tier.
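Enabling it on the policy created earlier is a one-flag update (a sketch, reusing the my-security-policy name from above):

```shell
# Turn on Adaptive Protection (ML-based L7 DDoS detection) for the policy
gcloud compute security-policies update my-security-policy \
  --enable-layer7-ddos-defense
```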

Private Connectivity Options

GCP provides multiple paths for private connectivity to managed services, on-premises networks, and other clouds. Each has different latency, bandwidth, and cost characteristics. Choosing the right connectivity option is one of the most impactful networking decisions for hybrid and multi-cloud architectures.

Private Service Connect (PSC)

PSC creates private endpoints in your VPC that route to Google-managed services (Cloud SQL, GKE, Vertex AI) or to your own services published in another VPC. Traffic stays entirely within Google's network and uses private IP addresses from your VPC's CIDR range. PSC is the most secure and flexible private connectivity option.

PSC supports two modes:

  • PSC for Google APIs: Creates a single endpoint in your VPC that provides access to all Google APIs (or a specific bundle). Replaces the need for Private Google Access.
  • PSC for Published Services: Allows you to publish a service in one VPC and consume it from another VPC via a private endpoint. Works across projects and organizations.
Set up Private Service Connect
# PSC for Google APIs
gcloud compute addresses create psc-google-apis \
  --global \
  --purpose=PRIVATE_SERVICE_CONNECT \
  --addresses=10.10.34.1 \
  --network=prod-vpc

gcloud compute forwarding-rules create psc-apis-fr \
  --global \
  --network=prod-vpc \
  --address=psc-google-apis \
  --target-google-apis-bundle=all-apis

# PSC for a published service (consumer side)
gcloud compute addresses create my-psc-address \
  --region=us-central1 \
  --subnet=prod-us-central1 \
  --addresses=10.10.0.100

gcloud compute forwarding-rules create my-psc-endpoint \
  --region=us-central1 \
  --network=prod-vpc \
  --address=my-psc-address \
  --target-service-attachment=projects/provider-project/regions/us-central1/serviceAttachments/my-service

Cloud Interconnect

Dedicated Interconnect provides a physical, private connection between your data center and Google's network. It supports 10 Gbps or 100 Gbps circuits and offers an SLA of 99.99% with redundant connections. Partner Interconnect is available through service providers for lower bandwidth needs (50 Mbps to 50 Gbps).

FeatureDedicated InterconnectPartner Interconnect
Bandwidth10 or 100 Gbps per link50 Mbps to 50 Gbps
Physical connectionDirect to Google at colocation facilityThrough a service provider
SLA99.9% (single) / 99.99% (redundant)99.9% (single) / 99.99% (redundant)
Setup timeWeeks to monthsDays to weeks
CostPort fee + egress (discounted)Provider fee + egress (discounted)
Best forHigh-bandwidth hybrid workloadsSmaller bandwidth needs, remote locations

Cloud VPN

HA VPN provides encrypted tunnels over the public internet with a 99.99% SLA (tunnels must be configured on both gateway interfaces; the 4-tunnel topology shown below covers peers that also have two devices). Each tunnel supports up to 3 Gbps. VPN is the quickest path to hybrid connectivity but is limited by internet bandwidth and latency.

Create HA VPN to on-premises
# Create HA VPN gateway
gcloud compute vpn-gateways create my-ha-vpn \
  --network=prod-vpc \
  --region=us-central1

# Create external VPN gateway (on-prem side)
gcloud compute external-vpn-gateways create on-prem-gw \
  --interfaces=0=203.0.113.1,1=203.0.113.2

# Create Cloud Router with BGP ASN
gcloud compute routers create vpn-router \
  --network=prod-vpc \
  --region=us-central1 \
  --asn=65001

# Create VPN tunnels (4 tunnels for full HA)
for i in 0 1; do
  for j in 0 1; do
    gcloud compute vpn-tunnels create tunnel-${i}-${j} \
      --vpn-gateway=my-ha-vpn \
      --vpn-gateway-interface=${i} \
      --peer-external-gateway=on-prem-gw \
      --peer-external-gateway-interface=${j} \
      --shared-secret=MY_SHARED_SECRET \
      --router=vpn-router \
      --region=us-central1 \
      --ike-version=2
  done
done

Interconnect vs VPN Decision

If your hybrid workload requires more than 3 Gbps of sustained throughput or latency below 10ms to on-premises, you need Dedicated Interconnect. VPN tunnels are capped at 3 Gbps each and add encryption overhead. For most organizations, start with HA VPN and migrate to Interconnect when bandwidth requirements grow beyond what VPN can handle. You can also use both: VPN as a backup for Interconnect failover.
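The 3 Gbps-per-tunnel cap translates into a quick sizing rule. A back-of-the-envelope sketch (the 10 Gbps target is a made-up example):

```shell
# Ceiling division: how many 3 Gbps tunnels cover the required throughput?
required_gbps=10      # hypothetical sustained throughput target
per_tunnel_gbps=3     # per-tunnel limit for Cloud VPN
tunnels_for_bandwidth=$(( (required_gbps + per_tunnel_gbps - 1) / per_tunnel_gbps ))
echo "Tunnels needed for ${required_gbps} Gbps: ${tunnels_for_bandwidth}"
```

When the answer is four or more tunnels of sustained demand, traffic must also be spread across tunnels via ECMP, which is another signal that Interconnect is the better fit.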


Cloud NAT Deep Dive

Cloud NAT is a managed, software-defined NAT gateway that provides outbound internet access for VMs without external IPs. Unlike traditional NAT appliances, Cloud NAT is implemented in the network fabric, so there is no VM or appliance to manage, patch, or scale. It provides high availability with no single point of failure.

NAT Configuration and Sizing

Each NAT IP address provides 64,512 source ports per protocol, supporting roughly 64,000 concurrent connections. For workloads with high connection counts (like web scrapers or API clients making many concurrent requests), you need to plan your NAT IP and port allocation carefully.
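A rough sizing sketch for that planning (the fleet size and per-VM reservation are hypothetical inputs; 64,512 is the usable source-port count per NAT IP):

```shell
# Ports reserved = VMs x min ports per VM; NAT IPs = ceiling over usable ports
vms=500               # hypothetical fleet size
ports_per_vm=2048     # matches --min-ports-per-vm in the configuration below
ports_per_ip=64512    # usable source ports per NAT IP, per protocol
total_ports=$(( vms * ports_per_vm ))
nat_ips=$(( (total_ports + ports_per_ip - 1) / ports_per_ip ))
echo "Reserved ports: ${total_ports}, NAT IPs required: ${nat_ips}"
```

Dynamic port allocation relaxes this math by letting quiet VMs reserve fewer ports, but the ceiling calculation is still the right way to size the static IP pool for the worst case.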

Advanced Cloud NAT configuration
# Create Cloud Router
gcloud compute routers create prod-router \
  --network=prod-vpc \
  --region=us-central1

# Create Cloud NAT with dynamic port allocation
gcloud compute routers nats create prod-nat \
  --router=prod-router \
  --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges \
  --enable-logging \
  --log-filter=ERRORS_ONLY \
  --min-ports-per-vm=2048 \
  --enable-dynamic-port-allocation \
  --max-ports-per-vm=65536 \
  --tcp-established-idle-timeout=1200 \
  --tcp-transitory-idle-timeout=30 \
  --udp-idle-timeout=30

# Use static IPs for allowlisting at third parties
gcloud compute addresses create nat-ip-1 nat-ip-2 nat-ip-3 \
  --region=us-central1

gcloud compute routers nats update prod-nat \
  --router=prod-router \
  --region=us-central1 \
  --nat-external-ip-pool=nat-ip-1,nat-ip-2,nat-ip-3

# Monitor NAT port allocation
gcloud compute routers get-nat-mapping-info prod-router \
  --region=us-central1

NAT Port Exhaustion Is a Common Problem

NAT port exhaustion is one of the most common networking issues in GCP, especially with GKE clusters where pod density is high. Symptoms include intermittent connection timeouts and increased latency. Monitor the compute.googleapis.com/nat/port_usage metric and set alerts at 80% utilization. If you see port exhaustion, enable dynamic port allocation, increase min-ports-per-vm, add more NAT IPs, or reduce idle connection timeouts.

Cloud DNS Advanced Configuration

Cloud DNS is a managed, authoritative DNS service with 100% availability SLA. Beyond basic DNS resolution, it provides several advanced features for enterprise networking:

DNS Features Overview

  • DNS Peering: Forward DNS queries from one VPC to another, enabling cross-project name resolution without exposing records publicly.
  • Private Zones: Create DNS records visible only within specified VPC networks. Essential for internal service discovery.
  • Response Policies: Override DNS responses for specific domains. Useful for blocking malicious domains or redirecting internal services.
  • DNSSEC: Sign your zones to prevent DNS spoofing. Cloud DNS supports both zone signing and validation.
  • DNS Forwarding: Forward queries for specific zones to on-premises DNS servers for hybrid DNS resolution.
Internal DNS for service discovery
# Create a private zone for internal service discovery
gcloud dns managed-zones create internal-zone \
  --dns-name="internal.example.com." \
  --visibility=private \
  --networks=projects/my-project/global/networks/prod-vpc

# Add service records (assign each service a fixed placeholder IP)
i=10
for svc in api db cache worker queue; do
  gcloud dns record-sets create $svc.internal.example.com. \
    --zone=internal-zone \
    --type=A \
    --ttl=60 \
    --rrdatas="10.10.0.$i"
  i=$((i + 1))
done

# Create SRV records for service port discovery
gcloud dns record-sets create _grpc._tcp.api.internal.example.com. \
  --zone=internal-zone \
  --type=SRV \
  --ttl=60 \
  --rrdatas="10 0 8080 api.internal.example.com."
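The forwarding and peering features from the list above can be sketched with two more zones (the resolver IPs and project names are hypothetical):

```shell
# Forwarding zone: send corp.example.com queries to on-prem DNS servers
gcloud dns managed-zones create onprem-forwarding \
  --dns-name="corp.example.com." \
  --visibility=private \
  --networks=projects/my-project/global/networks/prod-vpc \
  --forwarding-targets=192.168.10.5,192.168.10.6

# Peering zone: let a consumer VPC resolve internal.example.com via prod-vpc
gcloud dns managed-zones create peering-zone \
  --dns-name="internal.example.com." \
  --visibility=private \
  --networks=projects/consumer-project/global/networks/consumer-vpc \
  --target-network=projects/my-project/global/networks/prod-vpc \
  --target-project=my-project
```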

Network Intelligence Center

Network Intelligence Center is a suite of tools for monitoring, verifying, and troubleshooting network configurations. It provides visibility that would otherwise require manual packet capture and analysis.

Connectivity Tests

Connectivity Tests simulate a packet from source to destination and show exactly where it would be dropped (firewall, route, NAT). This is invaluable for debugging connectivity issues without needing SSH access to VMs.

Run connectivity tests
# Test connectivity between two VMs
gcloud network-management connectivity-tests create test-web-to-db \
  --source-instance=projects/my-project/zones/us-central1-a/instances/web-server \
  --destination-instance=projects/my-project/zones/us-central1-a/instances/db-server \
  --destination-port=5432 \
  --protocol=TCP

# Check results
gcloud network-management connectivity-tests describe test-web-to-db \
  --format="yaml(reachabilityDetails)"

# Test external connectivity
gcloud network-management connectivity-tests create test-internet \
  --source-instance=projects/my-project/zones/us-central1-a/instances/web-server \
  --destination-ip-address=8.8.8.8 \
  --destination-port=443 \
  --protocol=TCP

Firewall Insights

Firewall Insights identifies overly permissive firewall rules, shadowed rules (rules that never match because a higher-priority rule already matches), and rules with deny-hits that may indicate an attack.

Query firewall insights
# Find shadowed (never-matching) firewall rules
gcloud recommender insights list \
  --insight-type=google.compute.firewall.Insight \
  --project=my-project \
  --location=global \
  --filter="insightSubtype=SHADOWED_RULE" \
  --format="table(content.firewallRuleDetails.displayName, stateInfo.state)"

# Find overly permissive rules
gcloud recommender insights list \
  --insight-type=google.compute.firewall.Insight \
  --project=my-project \
  --location=global \
  --filter="insightSubtype=OVERLY_PERMISSIVE" \
  --format="table(content.firewallRuleDetails.displayName)"

# Hit counts are surfaced by Firewall Insights (requires firewall rules
# logging); the list command below shows configuration only
gcloud compute firewall-rules list \
  --project=my-project \
  --format="table(name, direction, priority, disabled)"

Performance Dashboard

The Performance Dashboard monitors packet loss and latency between your GCP resources and across regions. It helps identify network degradation before it impacts users and provides historical data for capacity planning.

Network Topology

Network Topology visualizes your VPC topology, traffic flows, and bandwidth utilization in real time. It shows how traffic flows between regions, zones, subnets, and individual VMs, making it easy to identify unexpected cross-region traffic that is adding cost and latency.

Use Connectivity Tests in CI/CD

Integrate Connectivity Tests into your Terraform CI/CD pipeline. After applying network changes, automatically run connectivity tests for critical paths (web to database, services to Google APIs, VPN to on-premises). If any test fails, the pipeline can alert the team before the change impacts production traffic.
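A minimal sketch of such a gate, assuming the describe output contains `result: REACHABLE` under `reachabilityDetails` (the gcloud call is left commented out so the gating logic stands alone):

```shell
# Fail the pipeline step unless the connectivity test result is REACHABLE
check_reachable() {
  local details="$1"
  if echo "$details" | grep -q "result: REACHABLE"; then
    echo "PASS"
  else
    echo "FAIL"
    return 1
  fi
}

# In CI, capture the real output instead of the sample below:
# details="$(gcloud network-management connectivity-tests describe test-web-to-db \
#   --format='yaml(reachabilityDetails)')"
sample="reachabilityDetails:
  result: REACHABLE"
check_reachable "$sample"
```

The nonzero exit status on failure is what makes this usable as a pipeline gate: most CI systems abort the job when a step returns 1.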

Network Security Best Practices

Network security in GCP involves multiple layers of defense. No single control is sufficient; you need defense in depth with overlapping controls.

Defense in Depth Layers

  1. Organization policies: Block external IPs on VMs, restrict allowed regions, require shielded VMs.
  2. Hierarchical firewall policies: Organization-wide rules that enforce baseline security regardless of project-level configurations.
  3. VPC firewall rules: Network-specific rules targeting service accounts (not tags) for production workloads.
  4. Cloud Armor: DDoS protection and WAF for internet-facing services behind the load balancer.
  5. VPC Service Controls: Data exfiltration prevention for sensitive services.
  6. Private connectivity: Cloud NAT, Private Google Access, and Private Service Connect to minimize internet exposure.
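Layers 1 and 2 can be sketched as follows (the organization ID and policy names are placeholders, and exact flags can vary by gcloud release):

```shell
# Layer 1: org policy blocking external IPs on VMs across the organization
gcloud resource-manager org-policies enable-enforce \
  compute.vmExternalIpAccess \
  --organization=123456789012

# Layer 2: hierarchical firewall policy with a baseline deny for RDP
gcloud compute firewall-policies create \
  --short-name=org-baseline \
  --organization=123456789012

gcloud compute firewall-policies rules create 1000 \
  --firewall-policy=org-baseline \
  --organization=123456789012 \
  --action=deny \
  --direction=INGRESS \
  --layer4-configs=tcp:3389 \
  --src-ip-ranges=0.0.0.0/0
```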

Multi-Cloud Networking

For organizations with workloads spanning GCP, AWS, and Azure, network connectivity between clouds requires careful design. GCP provides several options for cross-cloud connectivity:

  • HA VPN to AWS/Azure: The simplest option. Create HA VPN connections to AWS VPN Gateway or Azure VPN Gateway. Each tunnel supports up to 3 Gbps.
  • Dedicated/Partner Interconnect: For higher bandwidth cross-cloud connectivity, use Interconnect to a colocation facility that also hosts connections to other clouds.
  • Cross-Cloud Interconnect: GCP offers direct Interconnect connections to AWS and Azure in specific locations, providing the lowest latency cross-cloud connectivity.
  • Network Connectivity Center: Use NCC as a hub for managing all connectivity (VPCs, on-premises, other clouds) from a single control plane.
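As a sketch of the NCC hub-and-spoke model (the hub and spoke names are hypothetical; the tunnels mirror the HA VPN example earlier):

```shell
# Create the hub that anchors all spokes
gcloud network-connectivity hubs create multi-cloud-hub \
  --description="Hub for VPN spokes to other clouds and on-prem"

# Attach a pair of HA VPN tunnels as a spoke
gcloud network-connectivity spokes linked-vpn-tunnels create aws-spoke \
  --hub=multi-cloud-hub \
  --region=us-central1 \
  --vpn-tunnels=tunnel-0-0,tunnel-0-1
```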

Network Audit Checklist

Periodically review your network configuration against this checklist to identify security gaps, performance issues, and cost optimization opportunities.

| Category | Check | Frequency |
| --- | --- | --- |
| Security | Review Firewall Insights for shadowed and overly permissive rules | Monthly |
| Security | Verify VPC Flow Logs are enabled on all production subnets | Quarterly |
| Security | Run Connectivity Tests for all critical paths | After changes |
| Performance | Check Network Topology for unexpected cross-region traffic | Monthly |
| Performance | Monitor Cloud NAT port utilization and latency | Weekly |
| Cost | Review egress costs by destination and service | Monthly |
| Cost | Verify Standard Tier is used for non-production resources | Quarterly |
| Availability | Test VPN/Interconnect failover | Quarterly |

Automate Network Audits

Schedule monthly Cloud Scheduler jobs that trigger Cloud Functions to run connectivity tests, check firewall insights, and export flow log summaries. Send automated reports to the security and network teams. This creates a regular cadence for network hygiene without requiring manual effort. Use the Network Intelligence Center API for programmatic access to all diagnostic data.


Key Takeaways

  1. Cloud Interconnect provides dedicated 10/100 Gbps connectivity to on-premises networks.
  2. Private Service Connect creates private endpoints for Google APIs and third-party services.
  3. Cloud NAT provides outbound internet access without assigning external IPs to VMs.
  4. Cloud Armor provides WAF and DDoS protection for external HTTP(S) load balancers.
  5. Network Intelligence Center provides monitoring, verification, and troubleshooting tools.
  6. Premium Tier uses Google's global backbone; Standard Tier hands traffic to the public internet near the source region.

Frequently Asked Questions

What is Cloud Interconnect?
Cloud Interconnect provides high-bandwidth, low-latency private connectivity between on-premises and GCP. Dedicated Interconnect offers 10/100 Gbps physical connections. Partner Interconnect offers 50 Mbps to 50 Gbps through service providers for more locations and flexibility.
What is Private Service Connect?
Private Service Connect creates private endpoints in your VPC for Google APIs, Google services, and third-party services. Traffic stays on Google's network and never traverses the public internet. It provides more control than Private Google Access.
How does Cloud NAT work?
Cloud NAT provides outbound internet access for VMs without external IPs. It is a software-defined service (no NAT instances to manage), supports automatic IP allocation, and logs all connections. It does not support inbound connections, so use load balancers for that.
What is the difference between premium and standard network tiers?
Premium Tier routes traffic through Google's global backbone from the edge closest to the user, providing lower latency and more consistent throughput. Standard Tier hands traffic to the public internet near the source region. Premium is the default and recommended for production; Standard saves roughly 25% on egress.
How do I protect against DDoS attacks on GCP?
Cloud Armor provides L7 DDoS protection and WAF rules for HTTP(S) load balancers. GCP infrastructure automatically absorbs L3/L4 DDoS attacks. Cloud Armor Managed Protection Plus (now Cloud Armor Enterprise) adds Adaptive Protection, DDoS response support, and cost protection.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.