Cloud CDN & Load Balancing
Configure GCP Cloud CDN and Cloud Load Balancing, including HTTP(S) load balancers, Cloud Armor WAF, caching, SSL/TLS, and backend services.
Prerequisites
- Understanding of HTTP and networking fundamentals
- Familiarity with GCP VPC networking
- Experience with DNS configuration
GCP Content Delivery & Load Balancing
Google Cloud's load balancing and content delivery infrastructure sits on the same global network that powers Google Search, YouTube, and Gmail. This network consists of over 180 points of presence (PoPs) worldwide, connected by a private fiber backbone that avoids the public internet entirely. When you use GCP's load balancers and Cloud CDN, your traffic rides on this premium network, resulting in lower latency, higher throughput, and better reliability than architectures that depend on public internet routing.
The GCP load balancing and CDN stack includes several components that work together:
- Cloud Load Balancing: A fully distributed, software-defined, managed load balancing service. Supports HTTP(S), TCP, SSL, and UDP protocols with global, regional, internal, and external configurations.
- Cloud CDN: A content delivery network that caches HTTP responses at Google's edge locations. Operates as a layer on top of the HTTP(S) Load Balancer.
- Cloud Armor: A WAF (Web Application Firewall) and DDoS protection service integrated with HTTP(S) load balancers. Provides IP allowlisting/denylisting, geographic restrictions, and OWASP Top 10 protection.
- SSL/TLS Certificates: Google-managed or self-managed certificates for HTTPS termination at the load balancer.
| Load Balancer Type | Scope | Protocol | Use Case |
|---|---|---|---|
| Global external HTTP(S) | Global | HTTP/HTTPS/HTTP2 | Web applications, APIs, static sites |
| Global external SSL Proxy | Global | SSL/TLS | Non-HTTP SSL traffic (databases, custom protocols) |
| Global external TCP Proxy | Global | TCP | Non-SSL TCP traffic without content inspection |
| Regional external HTTP(S) | Regional | HTTP/HTTPS | Regional deployments, compliance requirements |
| Regional external network | Regional | TCP/UDP | UDP traffic, non-proxied TCP, IP protocol forwarding |
| Internal HTTP(S) | Regional | HTTP/HTTPS | Internal microservice routing with URL-based routing |
| Internal TCP/UDP | Regional | TCP/UDP | Internal services, database connections |
| Cross-region internal | Global | HTTP/HTTPS/TCP | Multi-region internal service mesh |
GCP Load Balancers Are Not Instances
Unlike traditional load balancers (like AWS ALB or NGINX), GCP load balancers are not discrete instances or appliances. They are a fully distributed, software-defined networking feature built into Google's global infrastructure. This means there is no single point of failure, no capacity to provision, no warm-up period, and no practical limit on concurrent connections. The load balancer scales automatically with your traffic from zero to millions of requests per second without any configuration changes.
Cloud Load Balancing Architecture
The global external HTTP(S) load balancer is the most commonly used configuration and serves as the entry point for most web-facing applications on GCP. It consists of several interrelated resources:
- Forwarding rule: Maps an external IP address and port to a target proxy. This is the "front door" of your load balancer.
- Target proxy: Terminates HTTP(S) connections and references a URL map. For HTTPS, it also holds the SSL certificate.
- URL map: Routes requests to different backend services based on the host header and URL path. Acts as a reverse proxy configuration.
- Backend service: Defines a group of backends (instance groups, NEGs, or Cloud Run services), along with health check configuration, session affinity, and load balancing scheme.
- Health check: Probes backends to determine their readiness to receive traffic. Unhealthy backends are automatically removed from rotation.
- Backend (instance group or NEG): The actual compute resources that serve traffic. Can be managed instance groups, unmanaged instance groups, or network endpoint groups.
Request flow: Client → Anycast IP → Nearest Google PoP → Forwarding Rule → Target Proxy (SSL termination) → URL Map (routing decision) → Backend Service (load balancing) → Health-checked backend instance.
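The routing step in that flow can be sketched in plain code. This is an illustrative model of URL map semantics, not GCP's implementation; the backend names (`web-backend`, `api-v1-backend`, `api-v2-backend`) are the hypothetical ones used later in this guide:

```python
# Illustrative sketch of URL map routing: a host rule selects a path matcher,
# then the longest matching path prefix wins; anything unmatched falls through
# to the default service.
URL_MAP = {
    "default_service": "web-backend",
    "host_rules": {
        "api.example.com": {
            "default_service": "web-backend",
            "path_rules": {
                "/v1/": "api-v1-backend",
                "/v2/": "api-v2-backend",
            },
        },
    },
}

def route(host: str, path: str) -> str:
    """Return the backend service name for a request."""
    matcher = URL_MAP["host_rules"].get(host)
    if matcher is None:
        return URL_MAP["default_service"]
    # Longest-prefix match over the path rules.
    best = max(
        (prefix for prefix in matcher["path_rules"] if path.startswith(prefix)),
        key=len,
        default=None,
    )
    return matcher["path_rules"][best] if best else matcher["default_service"]

print(route("api.example.com", "/v1/users"))    # api-v1-backend
print(route("www.example.com", "/index.html"))  # web-backend
```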
HTTP(S) Load Balancer Configuration
Setting up a global external HTTP(S) load balancer involves creating each component in the architecture chain. While this can be done via the Cloud Console wizard, using gcloud or Terraform provides reproducible, version-controlled configurations.
# Step 1: Create a health check
gcloud compute health-checks create http web-health-check \
--port=80 \
--request-path=/health \
--check-interval=10s \
--timeout=5s \
--healthy-threshold=2 \
--unhealthy-threshold=3
# Step 2: Create a backend service
gcloud compute backend-services create web-backend \
--protocol=HTTP \
--port-name=http \
--health-checks=web-health-check \
--global \
--enable-cdn \
--connection-draining-timeout=30 \
--load-balancing-scheme=EXTERNAL_MANAGED \
--locality-lb-policy=ROUND_ROBIN \
--enable-logging \
--logging-sample-rate=1.0
# Step 3: Add backends to the backend service
# (Using a managed instance group)
gcloud compute backend-services add-backend web-backend \
--instance-group=web-instances \
--instance-group-zone=us-central1-a \
--balancing-mode=UTILIZATION \
--max-utilization=0.8 \
--capacity-scaler=1.0 \
--global
# Add a Cloud Run NEG as a backend
# (a serverless NEG cannot share a backend service with instance groups,
# so it gets its own backend service)
gcloud compute network-endpoint-groups create web-serverless-neg \
--region=us-central1 \
--network-endpoint-type=serverless \
--cloud-run-service=my-web-app
gcloud compute backend-services create web-serverless-backend \
--load-balancing-scheme=EXTERNAL_MANAGED \
--global
gcloud compute backend-services add-backend web-serverless-backend \
--network-endpoint-group=web-serverless-neg \
--network-endpoint-group-region=us-central1 \
--global
# Step 4: Create a URL map
gcloud compute url-maps create web-url-map \
--default-service=web-backend
# Add path-based routing rules
gcloud compute url-maps add-path-matcher web-url-map \
--path-matcher-name=api-matcher \
--default-service=web-backend \
--new-hosts=api.example.com \
--path-rules="/v1/*=api-v1-backend,/v2/*=api-v2-backend"
# Step 5: Reserve a global static IP address
gcloud compute addresses create web-lb-ip \
--ip-version=IPV4 \
--global
# Step 6: Create an SSL certificate (Google-managed)
gcloud compute ssl-certificates create web-cert \
--domains=example.com,www.example.com,api.example.com \
--global
# Step 7: Create the HTTPS target proxy
gcloud compute target-https-proxies create web-https-proxy \
--url-map=web-url-map \
--ssl-certificates=web-cert \
--global
# Step 8: Create the forwarding rule
gcloud compute forwarding-rules create web-https-forwarding \
--address=web-lb-ip \
--target-https-proxy=web-https-proxy \
--ports=443 \
--global \
--load-balancing-scheme=EXTERNAL_MANAGED
# Step 9: Create HTTP-to-HTTPS redirect
# (gcloud has no single create flag for a default redirect;
# import the redirect URL map as YAML instead)
gcloud compute url-maps import http-redirect --global --source=/dev/stdin <<'EOF'
name: http-redirect
defaultUrlRedirect:
  httpsRedirect: true
  redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
EOF
gcloud compute target-http-proxies create http-redirect-proxy \
--url-map=http-redirect \
--global
gcloud compute forwarding-rules create http-redirect-forwarding \
--address=web-lb-ip \
--target-http-proxy=http-redirect-proxy \
--ports=80 \
--global \
--load-balancing-scheme=EXTERNAL_MANAGED
# Get the load balancer IP
gcloud compute addresses describe web-lb-ip --global --format='value(address)'

Google-Managed SSL Certificate Provisioning
Google-managed SSL certificates require your DNS to be pointing to the load balancer's IP address before the certificate can be provisioned. Certificate provisioning can take up to 60 minutes (sometimes longer for new domains). Check the status with gcloud compute ssl-certificates describe web-cert --global. The domainStatus field shows PROVISIONING while in progress and ACTIVE when ready. If provisioning fails, verify your DNS A/AAAA records and ensure there are no CAA records blocking Google's certificate authority.
Internal & Regional Load Balancers
Not all load balancers need to be internet-facing. GCP provides internal load balancers for routing traffic between services within your VPC, and regional load balancers for traffic that should remain within a specific geographic area. Internal load balancers are essential for microservice architectures where services communicate over private networks.
Internal load balancer types and their use cases:
| Type | Protocol | Key Feature | Use Case |
|---|---|---|---|
| Internal HTTP(S) | HTTP/HTTPS/HTTP2 | URL-based routing, TLS termination | Internal APIs, service mesh ingress |
| Internal TCP/UDP | TCP/UDP | Layer 4 pass-through | Database connections, custom protocols |
| Internal TCP Proxy | TCP | TCP proxy with connection draining | Internal TCP services needing advanced features |
| Cross-region internal | HTTP(S)/TCP | Global internal routing | Multi-region internal service discovery |
# Create an internal health check
gcloud compute health-checks create http internal-api-health \
--port=8080 \
--request-path=/health \
--region=us-central1
# Create an internal backend service
gcloud compute backend-services create internal-api-backend \
--protocol=HTTP \
--health-checks=internal-api-health \
--health-checks-region=us-central1 \
--region=us-central1 \
--load-balancing-scheme=INTERNAL_MANAGED
# Add backend instances
gcloud compute backend-services add-backend internal-api-backend \
--instance-group=api-instances \
--instance-group-zone=us-central1-a \
--region=us-central1
# Create a URL map for internal routing
gcloud compute url-maps create internal-url-map \
--default-service=internal-api-backend \
--region=us-central1
# Create a proxy-only subnet (required for internal HTTP(S) LB)
gcloud compute networks subnets create proxy-only-subnet \
--purpose=REGIONAL_MANAGED_PROXY \
--role=ACTIVE \
--network=my-vpc \
--region=us-central1 \
--range=10.129.0.0/23
# Create the internal target proxy
gcloud compute target-http-proxies create internal-http-proxy \
--url-map=internal-url-map \
--region=us-central1
# Create the internal forwarding rule
gcloud compute forwarding-rules create internal-api-forwarding \
--load-balancing-scheme=INTERNAL_MANAGED \
--network=my-vpc \
--subnet=app-subnet \
--address=10.0.1.100 \
--target-http-proxy=internal-http-proxy \
--target-http-proxy-region=us-central1 \
--ports=80 \
--region=us-central1

Cloud CDN Setup & Caching
Cloud CDN caches HTTP(S) responses at Google's globally distributed edge locations, reducing latency for end users and offloading traffic from your backend services. Cloud CDN is not a standalone service: it operates as an optional layer on top of the global external HTTP(S) load balancer. You enable it by setting the --enable-cdn flag on a backend service, and Cloud CDN automatically caches cacheable responses based on HTTP cache headers.
Cloud CDN supports three cache modes that determine what gets cached:
| Cache Mode | Behavior | Best For |
|---|---|---|
| CACHE_ALL_STATIC | Automatically caches static content (images, CSS, JS) based on Content-Type | Static sites, mixed-content applications |
| USE_ORIGIN_HEADERS | Caches only responses with explicit Cache-Control headers | APIs, dynamic content with specific caching rules |
| FORCE_CACHE_ALL | Caches all responses regardless of origin headers (use with caution) | Static backends that do not set cache headers |
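The cache-mode table above can be read as a decision function. The sketch below is a simplification of the documented behavior (real Cloud CDN also considers Vary, Set-Cookie, and other directives); the static-type list is an abbreviated assumption:

```python
# Simplified decision sketch for the three Cloud CDN cache modes.
STATIC_TYPES = ("image/", "text/css", "application/javascript", "font/")

def is_cacheable(cache_mode: str, headers: dict) -> bool:
    cc = headers.get("Cache-Control", "")
    ctype = headers.get("Content-Type", "")
    if cache_mode == "FORCE_CACHE_ALL":
        return True  # overrides origin cache directives entirely
    if "no-store" in cc or "private" in cc:
        return False  # origin opted out, honored by the other two modes
    if cache_mode == "USE_ORIGIN_HEADERS":
        # Only explicit origin opt-in is cached.
        return "public" in cc or "max-age" in cc or "s-maxage" in cc
    if cache_mode == "CACHE_ALL_STATIC":
        # Static content types are cached even without cache headers.
        return (any(ctype.startswith(t) for t in STATIC_TYPES)
                or "max-age" in cc or "s-maxage" in cc)
    return False

print(is_cacheable("CACHE_ALL_STATIC", {"Content-Type": "image/png"}))    # True
print(is_cacheable("USE_ORIGIN_HEADERS", {"Content-Type": "image/png"}))  # False
```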
# Enable Cloud CDN on an existing backend service
gcloud compute backend-services update web-backend \
--enable-cdn \
--global
# Configure CDN cache policy
gcloud compute backend-services update web-backend \
--global \
--cache-mode=CACHE_ALL_STATIC \
--default-ttl=3600 \
--max-ttl=86400 \
--client-ttl=3600 \
--serve-while-stale=86400 \
--negative-caching \
--negative-caching-policy='404=60,405=60'
# Configure custom cache key policy (what makes a unique cache entry)
gcloud compute backend-services update web-backend \
--global \
--no-cache-key-include-protocol \
--cache-key-include-host \
--cache-key-include-query-string \
--cache-key-query-string-whitelist=page,lang,category
# Create a Cloud Storage backend bucket with CDN
gcloud compute backend-buckets create static-assets-backend \
--gcs-bucket-name=my-static-assets-bucket \
--enable-cdn \
--default-ttl=86400 \
--max-ttl=604800
# Add the bucket backend to the URL map
gcloud compute url-maps add-path-matcher web-url-map \
--path-matcher-name=static-matcher \
--default-service=web-backend \
--new-hosts=static.example.com \
--backend-bucket-path-rules="/assets/*=static-assets-backend,/images/*=static-assets-backend"
# Invalidate cached content
gcloud compute url-maps invalidate-cdn-cache web-url-map \
--path="/assets/*" \
--global \
--async
# View CDN cache hit/miss metrics
gcloud monitoring metrics list \
--filter='metric.type = starts_with("loadbalancing.googleapis.com/https/backend_request_count")'

Maximize Cache Hit Ratio
To maximize your Cloud CDN cache hit ratio: set explicit Cache-Control headers on your origin responses (e.g., Cache-Control: public, max-age=3600), use consistent URLs (avoid random query parameters), configure cache key policies to exclude unnecessary query parameters, and use versioned filenames for static assets (e.g., app.v2.1.0.js) instead of relying on cache invalidation. A well-configured CDN typically achieves 85-95% cache hit ratios for static assets, dramatically reducing backend load and response latency.
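The impact of hit ratio on origin load is worth making concrete. Since only cache misses reach the backend, a 10-point hit-ratio improvement in that 85-95% band cuts origin traffic by 3x:

```python
# Back-of-envelope: only cache misses reach the origin.
def origin_requests(total: int, hit_ratio: float) -> int:
    return round(total * (1 - hit_ratio))

# 1M requests/day at various hit ratios:
print(origin_requests(1_000_000, 0.85))  # 150000 requests hit the origin
print(origin_requests(1_000_000, 0.95))  # 50000  -- 3x less backend load
```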
Cloud Armor WAF Policies
Cloud Armor is GCP's edge security service that provides DDoS protection, IP-based access control, geographic restrictions, and WAF (Web Application Firewall) rules. Cloud Armor policies are attached to backend services of the global external HTTP(S) load balancer, meaning traffic is filtered at Google's edge PoPs before reaching your backend infrastructure. This is critical for protecting your applications from volumetric attacks, OWASP Top 10 vulnerabilities, and abuse from known-bad IP ranges.
# Create a security policy
gcloud compute security-policies create web-security-policy \
--description="Production web application WAF policy"
# Allow traffic from specific countries only
gcloud compute security-policies rules create 1000 \
--security-policy=web-security-policy \
--expression="origin.region_code == 'US' || origin.region_code == 'CA' || origin.region_code == 'GB'" \
--action=allow \
--description="Allow US, Canada, and UK traffic"
# Block known-bad IP ranges
gcloud compute security-policies rules create 2000 \
--security-policy=web-security-policy \
--src-ip-ranges="198.51.100.0/24,203.0.113.0/24" \
--action=deny-403 \
--description="Block known malicious IP ranges"
# Enable OWASP Top 10 protection (pre-configured WAF rules)
gcloud compute security-policies rules create 3000 \
--security-policy=web-security-policy \
--expression="evaluatePreconfiguredExpr('xss-v33-stable')" \
--action=deny-403 \
--description="Block XSS attacks"
gcloud compute security-policies rules create 3001 \
--security-policy=web-security-policy \
--expression="evaluatePreconfiguredExpr('sqli-v33-stable')" \
--action=deny-403 \
--description="Block SQL injection attacks"
gcloud compute security-policies rules create 3002 \
--security-policy=web-security-policy \
--expression="evaluatePreconfiguredExpr('rce-v33-stable')" \
--action=deny-403 \
--description="Block remote code execution attacks"
gcloud compute security-policies rules create 3003 \
--security-policy=web-security-policy \
--expression="evaluatePreconfiguredExpr('lfi-v33-stable')" \
--action=deny-403 \
--description="Block local file inclusion attacks"
# Rate limiting rule
gcloud compute security-policies rules create 4000 \
--security-policy=web-security-policy \
--expression="true" \
--action=rate-based-ban \
--rate-limit-threshold-count=100 \
--rate-limit-threshold-interval-sec=60 \
--ban-duration-sec=300 \
--conform-action=allow \
--exceed-action=deny-429 \
--enforce-on-key=IP \
--description="Rate limit: 100 req/min per IP, ban for 5 min"
# Set default action to deny (allowlist approach)
gcloud compute security-policies rules update 2147483647 \
--security-policy=web-security-policy \
--action=deny-403
# Attach the policy to a backend service
gcloud compute backend-services update web-backend \
--security-policy=web-security-policy \
--global

Test WAF Rules in Preview Mode First
Before enforcing Cloud Armor WAF rules, deploy them in preview mode by adding --preview to the rule creation command. In preview mode, matching requests are logged but not blocked, allowing you to analyze false positives before enabling enforcement. Review the logs in Cloud Logging with the filter resource.type="http_load_balancer" AND jsonPayload.previewSecurityPolicy.outcome="DENY" to see what would have been blocked (preview-mode matches are logged under previewSecurityPolicy, not enforcedSecurityPolicy). Prematurely enforcing WAF rules can block legitimate users and cause production outages.
SSL/TLS & Custom Domains
GCP load balancers terminate TLS connections at the edge, decrypting traffic before forwarding it to backend services over Google's internal network. This means your backend services only need to handle unencrypted HTTP traffic, simplifying certificate management and improving performance (TLS handshakes happen at the nearest Google PoP, not at your backend).
GCP supports three types of SSL certificates:
| Certificate Type | Management | Renewal | Use Case |
|---|---|---|---|
| Google-managed | Fully automatic provisioning and renewal | Auto (before expiry) | Most web applications (recommended) |
| Self-managed | You upload cert and private key | Manual | Custom CAs, EV certificates, wildcard certs |
| Certificate Manager | Google-managed with DNS authorization | Auto | Many domains, wildcard certs, complex setups |
# Create a Google-managed certificate (simplest approach)
gcloud compute ssl-certificates create my-cert \
--domains=example.com,www.example.com \
--global
# Create a Certificate Manager certificate with DNS authorization
# (supports wildcards and does not require the LB to be set up first)
gcloud certificate-manager dns-authorizations create example-auth \
--domain=example.com
# Get the DNS record to add to your DNS zone
gcloud certificate-manager dns-authorizations describe example-auth \
--format='value(dnsResourceRecord.name,dnsResourceRecord.type,dnsResourceRecord.data)'
# Create the certificate (after adding the DNS record)
gcloud certificate-manager certificates create wildcard-cert \
--domains="example.com,*.example.com" \
--dns-authorizations=example-auth
# Create a certificate map and entry
gcloud certificate-manager maps create my-cert-map
gcloud certificate-manager maps entries create main-entry \
--map=my-cert-map \
--certificates=wildcard-cert \
--hostname=example.com
gcloud certificate-manager maps entries create wildcard-entry \
--map=my-cert-map \
--certificates=wildcard-cert \
--hostname="*.example.com"
# Attach the certificate map to the HTTPS proxy
gcloud compute target-https-proxies update web-https-proxy \
--certificate-map=my-cert-map \
--global
# Upload a self-managed certificate
gcloud compute ssl-certificates create custom-cert \
--certificate=path/to/cert.pem \
--private-key=path/to/key.pem \
--global

Health Checks & Backend Services
Health checks are the mechanism by which GCP load balancers determine whether a backend instance is healthy and able to receive traffic. Unhealthy backends are automatically removed from the load balancing rotation, and traffic is redistributed to remaining healthy backends. When a backend becomes healthy again, it is automatically added back. Health checks are critical for maintaining high availability. A misconfigured health check can either route traffic to broken backends (check too lenient) or remove healthy backends unnecessarily (check too strict).
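The healthy/unhealthy thresholds create hysteresis: a backend flips state only after N consecutive probes in the other direction, so one flaky probe does not bounce it out of rotation. A minimal sketch of that state machine, using the thresholds from the gcloud health check above (2 to become healthy, 3 to become unhealthy):

```python
# Sketch of health-check threshold hysteresis for a single backend.
class HealthTracker:
    def __init__(self, healthy_threshold=2, unhealthy_threshold=3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy = False  # new backends start unhealthy until probed
        self.streak = 0       # consecutive probes contradicting current state

    def record(self, probe_ok: bool) -> bool:
        if probe_ok == self.healthy:
            self.streak = 0   # probe agrees with current state; reset streak
        else:
            self.streak += 1
            needed = self.healthy_threshold if probe_ok else self.unhealthy_threshold
            if self.streak >= needed:
                self.healthy, self.streak = probe_ok, 0  # state flips
        return self.healthy
```

With a 10s check interval, a backend needs roughly 30s of consecutive failures (3 probes) before removal, and 20s of consecutive successes before reinstatement.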
# Terraform configuration for health checks and backend services
# HTTP health check for web applications
resource "google_compute_health_check" "web_health" {
name = "web-health-check"
check_interval_sec = 10
timeout_sec = 5
healthy_threshold = 2
unhealthy_threshold = 3
http_health_check {
port = 80
request_path = "/health"
# By default only HTTP 200 counts as healthy; set the `response`
# attribute only if the probe must also match the response body
}
log_config {
enable = true
}
}
# HTTPS health check for TLS backends
resource "google_compute_health_check" "api_health" {
name = "api-health-check"
check_interval_sec = 15
timeout_sec = 10
healthy_threshold = 2
unhealthy_threshold = 5
https_health_check {
port = 443
request_path = "/api/health"
}
}
# gRPC health check
resource "google_compute_health_check" "grpc_health" {
name = "grpc-health-check"
check_interval_sec = 10
timeout_sec = 5
healthy_threshold = 2
unhealthy_threshold = 3
grpc_health_check {
port = 50051
# Uses the standard gRPC health checking protocol
}
}
# Backend service with advanced configuration
resource "google_compute_backend_service" "web" {
name = "web-backend"
protocol = "HTTP"
port_name = "http"
load_balancing_scheme = "EXTERNAL_MANAGED"
timeout_sec = 30
health_checks = [google_compute_health_check.web_health.id]
# Enable Cloud CDN
enable_cdn = true
cdn_policy {
cache_mode = "CACHE_ALL_STATIC"
default_ttl = 3600
max_ttl = 86400
serve_while_stale = 86400
signed_url_cache_max_age_sec = 7200
}
# Connection draining
connection_draining_timeout_sec = 30
# Session affinity (for stateful applications)
session_affinity = "GENERATED_COOKIE"
affinity_cookie_ttl_sec = 3600
# Circuit breaker settings
circuit_breakers {
max_connections = 1000
max_pending_requests = 500
max_requests = 2000
max_requests_per_connection = 100
max_retries = 3
}
# Outlier detection (automatic ejection of failing backends)
outlier_detection {
consecutive_errors = 5
interval { seconds = 10 }
base_ejection_time { seconds = 30 }
max_ejection_percent = 50
enforcing_consecutive_errors = 100
enforcing_success_rate = 0
success_rate_minimum_hosts = 3
success_rate_request_volume = 100
success_rate_stdev_factor = 1900
}
# Logging
log_config {
enable = true
sample_rate = 1.0
}
# Cloud Armor security policy
security_policy = google_compute_security_policy.waf.id
backend {
group = google_compute_instance_group_manager.web.instance_group
balancing_mode = "UTILIZATION"
max_utilization = 0.8
capacity_scaler = 1.0
}
}

Network Endpoint Groups (NEGs)
Network Endpoint Groups (NEGs) are collections of network endpoints that represent backend services for the load balancer. NEGs provide more granular control over load balancing compared to instance groups, supporting container-native load balancing (directing traffic to individual pods in GKE), serverless backends (Cloud Run, Cloud Functions, App Engine), and internet endpoints (external backends outside GCP).
| NEG Type | Endpoint Type | Use Case |
|---|---|---|
| Zonal NEG | IP:port endpoints within a zone | Container-native LB for GKE pods |
| Internet NEG | External IP:port or FQDN | Proxy to on-premises or other clouds |
| Serverless NEG | Cloud Run, Cloud Functions, App Engine | Serverless backend routing |
| Hybrid connectivity NEG | IP:port on-premises endpoints | Hybrid cloud load balancing via VPN/Interconnect |
| Private Service Connect NEG | Published services via PSC | Load balancing to PSC service attachments |
# Serverless NEG for Cloud Run
gcloud compute network-endpoint-groups create api-serverless-neg \
--region=us-central1 \
--network-endpoint-type=serverless \
--cloud-run-service=my-api-service
# Serverless NEG for Cloud Functions
gcloud compute network-endpoint-groups create func-serverless-neg \
--region=us-central1 \
--network-endpoint-type=serverless \
--cloud-function-name=my-function
# Internet NEG for external backends
gcloud compute network-endpoint-groups create external-neg \
--network-endpoint-type=internet-fqdn-port \
--global
gcloud compute network-endpoint-groups update external-neg \
--global \
--add-endpoint="fqdn=legacy-api.example.com,port=443"
# Add NEGs to backend services
gcloud compute backend-services add-backend api-backend \
--global \
--network-endpoint-group=api-serverless-neg \
--network-endpoint-group-region=us-central1
# Container-native load balancing (GKE)
# In your GKE Service manifest, use NEG annotation:
# metadata:
# annotations:
# cloud.google.com/neg: '{"ingress": true}'

Cost Optimization & Best Practices
Load balancing and CDN costs on GCP are primarily driven by data processing (per GB of traffic through the load balancer), forwarding rules (per rule per hour), and Cloud CDN cache fill/egress charges. Understanding these cost components helps you design cost-effective architectures.
- Minimize forwarding rules: The first five forwarding rules share a flat charge of roughly $18/month, and each rule beyond five is billed separately. Use URL maps to route multiple services through a single load balancer rather than creating separate load balancers per service.
- Maximize CDN cache hit ratio: Higher cache hit ratios mean less traffic reaches your backends, reducing both backend costs and data processing charges. Target 85%+ for static content.
- Use Standard Tier networking: For latency-tolerant workloads, Standard Tier networking uses regular internet routing instead of Google's premium network, reducing egress costs by up to 40%. However, this sacrifices the premium network's lower latency and higher reliability.
- Right-size health checks: Overly frequent health checks generate unnecessary traffic. For most applications, a 10-15 second check interval with a 5-second timeout is sufficient.
- Use connection draining: Configure connection draining (30 seconds is typical) to gracefully drain existing connections during backend removal, preventing dropped requests during deployments.
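The forwarding-rule math can be sketched quickly. The rates below are illustrative assumptions based on the commonly cited pricing model (first five rules share one hourly charge, each extra rule adds its own); always check the current GCP pricing page:

```python
# Rough monthly forwarding-rule cost. Rates are assumptions, not quotes.
HOURS = 730           # hours in an average month
BASE_RATE = 0.025     # $/hr covering the first 5 forwarding rules
EXTRA_RATE = 0.010    # $/hr per rule beyond 5

def monthly_rule_cost(num_rules: int) -> float:
    extra = max(0, num_rules - 5)
    return round((BASE_RATE + extra * EXTRA_RATE) * HOURS, 2)

print(monthly_rule_cost(1))   # 18.25 -> one LB routing everything via URL maps
print(monthly_rule_cost(10))  # 54.75 -> ten separate load balancers
```

This is why consolidating services behind one load balancer with URL-map routing is the first cost lever to pull.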
Premium vs Standard Network Tier
GCP offers two network tiers. Premium Tier (default) routes traffic through Google's private backbone, entering the Google network at the PoP closest to the user. This delivers lower latency and higher reliability. Standard Tier routes traffic through the public internet, entering the Google network at the region where your resources are deployed. Standard Tier is 25-40% cheaper for egress but does not support global load balancing (only regional), Cloud CDN, or Cloud Armor. For production web applications, Premium Tier is almost always worth the additional cost.
Key Takeaways
- GCP Cloud Load Balancing provides global, anycast-based load balancing across all regions.
- Cloud CDN caches HTTP(S) content at Google edge locations for low-latency delivery.
- Cloud Armor provides WAF, DDoS protection, and adaptive protection at the edge.
- Network Endpoint Groups (NEGs) enable load balancing to serverless, hybrid, and internet endpoints.
- Health checks automatically route traffic away from unhealthy backends.
- SSL certificates can be Google-managed for automatic provisioning and renewal.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.