CI/CD with Cloud Build
Build CI/CD pipelines with Google Cloud Build, covering build configs, triggers, Artifact Registry, Cloud Deploy, binary authorization, and worker pools.
Prerequisites
- Basic understanding of CI/CD concepts
- Familiarity with Docker and container images
- Experience with Git and gcloud CLI
- Understanding of Terraform or similar IaC tools
CI/CD on GCP Overview
Continuous Integration and Continuous Delivery (CI/CD) on Google Cloud is powered by a trio of services: Cloud Build for building and testing, Artifact Registry for storing build artifacts, and Cloud Deploy for progressive delivery to target environments. Together, these services provide a fully managed CI/CD pipeline that eliminates the need to operate Jenkins, GitLab CI, or other self-hosted build infrastructure. Every build runs in an isolated container on Google-managed infrastructure, scaling automatically from zero to thousands of concurrent builds.
Cloud Build is the engine at the center of this ecosystem. It executes build steps defined in a cloudbuild.yaml configuration file, where each step runs in a Docker container with access to a shared workspace volume. Steps can run any Docker image, including compilers, linters, test frameworks, deployment tools, and custom scripts, giving you complete flexibility over your build pipeline. Cloud Build integrates with GitHub, GitLab, Bitbucket, and Cloud Source Repositories for source-triggered builds.
The GCP CI/CD landscape compared to alternatives:
| Feature | Cloud Build | GitHub Actions | Jenkins | GitLab CI |
|---|---|---|---|---|
| Infrastructure | Fully managed | Managed + self-hosted | Self-hosted | Managed + self-hosted |
| GCP integration | Native (IAM, Artifact Registry, Deploy) | Via OIDC federation | Via plugins | Via integration |
| Pricing model | Per build-minute | Per minute (free tier) | Infrastructure cost | Per minute (free tier) |
| Max build duration | 24 hours | 6 hours (72 for self-hosted) | Unlimited | Configurable |
| Container builds | Native Docker + Buildpacks | Docker actions | Docker plugin | Docker executor |
| Private networking | Private worker pools in VPC | Self-hosted runners | Direct VPC access | Self-hosted runners |
Free Tier for Cloud Build
Cloud Build provides 120 free build-minutes per day using the default e2-medium machine type (1 vCPU, 4 GB RAM). This is sufficient for most small-to-medium projects. Builds using larger machine types (e2-highcpu-8, e2-highcpu-32) or private worker pools are billed at higher rates. Artifact Registry provides 500 MB of free storage and 1 GB of free egress per month.
Cloud Build Architecture
Cloud Build executes builds in ephemeral containers on Google-managed infrastructure. When a build is triggered, Cloud Build provisions a build environment with a shared /workspace volume, executes each step sequentially (or in parallel where specified), and tears down the environment after the build completes. The /workspace directory persists across all steps in a build, serving as the primary mechanism for passing data between steps.
The build lifecycle follows these stages:
- Source fetch: Cloud Build downloads your source code from the configured repository (GitHub, Cloud Source Repositories, or a Cloud Storage archive) and places it in /workspace.
- Step execution: Each build step runs as a Docker container with /workspace mounted as a volume. Steps share this volume but otherwise have isolated filesystems.
- Artifact upload: After all steps complete, Cloud Build uploads specified artifacts to Artifact Registry, Cloud Storage, or both.
- Cleanup: The build environment is destroyed. Logs are persisted in Cloud Logging and the build metadata is queryable through the Cloud Build API.
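The /workspace handoff between steps can be seen in a minimal two-step config (a sketch; the image and file names are illustrative):

```yaml
# minimal-cloudbuild.yaml - sketch showing /workspace persisting across steps
steps:
  - id: 'produce'
    name: 'ubuntu'
    # note the $$ escape: a bare $ would be parsed as a Cloud Build substitution
    args: ['bash', '-c', 'echo "built at $$(date)" > /workspace/stamp.txt']
  - id: 'consume'
    name: 'ubuntu'
    args: ['bash', '-c', 'cat /workspace/stamp.txt']  # sees the file from step 1
```

Submitted with gcloud builds submit --config=minimal-cloudbuild.yaml ., the second step prints the line written by the first, because both containers mount the same workspace volume.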
Machine Types
Cloud Build offers several machine types that affect build speed and cost:
| Machine Type | vCPUs | Memory | Disk | Price/min (USD) |
|---|---|---|---|---|
| e2-medium (default) | 1 | 4 GB | 100 GB SSD | $0.003 |
| e2-highcpu-8 | 8 | 8 GB | 100 GB SSD | $0.016 |
| e2-highcpu-32 | 32 | 32 GB | 100 GB SSD | $0.064 |
| Private pool (custom) | Configurable | Configurable | Configurable | Varies |
Build Configuration & Steps
The cloudbuild.yaml file defines your build pipeline as a sequence of steps. Each step specifies a Docker image to run (the name field), the command to execute (the args or entrypoint field), environment variables, secrets, and optional configuration like timeout and wait conditions. Google provides pre-built builder images for common tools (Docker, gcloud, kubectl, npm, Go, Maven, Gradle), and you can use any public or private Docker image as a build step.
# cloudbuild.yaml
timeout: 1800s # 30-minute max build time
options:
machineType: E2_HIGHCPU_8
logging: CLOUD_LOGGING_ONLY
env:
- 'NODE_ENV=test'
- 'CI=true'
substitutions:
_REGION: us-central1
_SERVICE_NAME: my-api
_ARTIFACT_REPO: docker-images
# Available substitutions (built-in):
# $PROJECT_ID, $BUILD_ID, $COMMIT_SHA, $SHORT_SHA,
# $BRANCH_NAME, $TAG_NAME, $REPO_NAME, $REVISION_ID
steps:
# Step 1: Install dependencies
- id: 'install'
name: 'node:20-slim'
entrypoint: 'npm'
args: ['ci', '--prefer-offline']
# Step 2: Run linting (parallel with step 3)
- id: 'lint'
name: 'node:20-slim'
entrypoint: 'npm'
args: ['run', 'lint']
waitFor: ['install']
# Step 3: Run unit tests (parallel with step 2)
- id: 'test'
name: 'node:20-slim'
entrypoint: 'npm'
args: ['run', 'test:ci']
waitFor: ['install']
env:
- 'DATABASE_URL=postgresql://test:test@localhost:5432/testdb'
# Step 4: Run security scan
- id: 'security-scan'
name: 'node:20-slim'
entrypoint: 'npm'
args: ['audit', '--audit-level=high']
waitFor: ['install']
allowFailure: true # Don't fail the build on audit warnings
# Step 5: Build the Docker image
- id: 'docker-build'
name: 'gcr.io/cloud-builders/docker'
args:
- 'build'
- '-t'
- '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_ARTIFACT_REPO}/${_SERVICE_NAME}:$COMMIT_SHA'
- '-t'
- '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_ARTIFACT_REPO}/${_SERVICE_NAME}:latest'
- '--cache-from'
- '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_ARTIFACT_REPO}/${_SERVICE_NAME}:latest'
- '-f'
- 'Dockerfile'
- '.'
waitFor: ['lint', 'test']
# Step 6: Push to Artifact Registry
- id: 'docker-push'
name: 'gcr.io/cloud-builders/docker'
args:
- 'push'
- '--all-tags'
- '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_ARTIFACT_REPO}/${_SERVICE_NAME}'
waitFor: ['docker-build']
# Step 7: Deploy to Cloud Run (staging)
- id: 'deploy-staging'
name: 'gcr.io/cloud-builders/gcloud'
args:
- 'run'
- 'deploy'
- '${_SERVICE_NAME}-staging'
- '--image'
- '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_ARTIFACT_REPO}/${_SERVICE_NAME}:$COMMIT_SHA'
- '--region'
- '${_REGION}'
- '--platform'
- 'managed'
- '--no-traffic' # Deploy without routing traffic
- '--tag'
- 'canary'
waitFor: ['docker-push']
# Step 8: Run integration tests against staging
- id: 'integration-test'
name: 'node:20-slim'
entrypoint: 'npm'
args: ['run', 'test:integration']
env:
- 'API_URL=https://canary---${_SERVICE_NAME}-staging-abc123.run.app'
waitFor: ['deploy-staging']
# Step 9: Promote to production traffic
- id: 'promote-staging'
name: 'gcr.io/cloud-builders/gcloud'
args:
- 'run'
- 'services'
- 'update-traffic'
- '${_SERVICE_NAME}-staging'
- '--region'
- '${_REGION}'
- '--to-latest'
waitFor: ['integration-test']
# Store build artifacts
artifacts:
objects:
location: 'gs://${PROJECT_ID}-build-artifacts/$BUILD_ID'
paths:
- 'coverage/lcov.info'
- 'test-results/*.xml'
# Container images to push
images:
- '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_ARTIFACT_REPO}/${_SERVICE_NAME}:$COMMIT_SHA'
- '${_REGION}-docker.pkg.dev/$PROJECT_ID/${_ARTIFACT_REPO}/${_SERVICE_NAME}:latest'
Accessing Secrets in Build Steps
Cloud Build integrates with Secret Manager to securely inject secrets into build steps without exposing them in your build configuration or logs. Secrets can be mounted as environment variables or files. The Cloud Build service account must have roles/secretmanager.secretAccessor on the secrets it needs to access.
steps:
- id: 'deploy-with-secrets'
name: 'gcr.io/cloud-builders/gcloud'
entrypoint: 'bash'
args:
- '-c'
- |
echo "Deploying with database connection..."
gcloud run deploy my-api \
--image=us-central1-docker.pkg.dev/$PROJECT_ID/images/my-api:$COMMIT_SHA \
--region=us-central1 \
--set-secrets="DB_PASSWORD=db-password:latest,API_KEY=api-key:latest"
secretEnv: ['NPM_TOKEN', 'SONAR_TOKEN']
- id: 'npm-publish'
name: 'node:20-slim'
entrypoint: 'bash'
args:
- '-c'
- |
echo "//registry.npmjs.org/:_authToken=$$NPM_TOKEN" > .npmrc
npm publish
secretEnv: ['NPM_TOKEN']
availableSecrets:
secretManager:
- versionName: projects/$PROJECT_ID/secrets/npm-token/versions/latest
env: 'NPM_TOKEN'
- versionName: projects/$PROJECT_ID/secrets/sonar-token/versions/latest
env: 'SONAR_TOKEN'
Never Log Secret Values
Cloud Build automatically redacts secret values from build logs when you use the secretEnv mechanism. However, if you inadvertently echo a secret value or include it in a command that produces output, it may appear in logs. Always use set +x before handling secrets in bash scripts and avoid piping secret values through commands that produce output. Additionally, set logging: CLOUD_LOGGING_ONLY in your build options to prevent logs from being written to Cloud Storage, where they might be accessible to a broader audience.
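As a sketch, a build-step script that follows these rules might look like the following. The token name and registry path are illustrative; in a real build the value arrives via secretEnv rather than the fallback default used here:

```shell
#!/usr/bin/env bash
# Hypothetical build-step script: handle a secret without leaking it into logs.
set -euo pipefail
set +x                                   # ensure shell tracing is off first

SECRET="${NPM_TOKEN:-dummy-token}"       # injected via secretEnv in a real build

umask 077                                # token file readable by this step only
printf '//registry.npmjs.org/:_authToken=%s\n' "$SECRET" > .npmrc

echo "wrote .npmrc ($(wc -c < .npmrc) bytes)"   # log metadata, never the value
```

The only output is the file size; the token itself is written straight to the config file and never echoed or traced.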
Build Triggers & Automation
Build triggers automatically start builds in response to source repository events. You can trigger builds on pushes to specific branches, pull request creation or updates, tag creation, or manual invocation via the API or console. Each trigger is associated with a source repository and a build configuration file, with optional substitution variables that customize the build based on the trigger context.
# Connect a GitHub repository (one-time setup)
# This is done through the Cloud Console UI or with:
gcloud builds repositories create my-repo \
--remote-uri=https://github.com/myorg/my-app.git \
--connection=my-github-connection \
--region=us-central1
# Create a trigger for pushes to the main branch
gcloud builds triggers create github \
--name=deploy-production \
--repository=projects/my-project/locations/us-central1/connections/my-github-connection/repositories/my-app \
--branch-pattern='^main$' \
--build-config=cloudbuild.yaml \
--substitutions=_ENVIRONMENT=production,_REGION=us-central1 \
--description="Deploy to production on push to main" \
--include-logs-with-status
# Create a trigger for pull requests (CI only, no deploy)
gcloud builds triggers create github \
--name=pr-checks \
--repository=projects/my-project/locations/us-central1/connections/my-github-connection/repositories/my-app \
--pull-request-pattern='^main$' \
--build-config=cloudbuild-pr.yaml \
--comment-control=COMMENTS_ENABLED_FOR_EXTERNAL_CONTRIBUTORS_ONLY \
--description="Run CI checks on pull requests"
# Create a trigger for semantic version tags
gcloud builds triggers create github \
--name=release-build \
--repository=projects/my-project/locations/us-central1/connections/my-github-connection/repositories/my-app \
--tag-pattern='^v[0-9]+\.[0-9]+\.[0-9]+$' \
--build-config=cloudbuild-release.yaml \
--description="Build and publish release artifacts on version tags"
# Create a manual trigger with required substitutions
gcloud builds triggers create manual \
--name=rollback \
--build-config=cloudbuild-rollback.yaml \
--substitutions=_TARGET_REVISION=,_SERVICE=,_REGION=us-central1 \
--description="Manual rollback trigger"
# Run a trigger manually
gcloud builds triggers run deploy-production \
--branch=main \
--region=us-central1
Artifact Registry Integration
Artifact Registry is GCP's universal package manager, supporting Docker images, Maven/Gradle packages, npm packages, Python packages, Go modules, and generic artifacts. It replaces the legacy Container Registry (gcr.io) with a more feature-rich, IAM-integrated, and regionally distributed artifact management solution. Every Cloud Build pipeline should use Artifact Registry as the destination for build outputs.
# Create a Docker repository
gcloud artifacts repositories create docker-images \
--repository-format=docker \
--location=us-central1 \
--description="Production Docker images" \
--labels=team=platform
# Create an npm repository
gcloud artifacts repositories create npm-packages \
--repository-format=npm \
--location=us-central1 \
--description="Internal npm packages"
# Create a Python repository
gcloud artifacts repositories create python-packages \
--repository-format=python \
--location=us-central1 \
--description="Internal Python packages"
# Configure Docker authentication
gcloud auth configure-docker us-central1-docker.pkg.dev
# Tag and push a Docker image
docker tag my-api:latest us-central1-docker.pkg.dev/my-project/docker-images/my-api:v1.2.3
docker push us-central1-docker.pkg.dev/my-project/docker-images/my-api:v1.2.3
# Configure npm to use Artifact Registry
gcloud artifacts print-settings npm \
--repository=npm-packages \
--location=us-central1 \
--scope=@myorg
# Set up cleanup policies to manage storage costs
gcloud artifacts repositories set-cleanup-policies docker-images \
--location=us-central1 \
--policy=cleanup-policy.json
# List images in a repository
gcloud artifacts docker images list \
us-central1-docker.pkg.dev/my-project/docker-images
# Scan an image for vulnerabilities
gcloud artifacts docker images scan \
us-central1-docker.pkg.dev/my-project/docker-images/my-api:v1.2.3 \
--remote
# cleanup-policy.json
[
{
"name": "delete-old-untagged",
"action": {
"type": "Delete"
},
"condition": {
"tagState": "untagged",
"olderThan": "7d"
}
},
{
"name": "keep-minimum-versions",
"action": {
"type": "Keep"
},
"mostRecentVersions": {
"keepCount": 10
}
},
{
"name": "delete-old-tagged",
"action": {
"type": "Delete"
},
"condition": {
"tagState": "tagged",
"tagPrefixes": ["dev-", "test-", "pr-"],
"olderThan": "30d"
}
}
]
Cloud Deploy for CD
Cloud Deploy is a managed continuous delivery service that handles the progression of releases through a series of target environments (e.g., dev → staging → production). It provides approval workflows, canary deployments, automated rollbacks, and deploy verification. Cloud Deploy integrates with GKE, Cloud Run, and Anthos, supporting both Kubernetes manifest-based and Cloud Run service-based deployments.
The Cloud Deploy pipeline model consists of:
- Delivery pipeline: Defines the progression of targets (environments) that a release moves through.
- Targets: Represent deployment environments (GKE clusters, Cloud Run services) with their specific configurations.
- Releases: Immutable snapshots of your application at a point in time, created from a build output.
- Rollouts: The act of deploying a release to a specific target, with optional canary or blue/green strategies.
# clouddeploy.yaml - Delivery pipeline for Cloud Run
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
name: my-api-pipeline
description: Progressive delivery pipeline for my-api
serialPipeline:
stages:
- targetId: dev
profiles: [dev]
- targetId: staging
profiles: [staging]
strategy:
canary:
runtimeConfig:
cloudRun:
automaticTrafficControl: true
canaryDeployment:
percentages: [25, 50, 75]
verify: true
- targetId: production
profiles: [production]
strategy:
canary:
runtimeConfig:
cloudRun:
automaticTrafficControl: true
canaryDeployment:
percentages: [10, 25, 50]
verify: true
deployParameters:
- values:
minInstances: "2"
maxInstances: "100"
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
name: dev
description: Development environment
run:
location: projects/my-project/locations/us-central1
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
name: staging
description: Staging environment
run:
location: projects/my-project/locations/us-central1
requireApproval: false
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
name: production
description: Production environment
run:
location: projects/my-project/locations/us-central1
requireApproval: true
# Register the delivery pipeline and targets
gcloud deploy apply --file=clouddeploy.yaml --region=us-central1
# Create a release (typically done by Cloud Build after a successful build)
gcloud deploy releases create release-v1-2-3 \
--delivery-pipeline=my-api-pipeline \
--region=us-central1 \
--images=my-api=us-central1-docker.pkg.dev/my-project/docker-images/my-api:v1.2.3 \
--source=.
# Check release status
gcloud deploy releases describe release-v1-2-3 \
--delivery-pipeline=my-api-pipeline \
--region=us-central1
# Approve a rollout to production (when requireApproval is true)
gcloud deploy rollouts approve release-v1-2-3-to-production-0001 \
--delivery-pipeline=my-api-pipeline \
--release=release-v1-2-3 \
--region=us-central1
# Roll back to a previous release
gcloud deploy targets rollback production \
--delivery-pipeline=my-api-pipeline \
--region=us-central1
# List rollouts for a release
gcloud deploy rollouts list \
--delivery-pipeline=my-api-pipeline \
--release=release-v1-2-3 \
--region=us-central1
Binary Authorization
Binary Authorization is a deploy-time security control that ensures only trusted container images are deployed to your GKE clusters or Cloud Run services. It works by requiring that container images have been signed by trusted authorities (attestors) before they can be deployed. This prevents unauthorized images, whether from supply chain attacks, accidental deployments of unvetted code, or policy violations, from reaching production.
The Binary Authorization workflow integrates with Cloud Build:
- Build: Cloud Build compiles, tests, and creates a container image.
- Scan: Artifact Analysis (formerly Container Analysis) scans the image for known vulnerabilities.
- Attest: If the scan passes, an attestor signs the image digest using a Cloud KMS key, creating an attestation stored in Artifact Analysis.
- Deploy: When the image is deployed, Binary Authorization verifies that all required attestations exist before allowing the deployment to proceed.
# Enable Binary Authorization API
gcloud services enable binaryauthorization.googleapis.com
# Create a KMS key for signing attestations
gcloud kms keyrings create build-attestors --location=global
gcloud kms keys create vulnscan-signer \
--location=global \
--keyring=build-attestors \
--purpose=asymmetric-signing \
--default-algorithm=ec-sign-p256-sha256
# Create an attestor note (stored in Artifact Analysis)
curl -X POST \
"https://containeranalysis.googleapis.com/v1/projects/my-project/notes/?noteId=vulnscan-note" \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
-d '{
"attestation": {
"hint": {
"humanReadableName": "Vulnerability Scan Attestor"
}
}
}'
# Create the attestor
gcloud container binauthz attestors create vulnscan-attestor \
--attestation-authority-note=vulnscan-note \
--attestation-authority-note-project=my-project
# Add the KMS key to the attestor
gcloud container binauthz attestors public-keys add \
--attestor=vulnscan-attestor \
--keyversion-project=my-project \
--keyversion-location=global \
--keyversion-keyring=build-attestors \
--keyversion-key=vulnscan-signer \
--keyversion=1
# Create an attestation for an image (done in Cloud Build after scan)
gcloud container binauthz attestations sign-and-create \
--artifact-url="us-central1-docker.pkg.dev/my-project/docker-images/my-api@sha256:abc123..." \
--attestor=vulnscan-attestor \
--attestor-project=my-project \
--keyversion-project=my-project \
--keyversion-location=global \
--keyversion-keyring=build-attestors \
--keyversion-key=vulnscan-signer \
--keyversion=1
# Update the Binary Authorization policy
gcloud container binauthz policy export > policy.yaml
# Binary Authorization policy (policy.yaml)
admissionWhitelistPatterns:
# Allow Google-provided system images
- namePattern: 'gcr.io/google_containers/*'
- namePattern: 'gcr.io/google-containers/*'
- namePattern: 'gke.gcr.io/*'
- namePattern: 'registry.k8s.io/*'
defaultAdmissionRule:
evaluationMode: REQUIRE_ATTESTATION
enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
requireAttestationsBy:
- projects/my-project/attestors/vulnscan-attestor
clusterAdmissionRules:
# Dev cluster allows all images (for rapid iteration)
us-central1-a.dev-cluster:
evaluationMode: ALWAYS_ALLOW
enforcementMode: DRYRUN_AUDIT_LOG_ONLY
# Production cluster requires attestation
us-central1-a.prod-cluster:
evaluationMode: REQUIRE_ATTESTATION
enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
requireAttestationsBy:
- projects/my-project/attestors/vulnscan-attestor
Integrate Binary Authorization with Cloud Build
Add attestation creation as a build step in your cloudbuild.yaml after vulnerability scanning passes. Use the gcr.io/cloud-builders/gcloud builder with the container binauthz attestations sign-and-create command. Always attest the image by its digest (sha256), not by tag, because tags are mutable; attesting a tag could allow a different image to be deployed under the same tag later.
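A hedged sketch of such a step, assuming the attestor and signing key created earlier in this section (the repository path is illustrative). Note the digest is resolved from the tag first, then the attestation is created against the digest:

```yaml
steps:
  # ... build, push, and scan steps ...
  - id: 'attest'
    name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        # Resolve the immutable digest for the tag we just pushed
        DIGEST=$$(gcloud artifacts docker images describe \
          us-central1-docker.pkg.dev/$PROJECT_ID/docker-images/my-api:$COMMIT_SHA \
          --format='value(image_summary.digest)')
        gcloud container binauthz attestations sign-and-create \
          --artifact-url="us-central1-docker.pkg.dev/$PROJECT_ID/docker-images/my-api@$${DIGEST}" \
          --attestor=vulnscan-attestor \
          --attestor-project=$PROJECT_ID \
          --keyversion-project=$PROJECT_ID \
          --keyversion-location=global \
          --keyversion-keyring=build-attestors \
          --keyversion-key=vulnscan-signer \
          --keyversion=1
```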
Private Worker Pools
Private worker pools let you run Cloud Build on dedicated machines within your VPC, enabling builds that need access to private resources such as internal package registries, databases, or APIs that are not exposed to the internet. Private pools are also required for builds that need custom machine configurations beyond what the default pool offers, or for compliance requirements that mandate builds run on dedicated infrastructure.
Private worker pools support VPC peering, which gives build workers private IP addresses within your VPC. This means build steps can connect to Cloud SQL instances, Memorystore caches, and internal services without traversing the public internet. Worker pools can also be configured with custom machine types and disk sizes.
# Create a private worker pool with VPC peering
gcloud builds worker-pools create my-private-pool \
--region=us-central1 \
--peered-network=projects/my-project/global/networks/build-vpc \
--peered-network-ip-range=/29 \
--worker-machine-type=e2-standard-8 \
--worker-disk-size=200 \
--no-public-egress # Restrict to VPC-only networking
# Use the private pool in cloudbuild.yaml
# options:
# pool:
# name: 'projects/my-project/locations/us-central1/workerPools/my-private-pool'
# Update a worker pool
gcloud builds worker-pools update my-private-pool \
--region=us-central1 \
--worker-machine-type=e2-standard-16
# List worker pools
gcloud builds worker-pools list --region=us-central1
# Delete a worker pool
gcloud builds worker-pools delete my-private-pool --region=us-central1
Testing & Quality Gates
A robust CI/CD pipeline includes multiple quality gates that prevent bad code from reaching production. Cloud Build supports arbitrary test steps, and you can combine them with conditional logic and step dependencies to create sophisticated quality pipelines.
- Unit tests: Run in parallel with linting to minimize pipeline duration. Use the waitFor field to run them after dependency installation but concurrently with other checks.
- Integration tests: Run against a staging deployment to verify end-to-end behavior. Use Cloud Build's Docker Compose support for local integration testing.
- Vulnerability scanning: Use Artifact Analysis to scan container images for known CVEs. Gate deployments on scan results using Binary Authorization.
- Code coverage: Upload coverage reports to Cloud Storage and fail builds that fall below minimum thresholds.
- Performance tests: Run load tests against canary deployments using Cloud Deploy verification.
# cloudbuild-pr.yaml - Lightweight checks for pull requests
timeout: 900s
options:
machineType: E2_HIGHCPU_8
steps:
- id: 'install'
name: 'node:20-slim'
entrypoint: 'npm'
args: ['ci']
- id: 'typecheck'
name: 'node:20-slim'
entrypoint: 'npm'
args: ['run', 'typecheck']
waitFor: ['install']
- id: 'lint'
name: 'node:20-slim'
entrypoint: 'npm'
args: ['run', 'lint']
waitFor: ['install']
- id: 'test-with-coverage'
name: 'node:20-slim'
entrypoint: 'bash'
args:
- '-c'
- |
npm run test:coverage
# node:20-slim ships without python3 or bc, and shell $ must be
# escaped as $$ so Cloud Build does not treat variables as
# substitutions; use Node itself to read and compare the summary.
COVERAGE=$$(node -p "require('./coverage/coverage-summary.json').total.lines.pct")
echo "Line coverage: $$COVERAGE%"
if ! node -e "process.exit(Number('$$COVERAGE') >= 80 ? 0 : 1)"; then
echo "FAILED: Coverage $$COVERAGE% is below 80% threshold"
exit 1
fi
waitFor: ['install']
- id: 'build-test'
name: 'node:20-slim'
entrypoint: 'npm'
args: ['run', 'build']
waitFor: ['typecheck']
- id: 'docker-build-test'
name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'test-image', '.']
waitFor: ['build-test']
- id: 'trivy-scan'
name: 'aquasec/trivy:latest'
args:
- 'image'
- '--exit-code'
- '1'
- '--severity'
- 'HIGH,CRITICAL'
- '--no-progress'
- 'test-image'
waitFor: ['docker-build-test']
Best Practices & Troubleshooting
Building reliable CI/CD pipelines on GCP requires attention to caching, security, parallelism, and operational practices. The following best practices come from production experience with Cloud Build at scale.
Build Performance
- Use kaniko for Docker builds: Kaniko builds Docker images without a Docker daemon, enabling layer caching in Artifact Registry. This can reduce build times by 50-80% for incremental changes.
- Parallelize independent steps: Use the waitFor field to run independent steps concurrently. Linting, type checking, and unit testing can all run in parallel after dependency installation.
- Choose the right machine type: For CPU-bound builds (compilation, bundling), use e2-highcpu-8 or larger. For memory-bound builds (large test suites, monorepos), use machines with more RAM.
- Cache dependencies: Store node_modules, .gradle, or pip caches in Cloud Storage between builds. Restore the cache at the start of the build and upload it at the end.
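The kaniko approach from the first bullet can be sketched as a drop-in replacement for the separate docker build and push steps (the repository path is illustrative; kaniko pushes the image itself, so no push step is needed):

```yaml
steps:
  - id: 'kaniko-build'
    name: 'gcr.io/kaniko-project/executor:latest'
    args:
      - '--destination=us-central1-docker.pkg.dev/$PROJECT_ID/docker-images/my-api:$COMMIT_SHA'
      - '--cache=true'        # store and reuse layer cache in the registry
      - '--cache-ttl=168h'    # reuse cached layers for up to a week
```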
Common Troubleshooting Issues
| Issue | Cause | Solution |
|---|---|---|
| Build timeout | Default 10-minute timeout too short | Set timeout in cloudbuild.yaml or per-step timeout |
| Permission denied | Cloud Build SA missing roles | Grant required roles to PROJECT_NUMBER@cloudbuild.gserviceaccount.com |
| Docker layer cache miss | Not using --cache-from | Pull the latest image before building and use --cache-from |
| Secret access failure | Missing Secret Manager accessor role | Grant roles/secretmanager.secretAccessor to Cloud Build SA |
| Cannot push to Artifact Registry | Missing Artifact Registry writer role | Grant roles/artifactregistry.writer to Cloud Build SA |
| VPC resources unreachable | Using default pool (no VPC access) | Switch to a private worker pool with VPC peering |
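For the first row in the table, timeouts can be set at both levels (values here are illustrative):

```yaml
timeout: 3600s            # whole-build limit (the default is 10 minutes)
steps:
  - id: 'e2e-tests'
    name: 'node:20-slim'
    entrypoint: 'npm'
    args: ['run', 'test:e2e']
    timeout: 1500s        # per-step limit; must fit inside the build timeout
```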
Cloud Build Service Account Permissions
By default, the Cloud Build service account PROJECT_NUMBER@cloudbuild.gserviceaccount.com has the roles/cloudbuild.builds.builder role, which grants limited permissions. For production pipelines, you typically need to grant additional roles: roles/run.admin for Cloud Run deployments, roles/container.developer for GKE deployments, roles/secretmanager.secretAccessor for secrets, and roles/artifactregistry.writer for pushing images. Consider using a custom service account per trigger for fine-grained access control.
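The role grants described above can be scripted as a sketch (the project ID is a placeholder; roles/iam.serviceAccountUser is also commonly required so Cloud Build can act as the Cloud Run runtime service account):

```shell
PROJECT_ID=my-project
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')
SA="serviceAccount:$PROJECT_NUMBER@cloudbuild.gserviceaccount.com"

for role in roles/run.admin roles/container.developer \
            roles/secretmanager.secretAccessor roles/artifactregistry.writer \
            roles/iam.serviceAccountUser; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" --member="$SA" --role="$role"
done
```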
Key Takeaways
1. Cloud Build provides serverless CI/CD with per-minute billing and Docker-based build steps.
2. Build triggers automate pipeline execution from GitHub, Cloud Source Repositories, and Bitbucket.
3. Artifact Registry stores container images, language packages, and OS packages securely.
4. Cloud Deploy provides managed continuous delivery with canary and rolling deployments to GKE and Cloud Run.
5. Binary Authorization enforces that only signed, trusted images are deployed to production.
6. Private worker pools enable builds within your VPC for accessing private resources.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.