GCP · DevOps & IaC · intermediate

Artifact Registry Guide

Guide to Artifact Registry covering Docker, npm, Maven, and Python repositories, remote and virtual repos, cleanup policies, vulnerability scanning, and CI/CD integration.

CloudToolStack Team · 20 min read · Published Mar 14, 2026

Prerequisites

  • Basic Docker knowledge (building and pushing images)
  • Familiarity with at least one package manager (npm, Maven, pip)
  • GCP project with billing enabled

Introduction to Artifact Registry

Artifact Registry is Google Cloud's fully managed, universal package repository for storing, managing, and securing build artifacts and dependencies. It supports Docker container images, language packages (npm, Maven, Python, Go, NuGet, Ruby), Helm charts, OS packages (APT, Yum), and Kubeflow pipeline artifacts in a single service. Artifact Registry replaces the legacy Container Registry (gcr.io) and is the recommended service for all new projects.

Beyond storage, Artifact Registry integrates deeply with the GCP security ecosystem. It supports IAM-based access control at the repository level, vulnerability scanning via Artifact Analysis, Binary Authorization for deployment-time verification, and VPC Service Controls for network-level isolation. These features make Artifact Registry a cornerstone of any secure software supply chain on GCP.

This guide covers creating repositories for various formats, pushing and pulling artifacts, configuring cleanup policies to manage storage costs, enabling vulnerability scanning, setting up remote and virtual repositories, and integrating with Cloud Build and GKE for end-to-end CI/CD pipelines.

Artifact Registry Pricing

Storage costs $0.10/GB/month. Network egress within the same region is free. Egress to other GCP regions costs standard networking rates. Vulnerability scanning with Artifact Analysis costs $0.26 per container image scanned. Cleanup policies can significantly reduce storage costs by automatically deleting old, untagged, or unused artifacts.
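As a back-of-envelope check, the rates above can be plugged into a quick shell calculation. The storage and scan volumes here are made-up figures for illustration:

```bash
# Estimate a monthly Artifact Registry bill from the rates above.
# STORAGE_GB and IMAGES_SCANNED are hypothetical workload figures.
STORAGE_GB=120
IMAGES_SCANNED=40
awk -v gb="$STORAGE_GB" -v imgs="$IMAGES_SCANNED" 'BEGIN {
  storage = gb * 0.10    # $0.10 per GB-month
  scans   = imgs * 0.26  # $0.26 per image scanned
  printf "storage=$%.2f scans=$%.2f total=$%.2f\n", storage, scans, storage + scans
}'
# prints: storage=$12.00 scans=$10.40 total=$22.40
```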

Creating Repositories

Artifact Registry organizes artifacts into repositories, each of which stores a single format (Docker, npm, Maven, etc.). Repositories are created in a specific location (a region or multi-region), and you should create them in the same region as your build and deployment infrastructure to minimize latency and egress costs.

bash
# Enable the Artifact Registry API
gcloud services enable artifactregistry.googleapis.com

# Create a Docker repository
gcloud artifacts repositories create docker-repo \
  --repository-format=docker \
  --location=us-central1 \
  --description="Production Docker images"

# Create an npm repository
gcloud artifacts repositories create npm-repo \
  --repository-format=npm \
  --location=us-central1 \
  --description="Internal npm packages"

# Create a Maven repository
gcloud artifacts repositories create maven-repo \
  --repository-format=maven \
  --location=us-central1 \
  --description="Java/Kotlin dependencies" \
  --version-policy=RELEASE

# Create a Python (PyPI) repository
gcloud artifacts repositories create python-repo \
  --repository-format=python \
  --location=us-central1 \
  --description="Internal Python packages"

# Create a Helm chart repository
gcloud artifacts repositories create helm-repo \
  --repository-format=docker \
  --location=us-central1 \
  --description="Helm charts stored as OCI artifacts"

# List all repositories
gcloud artifacts repositories list --location=us-central1

Working with Docker Images

Docker is the most common artifact format in Artifact Registry. The workflow involves authenticating Docker with Artifact Registry, tagging images with the registry path, and pushing them. GKE, Cloud Run, and Cloud Functions can all pull images directly from Artifact Registry using workload identity without additional authentication configuration.

bash
# Configure Docker authentication for Artifact Registry
gcloud auth configure-docker us-central1-docker.pkg.dev

# Build and tag a Docker image
docker build -t us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0 .

# Push the image
docker push us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0

# Also tag as latest
docker tag \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0 \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:latest
docker push us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:latest

# List images in a repository
gcloud artifacts docker images list \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo

# List tags for a specific image
gcloud artifacts docker tags list \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app

# Get image digest and metadata
gcloud artifacts docker images describe \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0

# Delete an image version by tag (this removes the underlying digest;
# add --delete-tags if other tags still point at the same digest)
gcloud artifacts docker images delete \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0 --quiet

# Delete untagged digests of a specific image (my-app)
gcloud artifacts docker images list \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app \
  --include-tags --filter="NOT tags:*" \
  --format="value(version)" | while read -r digest; do
  gcloud artifacts docker images delete \
    "us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app@${digest}" --quiet
done
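Once pushed, GCP runtimes can consume the image directly, as noted above. A sketch of deploying it to Cloud Run (the service name and region are illustrative):

```bash
# Deploy the pushed image to Cloud Run (service name is a placeholder)
gcloud run deploy my-app \
  --image=us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0 \
  --region=us-central1

# For reproducible deployments, resolve the tag to its immutable digest
# and pin manifests to my-app@sha256:... instead of a mutable tag
gcloud artifacts docker images describe \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0 \
  --format="value(image_summary.digest)"
```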

Working with npm Packages

Artifact Registry can serve as a private npm registry for your organization's internal packages. You can configure npm to use Artifact Registry as the default registry for scoped packages while continuing to use the public npm registry for open-source dependencies.

bash
# Get npm configuration for your repository
gcloud artifacts print-settings npm \
  --repository=npm-repo \
  --location=us-central1 \
  --scope=@myorg

# Add to .npmrc in your project root
# @myorg:registry=https://us-central1-npm.pkg.dev/MY_PROJECT/npm-repo/
# //us-central1-npm.pkg.dev/MY_PROJECT/npm-repo/:always-auth=true

# Configure npm authentication
npx google-artifactregistry-auth

# Publish a scoped package
npm publish --registry=https://us-central1-npm.pkg.dev/MY_PROJECT/npm-repo/

# Install a package from Artifact Registry
npm install @myorg/my-package

# List npm packages in the repository
gcloud artifacts packages list \
  --repository=npm-repo \
  --location=us-central1

# List versions of a specific package
gcloud artifacts versions list \
  --package=@myorg/my-package \
  --repository=npm-repo \
  --location=us-central1

Working with Maven Artifacts

pom.xml repository configuration
<!-- Add to your pom.xml -->
<repositories>
  <repository>
    <id>artifact-registry</id>
    <url>artifactregistry://us-central1-maven.pkg.dev/MY_PROJECT/maven-repo</url>
    <releases>
      <enabled>true</enabled>
    </releases>
  </repository>
</repositories>

<distributionManagement>
  <repository>
    <id>artifact-registry</id>
    <url>artifactregistry://us-central1-maven.pkg.dev/MY_PROJECT/maven-repo</url>
  </repository>
</distributionManagement>

<!-- Add the wagon extension for AR authentication -->
<build>
  <extensions>
    <extension>
      <groupId>com.google.cloud.artifactregistry</groupId>
      <artifactId>artifactregistry-maven-wagon</artifactId>
      <version>2.2.3</version>
    </extension>
  </extensions>
</build>
bash
# Deploy a Maven artifact
mvn deploy

# Print Maven settings for manual configuration
gcloud artifacts print-settings mvn \
  --repository=maven-repo \
  --location=us-central1
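Working with Python Packages

The python-repo created earlier can serve as a private PyPI index in the same way. A minimal sketch, assuming a built distribution in dist/ and the Artifact Registry keyring backend for authentication (the package name is a placeholder):

```bash
# Print pip and twine settings for the repository
gcloud artifacts print-settings python \
  --repository=python-repo \
  --location=us-central1

# Install the keyring backend so pip/twine authenticate with gcloud credentials
pip install keyring keyrings.google-artifactregistry-auth

# Upload a built distribution
twine upload \
  --repository-url https://us-central1-python.pkg.dev/MY_PROJECT/python-repo/ \
  dist/*

# Install from the repository
pip install my-package \
  --index-url https://us-central1-python.pkg.dev/MY_PROJECT/python-repo/simple/
```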

Remote and Virtual Repositories

Remote repositories act as caching proxies for upstream registries like Docker Hub, Maven Central, or npmjs.com. They cache downloaded artifacts locally, reducing external egress costs and improving build reliability (builds succeed even if the upstream registry is temporarily unavailable). Virtual repositories aggregate multiple repositories (both standard and remote) behind a single endpoint.

bash
# Create a remote repository that caches Docker Hub
gcloud artifacts repositories create dockerhub-cache \
  --repository-format=docker \
  --location=us-central1 \
  --mode=remote-repository \
  --remote-repo-config-desc="Docker Hub cache" \
  --remote-docker-repo=DOCKER-HUB

# Create a remote repository that caches npm registry
gcloud artifacts repositories create npmjs-cache \
  --repository-format=npm \
  --location=us-central1 \
  --mode=remote-repository \
  --remote-repo-config-desc="npmjs.com cache" \
  --remote-npm-repo=NPMJS

# Create a remote repository that caches Maven Central
gcloud artifacts repositories create maven-central-cache \
  --repository-format=maven \
  --location=us-central1 \
  --mode=remote-repository \
  --remote-repo-config-desc="Maven Central cache" \
  --remote-mvn-repo=MAVEN-CENTRAL

# Create a virtual repository combining internal + cached repos
gcloud artifacts repositories create npm-virtual \
  --repository-format=npm \
  --location=us-central1 \
  --mode=virtual-repository \
  --upstream-policy-file=- << 'EOF'
[
  {"id": "internal", "repository": "projects/MY_PROJECT/locations/us-central1/repositories/npm-repo", "priority": 100},
  {"id": "npmjs", "repository": "projects/MY_PROJECT/locations/us-central1/repositories/npmjs-cache", "priority": 200}
]
EOF

Remote Repository Benefits

Remote repositories are highly recommended for production builds. They protect against supply chain attacks by caching known-good versions of dependencies, reduce external bandwidth costs, and make builds resilient to upstream registry outages. A single npm build can download hundreds of packages; caching these locally in Artifact Registry saves significant egress costs and build time.
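To exercise the dockerhub-cache repository created above, pull upstream images through the cache's path; official Docker Hub images sit under the library/ namespace (the nginx tag here is just an example):

```bash
# The first pull fetches from Docker Hub and populates the cache;
# subsequent pulls are served from Artifact Registry
docker pull us-central1-docker.pkg.dev/MY_PROJECT/dockerhub-cache/library/nginx:1.27
```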

Cleanup Policies

Cleanup policies automatically delete artifacts that match specific criteria, helping you manage storage costs. You can create policies based on tag state (tagged or untagged), tag prefixes, age, and version count. Artifact Registry runs the policies periodically in the background and deletes matching artifacts automatically.

bash
# Create a cleanup policy to delete untagged images older than 7 days
gcloud artifacts repositories set-cleanup-policies docker-repo \
  --location=us-central1 \
  --policy=- << 'EOF'
[
  {
    "name": "delete-untagged",
    "action": {"type": "Delete"},
    "condition": {
      "tagState": "untagged",
      "olderThan": "604800s"
    }
  },
  {
    "name": "keep-last-10-releases",
    "action": {"type": "Keep"},
    "mostRecentVersions": {
      "packageNamePrefixes": ["my-app"],
      "keepCount": 10
    }
  },
  {
    "name": "delete-old-dev-images",
    "action": {"type": "Delete"},
    "condition": {
      "tagPrefixes": ["dev-", "test-", "pr-"],
      "olderThan": "259200s"
    }
  }
]
EOF

# Apply policies in dry-run mode: matching artifacts are logged
# (to Cloud Audit Logs) instead of deleted
gcloud artifacts repositories set-cleanup-policies docker-repo \
  --location=us-central1 \
  --policy=cleanup-policy.json \
  --dry-run

# Check current storage usage
gcloud artifacts repositories describe docker-repo \
  --location=us-central1 \
  --format="value(sizeBytes)"
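To confirm which policies are attached after applying them, the repository can be queried directly:

```bash
# Show the cleanup policies currently configured on the repository
gcloud artifacts repositories list-cleanup-policies docker-repo \
  --location=us-central1
```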

Vulnerability Scanning

Artifact Analysis (formerly Container Analysis) automatically scans container images for known vulnerabilities when they are pushed to Artifact Registry. Scanning checks OS packages (Debian, Ubuntu, Alpine, Red Hat) and language packages (Java, Go, Python, Node.js) against vulnerability databases that are updated continuously.

bash
# Enable vulnerability scanning
gcloud services enable containeranalysis.googleapis.com
gcloud services enable containerscanning.googleapis.com

# Scanning is automatic for Docker repositories
# After pushing an image, check its vulnerabilities:
gcloud artifacts docker images describe \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0 \
  --show-package-vulnerability

# List vulnerabilities for an image
gcloud artifacts vulnerabilities list \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0 \
  --format="table(vulnerability.shortDescription,vulnerability.effectiveSeverity,vulnerability.packageIssue[0].affectedPackage,vulnerability.packageIssue[0].fixedPackage)"

# On-demand scanning (scan before pushing)
gcloud artifacts docker images scan \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo/my-app:v1.0.0 \
  --location=us-central1

# Set up Binary Authorization to block images with critical vulns
# (Requires Binary Authorization API and attestor configuration)

Scanning Limitations

Vulnerability scanning identifies known CVEs in OS and language packages but does not scan application source code, configuration files, or secrets embedded in images. For comprehensive security, combine Artifact Analysis with source-code scanning (SonarQube, Snyk), secret detection (gitleaks, truffleHog), and image hardening (distroless base images, non-root users).

IAM and Access Control

bash
# Grant read access to a specific repository
gcloud artifacts repositories add-iam-policy-binding docker-repo \
  --location=us-central1 \
  --member="serviceAccount:my-app@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"

# Grant write access for CI/CD service account
gcloud artifacts repositories add-iam-policy-binding docker-repo \
  --location=us-central1 \
  --member="serviceAccount:cloud-build@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"

# Grant admin access (manage repos, cleanup policies)
gcloud artifacts repositories add-iam-policy-binding docker-repo \
  --location=us-central1 \
  --member="group:devops-team@example.com" \
  --role="roles/artifactregistry.repoAdmin"

# View IAM policy for a repository
gcloud artifacts repositories get-iam-policy docker-repo \
  --location=us-central1

Integration with Cloud Build

cloudbuild.yaml
steps:
  # Build the Docker image
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '-t'
      - 'us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/my-app:$COMMIT_SHA'
      - '-t'
      - 'us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/my-app:latest'
      - '.'

  # Run vulnerability scanning
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'artifacts'
      - 'docker'
      - 'images'
      - 'scan'
      - 'us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/my-app:$COMMIT_SHA'
      - '--location=us-central1'

  # Push to Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'push'
      - '--all-tags'
      - 'us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/my-app'

images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/my-app:$COMMIT_SHA'
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/my-app:latest'
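This config can be run ad hoc or from a trigger. Note that $COMMIT_SHA is only populated automatically for trigger-based builds; for a manual run it must be supplied explicitly. The repo owner and name below are placeholders:

```bash
# One-off run; pass COMMIT_SHA manually since it is trigger-only
gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=COMMIT_SHA=$(git rev-parse HEAD) .

# Run the pipeline on every push to main via a GitHub trigger
gcloud builds triggers create github \
  --repo-name=my-repo \
  --repo-owner=my-org \
  --branch-pattern='^main$' \
  --build-config=cloudbuild.yaml
```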

Cleanup

bash
# Delete all images in a repository (strip the registry prefix so
# "packages delete" receives just the package name)
gcloud artifacts docker images list \
  us-central1-docker.pkg.dev/MY_PROJECT/docker-repo \
  --format="value(package)" | sort -u | while read -r pkg; do
  gcloud artifacts packages delete "${pkg##*/docker-repo/}" \
    --repository=docker-repo \
    --location=us-central1 --quiet
done

# Delete repositories
gcloud artifacts repositories delete docker-repo --location=us-central1 --quiet
gcloud artifacts repositories delete npm-repo --location=us-central1 --quiet
gcloud artifacts repositories delete maven-repo --location=us-central1 --quiet
gcloud artifacts repositories delete python-repo --location=us-central1 --quiet
gcloud artifacts repositories delete helm-repo --location=us-central1 --quiet
gcloud artifacts repositories delete dockerhub-cache --location=us-central1 --quiet
gcloud artifacts repositories delete npmjs-cache --location=us-central1 --quiet
gcloud artifacts repositories delete maven-central-cache --location=us-central1 --quiet
gcloud artifacts repositories delete npm-virtual --location=us-central1 --quiet

Key Takeaways

  1. Artifact Registry supports Docker, npm, Maven, Python, Go, NuGet, and Helm in a single service.
  2. Remote repositories cache upstream registries, reducing egress costs and improving build reliability.
  3. Virtual repositories aggregate multiple repos behind a single endpoint for transparent dependency resolution.
  4. Cleanup policies automatically delete old, untagged, or unused artifacts to manage storage costs.
  5. Artifact Analysis scans container images for OS and language package vulnerabilities automatically.
  6. IAM controls provide repository-level access control with reader, writer, and admin roles.

Frequently Asked Questions

Should I migrate from Container Registry (gcr.io) to Artifact Registry?
Yes. Container Registry is deprecated and has been shut down, and Google recommends migrating to Artifact Registry. Artifact Registry provides all Container Registry features plus multi-format support, repository-level IAM, cleanup policies, and remote/virtual repositories. Google provides a migration tool: gcloud artifacts docker upgrade migrate.
How do remote repositories save money?
Remote repositories cache artifacts from upstream registries (Docker Hub, npmjs.com, Maven Central) in your GCP region. Subsequent pulls are served from the cache, eliminating external egress costs ($0.09-0.12/GB) and reducing build times. For large organizations with frequent builds, this can save hundreds of dollars per month.
Can I use Artifact Registry with non-GCP CI/CD systems?
Yes. Artifact Registry supports standard protocols (Docker Registry v2, npm, Maven, PyPI) and can be used with any CI/CD system that supports these protocols. Authenticate using service account keys, workload identity federation, or the gcloud credential helper.
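For example, a runner outside GCP might authenticate Docker in one of these ways (the key file path is a placeholder; prefer short-lived tokens or workload identity federation over static keys):

```bash
# Long-lived service account key (simple, but rotate carefully)
cat sa-key.json | docker login -u _json_key --password-stdin \
  https://us-central1-docker.pkg.dev

# Short-lived OAuth access token (valid for about an hour)
gcloud auth print-access-token | docker login -u oauth2accesstoken \
  --password-stdin https://us-central1-docker.pkg.dev
```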

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.