Eventarc Guide
Build event-driven architectures with Eventarc: triggers, Cloud Run destinations, Pub/Sub transport, and routing.
Prerequisites
- Basic understanding of event-driven architecture
- Familiarity with Cloud Run and Pub/Sub
What Is Eventarc?
Eventarc is Google Cloud's event routing service that enables you to build event-driven architectures by connecting event producers to event consumers. It provides a standardized way to receive events from over 130 Google Cloud sources (Cloud Storage, BigQuery, Firestore, Cloud SQL, etc.), custom applications, and third-party sources, and route them to Cloud Run services, Cloud Functions, GKE services, or Workflows.
Eventarc is built on the CloudEvents specification, an open standard for describing event data. Under the hood, events are transported via Pub/Sub topics and subscriptions that Eventarc manages on your behalf. This architecture provides at-least-once delivery guarantees, automatic retries, and dead-letter topic support.
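To make the CloudEvents format concrete, here is a rough sketch of the envelope a handler receives. The attribute values below are illustrative placeholders, not a captured event:

```python
import json

# Minimal CloudEvents 1.0 envelope, roughly as Eventarc delivers it.
# All values here are illustrative, not from a real event.
cloud_event = {
    "specversion": "1.0",  # CloudEvents spec version
    "id": "aabbcc-112233",  # unique per event; useful for deduplication
    "source": "//storage.googleapis.com/projects/_/buckets/my-upload-bucket",
    "type": "google.cloud.storage.object.v1.finalized",
    "time": "2024-01-15T10:00:00Z",
    "data": {  # source-specific payload
        "bucket": "my-upload-bucket",
        "name": "reports/daily.csv",
    },
}

# The four context attributes every CloudEvent must carry:
required = {"specversion", "id", "source", "type"}
assert required.issubset(cloud_event)
print(json.dumps(cloud_event["data"]))
```

Routing decisions (the `--event-filters` shown later in this guide) match against these context attributes, while `data` carries the source-specific payload.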
This guide covers creating triggers for Google Cloud events, Cloud Audit Log events, custom events via Pub/Sub, routing to Cloud Run and Workflows, event filtering, and production patterns for building reliable event-driven applications.
Eventarc vs Pub/Sub
Eventarc builds on top of Pub/Sub, adding event routing, filtering, and CloudEvents formatting. Use Pub/Sub directly when you need fine-grained control over topics, subscriptions, and message acknowledgment. Use Eventarc when you want declarative event routing from Google Cloud services to Cloud Run or Workflows without managing Pub/Sub infrastructure yourself.
Event Sources and Trigger Types
Eventarc supports three categories of event sources, each with a different transport mechanism and trigger configuration.
| Source Type | Transport | Examples | Latency |
|---|---|---|---|
| Direct events | Pub/Sub | Cloud Storage (object created/deleted), Firestore (document written) | Seconds |
| Cloud Audit Logs | Pub/Sub | Any audited API call (VM created, IAM change, etc.) | Minutes |
| Custom / third-party | Pub/Sub channel | Your application events, partner integrations | Seconds |
Creating Triggers for Google Cloud Events
# Trigger Cloud Run when a file is uploaded to Cloud Storage
gcloud eventarc triggers create storage-upload-trigger \
--location=us-central1 \
--destination-run-service=file-processor \
--destination-run-region=us-central1 \
--event-filters="type=google.cloud.storage.object.v1.finalized" \
--event-filters="bucket=my-upload-bucket" \
--service-account=eventarc-sa@PROJECT_ID.iam.gserviceaccount.com
# Trigger on Firestore document changes
gcloud eventarc triggers create firestore-trigger \
--location=us-central1 \
--destination-run-service=order-processor \
--destination-run-region=us-central1 \
--event-filters="type=google.cloud.firestore.document.v1.written" \
--event-filters="database=(default)" \
--event-data-content-type="application/protobuf" \
--service-account=eventarc-sa@PROJECT_ID.iam.gserviceaccount.com
# Trigger on BigQuery job completion
gcloud eventarc triggers create bigquery-job-trigger \
--location=us-central1 \
--destination-run-service=post-query-processor \
--destination-run-region=us-central1 \
--event-filters="type=google.cloud.bigquery.v2.JobCompleted" \
--service-account=eventarc-sa@PROJECT_ID.iam.gserviceaccount.com
# List all triggers
gcloud eventarc triggers list --location=us-central1 \
--format='table(name, destination, eventFilters)'
Cloud Run Event Handler
When Eventarc routes an event to Cloud Run, it sends an HTTP POST request with the CloudEvents payload. Your Cloud Run service processes the event and returns a 2xx status code to acknowledge successful processing. Non-2xx responses trigger automatic retries.
from flask import Flask, request
import json
from cloudevents.http import from_http

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_event():
    """Handle CloudEvents from Eventarc."""
    event = from_http(request.headers, request.get_data())
    print(f"Event type: {event['type']}")
    print(f"Event source: {event['source']}")
    print(f"Event time: {event['time']}")
    # Handle Cloud Storage events
    if event["type"] == "google.cloud.storage.object.v1.finalized":
        data = event.data
        bucket = data["bucket"]
        name = data["name"]
        size = data.get("size", 0)
        print(f"File uploaded: gs://{bucket}/{name} ({size} bytes)")
        process_uploaded_file(bucket, name)
    # Handle Firestore events
    elif event["type"] == "google.cloud.firestore.document.v1.written":
        data = event.data
        print(f"Document changed: {data}")
        process_document_change(data)
    return ("OK", 200)

def process_uploaded_file(bucket, name):
    """Process an uploaded file (e.g., resize image, extract metadata)."""
    from google.cloud import storage
    client = storage.Client()
    blob = client.bucket(bucket).blob(name)
    # Process the file...
    print(f"Processed {name} from {bucket}")

def process_document_change(data):
    """Process a Firestore document change."""
    print(f"Document changed: {json.dumps(data, indent=2)}")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
Cloud Audit Log Triggers
Cloud Audit Log triggers react to API calls recorded in Cloud Audit Logs. This enables you to respond to virtually any action in Google Cloud: VM creation, IAM policy changes, secret access, database modifications, and more. Audit Log triggers have higher latency (minutes) than direct event triggers because they depend on audit log ingestion.
# Trigger when a Compute Engine VM is created
gcloud eventarc triggers create vm-created-trigger \
--location=us-central1 \
--destination-run-service=vm-tagger \
--destination-run-region=us-central1 \
--event-filters="type=google.cloud.audit.log.v1.written" \
--event-filters="serviceName=compute.googleapis.com" \
--event-filters="methodName=v1.compute.instances.insert" \
--service-account=eventarc-sa@PROJECT_ID.iam.gserviceaccount.com
# Trigger when an IAM policy is changed
gcloud eventarc triggers create iam-change-trigger \
--location=us-central1 \
--destination-run-service=iam-auditor \
--destination-run-region=us-central1 \
--event-filters="type=google.cloud.audit.log.v1.written" \
--event-filters="serviceName=iam.googleapis.com" \
--event-filters="methodName=google.iam.admin.v1.SetIAMPolicy" \
--service-account=eventarc-sa@PROJECT_ID.iam.gserviceaccount.com
# Trigger when a Secret Manager secret is accessed
gcloud eventarc triggers create secret-access-trigger \
--location=us-central1 \
--destination-run-service=secret-auditor \
--destination-run-region=us-central1 \
--event-filters="type=google.cloud.audit.log.v1.written" \
--event-filters="serviceName=secretmanager.googleapis.com" \
--event-filters="methodName=google.cloud.secretmanager.v1.SecretManagerService.AccessSecretVersion" \
--service-account=eventarc-sa@PROJECT_ID.iam.gserviceaccount.com
Enable Audit Logs
Cloud Audit Log triggers only work if the relevant audit logs are enabled. Data Access audit logs are disabled by default for most services. Enable them in the IAM & Admin console under Audit Logs. Admin Activity audit logs are always enabled and free. Data Access audit logs may incur logging charges based on volume.
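Data Access audit logs are enabled per service through the `auditConfigs` block of the project IAM policy. As a rough sketch, this is the shape of the fragment you would merge into the policy; the service name and log types shown are common choices, and the exact update mechanism (console, `gcloud projects set-iam-policy`, or Terraform) is up to you:

```python
# Sketch of an IAM policy auditConfigs entry enabling Data Access
# audit logs for Cloud Storage. Values are illustrative; merge this
# into your project IAM policy with your preferred tooling.
audit_config = {
    "service": "storage.googleapis.com",  # or "allServices" for everything
    "auditLogConfigs": [
        {"logType": "DATA_READ"},   # object reads
        {"logType": "DATA_WRITE"},  # object writes
        # {"logType": "ADMIN_READ"},  # optionally, admin read operations
    ],
}

log_types = {c["logType"] for c in audit_config["auditLogConfigs"]}
print(sorted(log_types))
```

Admin Activity (admin write) logs need no configuration; only the Data Access log types above are opt-in and potentially billable.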
Custom Events via Pub/Sub Channels
You can publish custom events from your applications and route them through Eventarc using Pub/Sub channels. This enables a consistent event-driven architecture where both Google Cloud events and application events flow through the same routing layer.
# Create an Eventarc channel for custom events
gcloud eventarc channels create my-app-channel \
--location=us-central1
# Create a trigger for custom events on the channel
gcloud eventarc triggers create custom-order-trigger \
--location=us-central1 \
--destination-run-service=order-processor \
--destination-run-region=us-central1 \
--channel=my-app-channel \
--event-filters="type=com.myapp.order.created" \
--service-account=eventarc-sa@PROJECT_ID.iam.gserviceaccount.com
# Publish a custom event
from google.cloud import eventarc_publishing_v1
import json
import uuid

client = eventarc_publishing_v1.PublisherClient()
channel = "projects/PROJECT_ID/locations/us-central1/channels/my-app-channel"

event = {
    "@type": "type.googleapis.com/io.cloudevents.v1.CloudEvent",
    "id": str(uuid.uuid4()),
    "source": "//myapp/orders",
    "type": "com.myapp.order.created",
    "spec_version": "1.0",
    "text_data": json.dumps({
        "orderId": "ord-123",
        "customerId": "cust-456",
        "total": 99.99,
        "items": [{"sku": "WIDGET-1", "qty": 3}]
    })
}

# publish_events targets a channel you own; publish_channel_connection_events
# is only for partner channel connections.
client.publish_events(
    request=eventarc_publishing_v1.PublishEventsRequest(
        channel=channel,
        events=[event],
    )
)
print("Custom event published")
Routing to Workflows
Eventarc can route events to Google Cloud Workflows for complex multi-step processing. This is useful when event handling requires orchestrating multiple API calls, conditional logic, error handling, and human approval steps.
# Create a trigger that routes to a Workflow
gcloud eventarc triggers create storage-to-workflow \
--location=us-central1 \
--destination-workflow=file-processing-workflow \
--destination-workflow-location=us-central1 \
--event-filters="type=google.cloud.storage.object.v1.finalized" \
--event-filters="bucket=my-upload-bucket" \
--service-account=eventarc-sa@PROJECT_ID.iam.gserviceaccount.com
# file-processing-workflow.yaml
main:
  params: [event]
  steps:
    - extract_info:
        assign:
          - bucket: ${event.data.bucket}
          - filename: ${event.data.name}
          - size: ${int(event.data.size)}
    - check_file_type:
        switch:
          - condition: ${text.match_regex(filename, ".*\.csv$")}
            next: process_csv
          - condition: ${text.match_regex(filename, ".*\.(jpg|png)$")}
            next: process_image
        next: log_unsupported
    - process_csv:
        call: http.post
        args:
          url: https://csv-processor-abc123.run.app/process
          body:
            bucket: ${bucket}
            filename: ${filename}
          auth:
            type: OIDC
        result: csv_result
        next: complete
    - process_image:
        call: http.post
        args:
          url: https://image-processor-abc123.run.app/process
          body:
            bucket: ${bucket}
            filename: ${filename}
          auth:
            type: OIDC
        result: image_result
        next: complete
    - log_unsupported:
        call: sys.log
        args:
          text: ${"Unsupported file type - " + filename}
          severity: WARNING
        next: complete
    - complete:
        return: "Processing complete"
Terraform Configuration
resource "google_eventarc_trigger" "storage_trigger" {
  name     = "storage-upload-trigger"
  location = "us-central1"
  project  = var.project_id

  matching_criteria {
    attribute = "type"
    value     = "google.cloud.storage.object.v1.finalized"
  }

  matching_criteria {
    attribute = "bucket"
    value     = google_storage_bucket.uploads.name
  }

  destination {
    cloud_run_service {
      service = google_cloud_run_v2_service.processor.name
      region  = "us-central1"
    }
  }

  service_account = google_service_account.eventarc.email

  depends_on = [
    google_project_iam_member.eventarc_invoker,
    google_project_iam_member.eventarc_receiver,
  ]
}

resource "google_service_account" "eventarc" {
  account_id   = "eventarc-sa"
  display_name = "Eventarc Service Account"
}

resource "google_project_iam_member" "eventarc_invoker" {
  project = var.project_id
  role    = "roles/run.invoker"
  member  = "serviceAccount:${google_service_account.eventarc.email}"
}

resource "google_project_iam_member" "eventarc_receiver" {
  project = var.project_id
  role    = "roles/eventarc.eventReceiver"
  member  = "serviceAccount:${google_service_account.eventarc.email}"
}
Error Handling and Dead-Letter Topics
When an event delivery fails (non-2xx response from the destination), Eventarc retries delivery based on the underlying Pub/Sub subscription retry policy. You can configure a dead-letter topic to capture events that exhaust all retry attempts, preventing data loss.
# Configure retry and dead-letter for a trigger
# Eventarc uses the underlying Pub/Sub subscription settings
# Get the subscription name from the trigger
gcloud eventarc triggers describe storage-upload-trigger \
--location=us-central1 \
--format='value(transport.pubsub.subscription)'
# Configure dead-letter topic on the subscription
gcloud pubsub subscriptions update SUBSCRIPTION_NAME \
--dead-letter-topic=projects/PROJECT_ID/topics/eventarc-dead-letter \
--max-delivery-attempts=10 \
--min-retry-delay=10s \
--max-retry-delay=600s
# Monitor dead-letter topic for failed events
gcloud pubsub subscriptions pull dead-letter-sub --limit=10 --auto-ack
Idempotent Handlers
Since Eventarc provides at-least-once delivery, your event handlers must be idempotent. Use the CloudEvent id field to detect and skip duplicate events. Store processed event IDs in Firestore, Memorystore, or a database with a TTL matching your event deduplication window.
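The deduplication pattern above can be sketched as follows. The in-memory set is a hypothetical stand-in for the durable store; production code would use Firestore or Memorystore with a TTL, as noted:

```python
# Deduplicate events by CloudEvent id. The in-memory set stands in for
# a durable store (Firestore, Memorystore) with a TTL matching your
# deduplication window; names here are illustrative.
processed_ids = set()

def handle_once(event_id: str, payload: dict) -> bool:
    """Process the event unless its id was already seen. Returns True if processed."""
    if event_id in processed_ids:
        return False  # duplicate delivery: acknowledge without reprocessing
    processed_ids.add(event_id)
    print(f"processing {event_id}: {payload}")
    return True

first = handle_once("evt-001", {"orderId": "ord-123"})
second = handle_once("evt-001", {"orderId": "ord-123"})  # redelivery is skipped
```

Marking the id before processing means a crash mid-processing can drop that event; marking after processing preserves at-least-once semantics at the cost of occasional reprocessing. Choose based on which failure mode your workload tolerates.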
Key Takeaways
- Eventarc routes events from 130+ Google Cloud sources to Cloud Run, Workflows, and GKE.
- Direct events (Storage, Firestore) arrive in seconds; Audit Log events take minutes.
- Custom events flow through Pub/Sub channels using the CloudEvents specification.
- Dead-letter topics and retry policies prevent data loss for failed event deliveries.
Frequently Asked Questions
How is Eventarc different from Pub/Sub?
Eventarc builds on Pub/Sub, adding declarative routing, filtering, and CloudEvents formatting. Use Pub/Sub directly when you need fine-grained control over topics, subscriptions, and message acknowledgment.
Can Eventarc trigger Workflows?
Yes. A trigger can target a Workflow via --destination-workflow, which is useful when event handling requires multi-step orchestration, conditional logic, or error handling across several services.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.