
Security Command Center

Get started with GCP Security Command Center for threat detection and vulnerability management.

CloudToolStack Team22 min readPublished Feb 22, 2026

What Is Security Command Center

Security Command Center (SCC) is Google Cloud’s built-in security and risk management platform. It provides a centralized dashboard for identifying misconfigurations, vulnerabilities, threats, and compliance violations across your entire GCP organization. Think of it as your cloud security operations center, a single pane of glass that aggregates security signals from every project, service, and resource in your organization.

SCC comes in three tiers: Standard (free, included with every GCP organization), Premium (paid, includes advanced threat detection, compliance reporting, and vulnerability scanning), and Enterprise (includes multi-cloud support, Chronicle SIEM/SOAR integration, and AI-powered investigation). For any production environment handling sensitive data, the Premium tier is strongly recommended as the minimum.

Unlike third-party security tools that require agents, API integrations, and complex configuration, SCC is deeply integrated with GCP. It automatically discovers every resource in your organization, evaluates their security posture against a comprehensive library of detectors, and correlates findings across multiple signals to identify the most critical risks. This native integration means SCC can detect misconfigurations and threats that external tools often miss.

SCC Tier Comparison

| Feature | Standard (Free) | Premium | Enterprise |
|---|---|---|---|
| Security Health Analytics | Basic misconfigurations only | Full coverage (140+ detectors) | Full coverage + custom detectors |
| Event Threat Detection | Not included | Real-time threat detection from logs | Enhanced with Chronicle integration |
| Container Threat Detection | Not included | Runtime threat detection in GKE | Runtime threat detection in GKE |
| Web Security Scanner | Basic (manual scans only) | Managed scans with OWASP Top 10 coverage | Managed scans with OWASP Top 10 coverage |
| Vulnerability scanning | OS vulnerability for Compute Engine | OS + container image scanning | OS + container + serverless scanning |
| Compliance reporting | Not included | CIS Benchmarks, PCI DSS, NIST 800-53, ISO 27001 | All Premium frameworks + custom frameworks |
| Attack path simulation | Not included | Identifies high-risk paths to critical resources | Enhanced attack path with blast radius analysis |
| Toxic combination detection | Not included | Not included | AI-powered detection of risky permission combinations |
| Multi-cloud coverage | GCP only | GCP only | GCP + AWS + Azure |
| SIEM/SOAR integration | Manual export | Pub/Sub notifications | Native Chronicle SIEM + SOAR |

Organization-Level Activation Required

SCC operates at the organization level, not the project level. You must have an organization resource (tied to a Google Workspace or Cloud Identity domain) to use SCC. If you are running projects without an organization, you need to create one first. This is also why SCC provides cross-project visibility: it scans every project in your org. To activate SCC Premium, you need the securitycenter.admin role at the organization level.

Activating and Configuring SCC

Setting up SCC requires organization-level permissions and a systematic approach to ensure all modules are properly configured. Here is the complete activation workflow.

Activate and configure SCC
# Check your organization ID
gcloud organizations list

# Verify you have the required role
gcloud organizations get-iam-policy ORGANIZATION_ID \
  --filter="bindings.role:roles/securitycenter.admin" \
  --flatten="bindings[].members" \
  --format="table(bindings.members)"

# Enable the Security Command Center API
gcloud services enable securitycenter.googleapis.com \
  --project=my-project

# Activate SCC Premium (organization-level)
# This is typically done through the Console:
# Security > Security Command Center > Settings > Tier

# Enable all built-in services
gcloud scc settings services enable \
  --organization=ORGANIZATION_ID \
  --service=security-health-analytics

gcloud scc settings services enable \
  --organization=ORGANIZATION_ID \
  --service=event-threat-detection

gcloud scc settings services enable \
  --organization=ORGANIZATION_ID \
  --service=container-threat-detection

gcloud scc settings services enable \
  --organization=ORGANIZATION_ID \
  --service=web-security-scanner

gcloud scc settings services enable \
  --organization=ORGANIZATION_ID \
  --service=virtual-machine-threat-detection

# Verify all services are enabled
gcloud scc settings services describe \
  --organization=ORGANIZATION_ID \
  --service=security-health-analytics \
  --format="yaml(serviceEnablementState)"

# Grant the SCC service account access to scan resources
# (This is usually automatic, but verify if scans are not running)
gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
  --member="serviceAccount:service-org-ORGANIZATION_ID@security-center-api.iam.gserviceaccount.com" \
  --role="roles/cloudasset.viewer"

Enable Logging Before Activating SCC

Event Threat Detection (ETD) analyzes Cloud Audit Logs, VPC Flow Logs, and DNS logs to detect threats. If these logs are not enabled, ETD has nothing to analyze and will miss threats. Before activating SCC Premium, ensure that Cloud Audit Logs are enabled for all services (Data Access logs in particular), VPC Flow Logs are enabled on all subnets, and DNS logging is enabled via Cloud DNS policies. Without these log sources, you are paying for threat detection capability that cannot detect threats.
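Before turning on ETD, it helps to audit which subnets actually have VPC Flow Logs enabled. A minimal, pure-logic sketch: it inspects subnet records shaped like the output of `gcloud compute networks subnets list --format=json` (where `enableFlowLogs` is the relevant field) without calling any API, so the triage logic itself can be tested; the subnet names are illustrative.

```python
def subnets_missing_flow_logs(subnets):
    """Return names of subnets where flow logs are off or never set."""
    return [
        s["name"]
        for s in subnets
        if not s.get("enableFlowLogs", False)
    ]


if __name__ == "__main__":
    # Records shaped like `gcloud compute networks subnets list` output
    sample = [
        {"name": "prod-subnet", "enableFlowLogs": True},
        {"name": "staging-subnet", "enableFlowLogs": False},
        {"name": "legacy-subnet"},  # field absent: logs never enabled
    ]
    print(subnets_missing_flow_logs(sample))
    # → ['staging-subnet', 'legacy-subnet']
```

Run this against the JSON from every project before paying for Premium; any subnet it reports is a blind spot for ETD.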

Security Health Analytics

Security Health Analytics (SHA) is the core scanning engine in SCC. It continuously evaluates your GCP resources against a library of over 140 detectors that identify common misconfigurations. Findings are categorized by severity (Critical, High, Medium, Low) and include detailed remediation guidance. SHA runs automatically; new resources are scanned within minutes of creation, and existing resources are re-evaluated continuously.

Key Finding Categories

| Finding Category | Example Detectors | Severity | Risk |
|---|---|---|---|
| Public access | Public Cloud SQL instance, public GCS bucket, public BigQuery dataset | Critical / High | Data exposure to the internet |
| Encryption | Disk not encrypted with CMEK, Cloud SQL not using SSL, unencrypted Pub/Sub topic | Medium / High | Data at rest or in transit not protected |
| IAM | Primitive roles in use, SA key older than 90 days, over-privileged service accounts | Medium / High | Excessive permissions increase blast radius |
| Networking | Open firewall to all IPs, VMs with external IPs, legacy networks | High | Unrestricted network access |
| Logging | Audit logging disabled, VPC Flow Logs disabled, DNS logging disabled | Medium | No visibility during incidents |
| Compute | Default service account in use, serial port enabled, OS login disabled | Medium | Weak VM security posture |
| Kubernetes | Legacy ABAC enabled, dashboard enabled, privileged containers | High | GKE cluster misconfiguration |
| Database | Cloud SQL no backup, authorized networks too broad, no SSL enforcement | Medium / High | Data loss or unauthorized database access |

Querying and Managing Findings

Query and manage SCC findings
# List all active critical findings
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE" AND severity="CRITICAL"' \
  --format="table(finding.category, finding.resourceName, finding.severity, finding.createTime)"

# List findings for a specific project
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE" AND resource.projectDisplayName="prod-project"' \
  --format="table(finding.category, finding.severity)"

# Get detailed finding with remediation steps
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='finding.category="PUBLIC_BUCKET_ACL"' \
  --format=json

# Count findings by severity across the organization
gcloud scc findings group organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE"' \
  --group-by="severity"

# Count findings by category (find your biggest problem areas)
gcloud scc findings group organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE" AND severity="HIGH"' \
  --group-by="category"

# Mute a finding (accepted risk with justification)
gcloud scc findings set-mute FINDING_ID \
  --organization=ORGANIZATION_ID \
  --source=SOURCE_ID \
  --mute=MUTED

# Create a mute rule for intentionally public resources
gcloud scc muteconfigs create public-website-bucket \
  --organization=ORGANIZATION_ID \
  --filter='category="PUBLIC_BUCKET_ACL" AND resource.name:"website-assets"' \
  --description="Website assets bucket is intentionally public"

# List all mute configurations (audit accepted risks)
gcloud scc muteconfigs list --organization=ORGANIZATION_ID

Do Not Ignore Medium Findings

Teams often focus only on Critical and High findings, but Medium findings frequently represent the misconfiguration chains that lead to breaches. For example, a “VPC Flow Logs disabled” finding (Medium) means you would have no visibility into network traffic during an incident. A “default service account in use” finding (Medium) means a compromised VM inherits broad project-level permissions. Review all severity levels during your security review process, and set a goal to address Medium findings within 30 days.
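The 30-day goal above can be tracked mechanically. A hedged sketch that flags findings older than a remediation SLA for their severity, assuming finding dicts carrying the `severity` and `createTime` fields SCC findings expose; the SLA table itself is illustrative policy, not an SCC feature.

```python
from datetime import datetime, timedelta, timezone

# Illustrative remediation SLAs per severity (your policy, not SCC's)
SLA_DAYS = {"CRITICAL": 1, "HIGH": 7, "MEDIUM": 30, "LOW": 90}


def overdue_findings(findings, now=None):
    """Return categories of findings older than their severity's SLA."""
    now = now or datetime.now(timezone.utc)
    overdue = []
    for f in findings:
        created = datetime.fromisoformat(
            f["createTime"].replace("Z", "+00:00")
        )
        sla = timedelta(days=SLA_DAYS.get(f["severity"], 30))
        if now - created > sla:
            overdue.append(f["category"])
    return overdue


if __name__ == "__main__":
    now = datetime(2026, 3, 1, tzinfo=timezone.utc)
    findings = [
        {"category": "FLOW_LOGS_DISABLED", "severity": "MEDIUM",
         "createTime": "2026-01-15T00:00:00Z"},   # 45 days old
        {"category": "PUBLIC_BUCKET_ACL", "severity": "CRITICAL",
         "createTime": "2026-02-28T12:00:00Z"},   # ~12 hours old
    ]
    print(overdue_findings(findings, now=now))
    # → ['FLOW_LOGS_DISABLED']
```

Feed it the JSON from `gcloud scc findings list` (or the BigQuery export) and alert on anything it returns.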

Event Threat Detection

Event Threat Detection (ETD) analyzes Cloud Audit Logs, VPC Flow Logs, and DNS logs in near real-time to detect active threats. Unlike Security Health Analytics, which finds static misconfigurations, ETD finds dynamic threats: actions that attackers are taking right now. It uses Google’s threat intelligence, machine learning models, and behavioral analysis to identify suspicious activity that point-in-time scanning would miss.

ETD processes log entries within seconds of ingestion, meaning you can detect and respond to threats in near real-time. It compares activity against known indicators of compromise (IoCs) from Google’s Threat Intelligence team, as well as behavioral baselines that identify anomalous patterns unique to your organization.

Threat Categories

| Threat Category | What It Detects | Log Source | Example Scenario |
|---|---|---|---|
| Cryptocurrency mining | Connections to known mining pools, GPU/CPU abuse patterns | VPC Flow Logs, Audit Logs | Compromised VM starts mining Monero |
| Data exfiltration | Unusually large data transfers to external destinations | VPC Flow Logs, Audit Logs | Insider copies database backup to external bucket |
| Brute force | Repeated failed authentication attempts | Audit Logs | SSH brute force against a VM’s external IP |
| Malware | Connections to known C2 servers | DNS Logs, VPC Flow Logs | VM resolves known malware domain |
| IAM anomalies | Unusual role grants, SA key creation, API usage from new geos | Audit Logs | Service account suddenly used from unexpected geography |
| Privilege escalation | Permission elevation through IAM changes or impersonation | Audit Logs | User grants themselves Owner role |
| Destructive actions | Mass deletion of resources, disabling security controls | Audit Logs | Attacker deletes all Cloud SQL backups |
| Defense evasion | Disabling logging, modifying audit configs, deleting logs | Audit Logs | Attacker disables VPC Flow Logs before lateral movement |

Query and respond to ETD findings
# List all active threat findings
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE" AND finding.findingClass="THREAT"' \
  --format="table(finding.category, finding.severity, finding.resourceName, finding.createTime)"

# Get detailed threat finding with indicators
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='finding.category="Malware: Cryptomining Bad Domain"' \
  --format=json

# List findings from Event Threat Detection specifically:
# first look up the ETD source ID, then scope the findings list to it
gcloud scc sources describe organizations/ORGANIZATION_ID \
  --source-display-name="Event Threat Detection"

gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/ETD_SOURCE_ID \
  --filter='state="ACTIVE"' \
  --format="table(finding.category, finding.severity, finding.resourceName)"

# Check for privilege escalation attempts in the last 24 hours
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE" AND finding.category="Persistence: IAM Anomalous Grant"' \
  --format=json

Correlate ETD Findings with Audit Logs

When ETD detects a threat, your next step is always to investigate the context. Use the finding’s resource name and timestamp to search Cloud Audit Logs for related activity by the same principal. For example, if ETD detects an anomalous IAM grant, search for all actions by that principal in the preceding 24 hours to understand the full scope of the incident. Export both SCC findings and audit logs to BigQuery for efficient cross-referencing during investigations.
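The principal-plus-time-window search described above is easy to script. A sketch that builds a Cloud Logging filter string from a finding's principal and timestamp; `protoPayload.authenticationInfo.principalEmail` is the audit-log field recording which principal performed each action, and the service-account name is hypothetical.

```python
from datetime import datetime, timedelta, timezone


def audit_log_filter(principal, finding_time, lookback_hours=24):
    """Build a Cloud Logging filter covering the window before a finding."""
    start = finding_time - timedelta(hours=lookback_hours)
    return (
        'logName:"cloudaudit.googleapis.com" AND '
        f'protoPayload.authenticationInfo.principalEmail="{principal}" AND '
        f'timestamp>="{start.isoformat()}" AND '
        f'timestamp<="{finding_time.isoformat()}"'
    )


if __name__ == "__main__":
    # Timestamp and principal would come from the ETD finding itself
    t = datetime(2026, 2, 22, 14, 0, tzinfo=timezone.utc)
    print(audit_log_filter(
        "suspect-sa@prod-project.iam.gserviceaccount.com", t
    ))
```

Paste the resulting string into the Logs Explorer query box, or pass it to `gcloud logging read`, to pull every action the principal took in the preceding day.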

Container Threat Detection

For GKE workloads, Container Threat Detection (CTD) provides runtime security monitoring that goes beyond what network-level monitoring can achieve. CTD deploys a lightweight agent (DaemonSet) on GKE nodes that monitors system calls, process activity, and binary execution within containers. This gives you visibility into what is actually happening inside your containers at runtime.

Container-Specific Threats Detected

  • Reverse shell: Detects unexpected outbound shell connections, indicating an attacker has gained interactive access to a container.
  • Suspicious kernel module: Loading of a kernel module from within a container, which could indicate a container escape attempt.
  • Added binary executed: Execution of a binary that was not present in the original container image, indicating a potential compromise where malicious tools have been downloaded.
  • Modified binary executed: Execution of a binary that has been modified since the container started, indicating tampering.
  • Unexpected child process: A process spawned by an application runtime that is not typical for that runtime (e.g., a Python web server spawning a cryptocurrency miner).
  • Unexpected shell: An interactive shell started in a container that typically does not have shell access.
  • Malicious script executed: Detection of known malicious scripts (e.g., coin miners, exploits) based on content analysis.
Enable and monitor Container Threat Detection
# Verify CTD is enabled for your organization
gcloud scc settings services describe \
  --organization=ORGANIZATION_ID \
  --service=container-threat-detection

# Check that CTD DaemonSet is running on your cluster
kubectl get ds -n kube-system | grep container-threat-detection

# List container-specific threat findings
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE" AND finding.category:"Container"' \
  --format="table(finding.category, finding.severity, finding.resourceName, finding.createTime)"

# Get detailed container threat finding
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='finding.category="Added Binary Executed"' \
  --format=json
# The finding includes: container name, pod name, namespace,
# binary path, binary hash, and parent process information

# Set up alert for container threats
gcloud scc notifications create container-threats \
  --organization=ORGANIZATION_ID \
  --pubsub-topic=projects/security-project/topics/scc-findings \
  --filter='state="ACTIVE" AND finding.category:"Container" AND severity="CRITICAL"'

Enable Container Threat Detection on All Clusters

Container Threat Detection has minimal performance impact (less than 2% CPU overhead on nodes) and provides visibility that network-level monitoring cannot achieve. Enable it on all GKE clusters, including development and staging, since compromises often start in lower-security environments and pivot to production. CTD is included with SCC Premium at no additional per-cluster cost.

Attack Path Simulation

Attack path simulation is one of SCC Premium’s most valuable features. Instead of presenting individual findings in isolation, it maps out how an attacker could chain multiple misconfigurations together to reach your most critical resources (called “high-value resources”). This helps you prioritize remediation by showing which findings, if exploited together, pose the greatest actual risk.

How Attack Path Simulation Works

SCC automatically identifies your high-value resources (databases with sensitive data, service accounts with broad permissions, VMs with external access) and then simulates attack paths that could compromise those resources. Each path consists of multiple steps, where each step exploits a specific misconfiguration or vulnerability.

  • Step 1: Attacker exploits a vulnerability in a public-facing service (e.g., unpatched web server with CVE).
  • Step 2: Attacker accesses the VM’s metadata server and obtains the service account credentials (default SA with broad permissions).
  • Step 3: Using the service account, the attacker accesses a Cloud SQL database that allows connections from any internal IP.
  • Step 4: Attacker exfiltrates sensitive data from the database.

By seeing the full attack path, you can identify which single remediation would break the chain most effectively. In the example above, fixing the default service account (Step 2) would block the path even if the vulnerability (Step 1) remains temporarily unpatched.
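This choke-point reasoning can be sketched as a small counting exercise: given several attack paths (each a chain of finding IDs; all names here are hypothetical), the misconfiguration shared by the most paths is the highest-leverage single fix.

```python
from collections import Counter


def best_single_fix(attack_paths):
    """Return (finding_id, path_count) for the misconfiguration that
    appears in the most paths - fixing it breaks each path it is in."""
    counts = Counter(
        step for path in attack_paths for step in set(path)
    )
    return counts.most_common(1)[0]


if __name__ == "__main__":
    # Each path is the ordered chain of findings an attacker exploits
    paths = [
        ["unpatched-cve", "default-sa-broad-perms", "sql-open-internal"],
        ["public-ssh", "default-sa-broad-perms", "sql-open-internal"],
        ["leaked-sa-key", "sql-open-internal"],
    ]
    fix, n_paths = best_single_fix(paths)
    print(f"Fix '{fix}' first: it breaks {n_paths} of {len(paths)} paths")
    # → Fix 'sql-open-internal' first: it breaks 3 of 3 paths
```

SCC's attack exposure scores encode a richer version of the same idea, but this shows why a single shared step can outweigh several isolated Critical findings.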

Query attack path findings
# List attack path findings for high-value resources
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE" AND finding.findingClass="TOXIC_COMBINATION"' \
  --format="table(finding.category, finding.severity, finding.resourceName)"

# Get detailed attack exposure scores
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='finding.attackExposure.score > 0' \
  --format="table(finding.category, finding.attackExposure.score, finding.resourceName)" \
  --sort-by="~finding.attackExposure.score"

# The attack exposure score (0-10) indicates how accessible
# a resource is to an attacker. Higher scores = more exposure.
# Focus on findings with scores above 5 first.

# High-value resource designations are managed through resource
# value configurations; view and edit them in the Console under
# Security Command Center > Settings

Designate Your High-Value Resources

SCC automatically identifies some high-value resources, but you should explicitly designate your most critical resources through resource value configurations. Mark databases containing PII, service accounts with admin access, and production project resources as high-value. This helps SCC focus attack path simulation on the assets that matter most to your organization, reducing noise and surfacing the most relevant risks.

Compliance Monitoring

SCC Premium maps findings to major compliance frameworks, allowing you to generate compliance reports and track your posture over time. This is invaluable for regulated industries that need to demonstrate compliance during audits, and useful for any organization that wants to measure its security posture against industry standards.

Supported Compliance Frameworks

| Framework | Coverage | Typical Use Case |
|---|---|---|
| CIS GCP Foundations Benchmark v2.0 | 100+ controls covering IAM, logging, networking, storage, databases | General security baseline for all GCP environments |
| PCI DSS 3.2.1 / 4.0 | Controls for cardholder data environments | Organizations processing credit card transactions |
| NIST 800-53 Rev 5 | US federal security controls (Low/Moderate/High baselines) | Government agencies and federal contractors |
| ISO 27001:2022 | International information security management standard | Organizations seeking or maintaining ISO certification |
| SOC 2 Type II | Trust services criteria (Security, Availability, Confidentiality) | SaaS companies providing services to enterprises |
| HIPAA | Healthcare data protection controls | Healthcare organizations and business associates |
| Cloud Controls Matrix (CCM) | Cloud Security Alliance framework | Cloud-native organizations seeking industry certification |

Generate compliance reports
# List compliance findings mapped to CIS benchmarks
gcloud scc findings list organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE" AND finding.compliances.standard="cis-gcp-benchmark"' \
  --format="table(finding.category, finding.severity, finding.compliances)"

# Count non-compliant findings per CIS section
gcloud scc findings group organizations/ORGANIZATION_ID \
  --source=organizations/ORGANIZATION_ID/sources/- \
  --filter='state="ACTIVE" AND finding.compliances.standard="cis-gcp-benchmark"' \
  --group-by="finding.compliances.ids"

# Export compliance data to BigQuery for dashboarding
# First, create a BigQuery dataset
bq mk --dataset security-project:scc_compliance

# Create a continuous export config
gcloud scc bqexports create scc-compliance-export \
  --organization=ORGANIZATION_ID \
  --dataset=projects/security-project/datasets/scc_compliance

# Query compliance trends in BigQuery
# SELECT
#   DATE(create_time) as date,
#   compliance_standard,
#   severity,
#   COUNT(*) as finding_count
# FROM `security-project.scc_compliance.findings`
# WHERE state = 'ACTIVE'
# GROUP BY date, compliance_standard, severity
# ORDER BY date DESC

Compliance Reports Are Not Compliance Certifications

SCC compliance reports show how your technical controls map to framework requirements, but they are not a substitute for a formal audit. A PCI DSS report from SCC demonstrates your GCP infrastructure controls, but a QSA (Qualified Security Assessor) still needs to evaluate your complete cardholder data environment, including policies, procedures, and non-cloud systems. Use SCC reports as evidence during audits and as a continuous monitoring tool between formal assessments.

SIEM Integration and Notification

SCC findings need to reach your security team in real-time. SCC provides several integration mechanisms for routing findings to your existing security operations workflows.

Pub/Sub Notifications

The most flexible integration pattern uses Pub/Sub notifications to stream SCC findings to downstream consumers: your SIEM, ticketing system, Slack channels, or automated remediation functions.

Set up SCC notifications for SIEM integration
# Create a Pub/Sub topic for SCC findings
gcloud pubsub topics create scc-findings-export \
  --project=security-project

# Create notification config for all critical/high findings
gcloud scc notifications create critical-findings \
  --organization=ORGANIZATION_ID \
  --pubsub-topic=projects/security-project/topics/scc-findings-export \
  --filter='state="ACTIVE" AND (severity="CRITICAL" OR severity="HIGH")'

# Create separate notification for threat findings (for immediate response)
gcloud scc notifications create threat-findings \
  --organization=ORGANIZATION_ID \
  --pubsub-topic=projects/security-project/topics/scc-threats \
  --filter='state="ACTIVE" AND finding.findingClass="THREAT"'

# Create notification for compliance violations
gcloud scc notifications create compliance-violations \
  --organization=ORGANIZATION_ID \
  --pubsub-topic=projects/security-project/topics/scc-compliance \
  --filter='state="ACTIVE" AND finding.compliances.standard:"cis-gcp-benchmark"'

# For Splunk integration: create a push subscription
gcloud pubsub subscriptions create scc-to-splunk \
  --topic=scc-findings-export \
  --push-endpoint="https://hec.splunk.example.com:8088/services/collector" \
  --push-auth-service-account=splunk-push@security-project.iam.gserviceaccount.com

# For Chronicle SIEM: use the built-in integration
# Navigate to SCC Settings > Integrations > Chronicle
# Chronicle provides native parsing, correlation, and investigation workflows

# For PagerDuty: route critical findings through Cloud Functions
gcloud pubsub subscriptions create scc-to-pagerduty \
  --topic=scc-threats \
  --push-endpoint="https://us-central1-security-project.cloudfunctions.net/scc-to-pagerduty"

# List all notification configs
gcloud scc notifications list --organization=ORGANIZATION_ID

BigQuery Export for Analysis

For long-term trend analysis, compliance dashboards, and custom reporting, export SCC findings to BigQuery. This enables SQL-based analysis of your security posture over time.

BigQuery export and analysis queries
# Enable continuous export to BigQuery
gcloud scc bqexports create scc-export \
  --organization=ORGANIZATION_ID \
  --dataset=projects/security-project/datasets/scc_findings

# Example BigQuery queries for security dashboards:

# 1. Finding trends over time (are we getting better or worse?)
# SELECT
#   DATE(create_time) as date,
#   severity,
#   COUNT(*) as new_findings,
#   SUM(CASE WHEN state = 'ACTIVE' THEN 1 ELSE 0 END) as still_active
# FROM `security-project.scc_findings.findings`
# WHERE create_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY)
# GROUP BY date, severity
# ORDER BY date DESC

# 2. Mean time to remediate by severity
# SELECT
#   severity,
#   AVG(TIMESTAMP_DIFF(
#     COALESCE(state_change_time, CURRENT_TIMESTAMP()),
#     create_time, HOUR
#   )) as avg_hours_to_remediate,
#   COUNT(*) as total_findings
# FROM `security-project.scc_findings.findings`
# WHERE create_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY)
# GROUP BY severity

# 3. Top 10 projects with the most findings
# SELECT
#   resource.project_display_name,
#   severity,
#   COUNT(*) as finding_count
# FROM `security-project.scc_findings.findings`
# WHERE state = 'ACTIVE'
# GROUP BY resource.project_display_name, severity
# ORDER BY finding_count DESC
# LIMIT 10

# 4. Finding categories that recur (process problems)
# SELECT
#   category,
#   COUNT(*) as occurrence_count,
#   COUNT(DISTINCT resource.project_display_name) as affected_projects
# FROM `security-project.scc_findings.findings`
# WHERE state = 'ACTIVE'
# GROUP BY category
# ORDER BY occurrence_count DESC

Automating Remediation

Manually remediating every SCC finding does not scale. For common, well-understood misconfigurations, set up automated remediation using Cloud Functions triggered by SCC notifications through Pub/Sub. This reduces your mean time to remediate (MTTR) from hours or days to seconds.

Remediation Architecture

The automated remediation pipeline follows this pattern:

  1. SCC detects a misconfiguration and creates a finding.
  2. The finding is published to a Pub/Sub topic via a notification config.
  3. A Cloud Function (or Cloud Run service) subscribes to the topic.
  4. The function evaluates the finding, checks allowlists, and applies the fix.
  5. The function logs the remediation action to Cloud Logging and optionally notifies the resource owner.
Auto-remediate common misconfigurations
import base64
import functions_framework
import json
import logging
from google.cloud import storage
from google.cloud import compute_v1

logger = logging.getLogger(__name__)

# Resources that are intentionally public (skip remediation)
ALLOWLISTED_BUCKETS = {
    "public-website-assets",
    "public-static-content",
}

ALLOWLISTED_PROJECTS = {
    "sandbox-project",  # Don't auto-remediate in sandbox
}


@functions_framework.cloud_event
def remediate_finding(cloud_event):
    """Automatically remediate common SCC findings."""
    # Pub/Sub delivers the message payload base64-encoded
    pubsub_data = json.loads(
        base64.b64decode(
            cloud_event.data["message"]["data"]
        ).decode("utf-8")
    )

    finding = pubsub_data.get("finding", {})
    category = finding.get("category", "")
    resource_name = finding.get("resourceName", "")
    severity = finding.get("severity", "")
    project = finding.get("resource", {}).get(
        "projectDisplayName", ""
    )

    # Skip allowlisted projects
    if project in ALLOWLISTED_PROJECTS:
        logger.info(
            f"Skipping remediation for allowlisted project: {project}"
        )
        return

    logger.info(
        f"Processing finding: {category} "
        f"(severity: {severity}) "
        f"on resource: {resource_name}"
    )

    # Route to appropriate remediation handler
    handlers = {
        "PUBLIC_BUCKET_ACL": remediate_public_bucket,
        "OPEN_FIREWALL": remediate_open_firewall,
        "OPEN_SSH_PORT": remediate_open_ssh,
        "OPEN_RDP_PORT": remediate_open_rdp,
        "SQL_PUBLIC_IP": log_and_alert,
    }

    handler = handlers.get(category, log_unknown_finding)
    handler(finding, resource_name)


def remediate_public_bucket(finding, resource_name):
    """Remove public access from GCS buckets."""
    bucket_name = resource_name.split("/")[-1]

    if bucket_name in ALLOWLISTED_BUCKETS:
        logger.info(
            f"Bucket {bucket_name} is allowlisted, skipping"
        )
        return

    client = storage.Client()
    bucket = client.get_bucket(bucket_name)

    # Enable uniform bucket-level access and enforce public
    # access prevention in a single metadata update
    bucket.iam_configuration.uniform_bucket_level_access_enabled = True
    bucket.iam_configuration.public_access_prevention = "enforced"
    bucket.patch()

    logger.info(
        f"REMEDIATED: Removed public access from "
        f"bucket {bucket_name}"
    )


def remediate_open_firewall(finding, resource_name):
    """Disable overly permissive firewall rules."""
    # Extract project and firewall name from resource
    parts = resource_name.split("/")
    project_id = parts[4]  # projects/{project}
    firewall_name = parts[-1]

    client = compute_v1.FirewallsClient()
    firewall = client.get(
        project=project_id, firewall=firewall_name
    )

    # Disable the rule instead of deleting it
    # (preserves the rule for investigation)
    firewall.disabled = True
    operation = client.patch(
        project=project_id,
        firewall=firewall_name,
        firewall_resource=firewall,
    )
    operation.result()

    logger.info(
        f"REMEDIATED: Disabled open firewall rule "
        f"{firewall_name} in project {project_id}"
    )


def remediate_open_ssh(finding, resource_name):
    """Handle open SSH port findings."""
    logger.warning(
        f"ALERT: Open SSH port detected on "
        f"{resource_name}. "
        f"Disabling firewall rule."
    )
    remediate_open_firewall(finding, resource_name)


def remediate_open_rdp(finding, resource_name):
    """Handle open RDP port findings."""
    logger.warning(
        f"ALERT: Open RDP port detected on "
        f"{resource_name}. "
        f"Disabling firewall rule."
    )
    remediate_open_firewall(finding, resource_name)


def log_and_alert(finding, resource_name):
    """Log and alert for findings that need human review."""
    category = finding.get("category", "")
    severity = finding.get("severity", "")
    logger.warning(
        f"REQUIRES HUMAN REVIEW: {category} "
        f"(severity: {severity}) "
        f"on {resource_name}"
    )
    # In production: send to PagerDuty, Slack, or Jira


def log_unknown_finding(finding, resource_name):
    """Log findings without automated remediation."""
    category = finding.get("category", "")
    logger.info(
        f"No auto-remediation for: {category} "
        f"on {resource_name}"
    )

Test Remediation in Non-Production First

Automated remediation can cause outages if it modifies resources that are intentionally configured a certain way (e.g., a public website bucket, a firewall rule needed by a partner integration). Always: (1) Test remediation functions thoroughly in a staging environment, (2) Maintain allowlists for intentionally configured resources, (3) Start with a “dry-run” mode that logs what would be changed without actually changing it, (4) Add a manual approval step for high-impact remediations (like disabling firewall rules), (5) Ensure the remediation function sends notifications to the resource owner when it takes action.
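Point (3) above, dry-run mode, is cheap to retrofit as a decorator around each remediation handler. A sketch with a hypothetical `DRY_RUN` flag (in practice you would read it from an environment variable so staging and production can differ without a code change):

```python
import functools
import logging

logger = logging.getLogger(__name__)

# Flip to False only after the handlers are validated in staging;
# in a real deployment, read this from an environment variable.
DRY_RUN = True


def dry_runnable(func):
    """Log the intended remediation instead of applying it
    while DRY_RUN is set."""
    @functools.wraps(func)
    def wrapper(finding, resource_name):
        if DRY_RUN:
            logger.warning(
                "DRY RUN: would apply %s to %s",
                func.__name__, resource_name,
            )
            return "dry-run"
        return func(finding, resource_name)
    return wrapper


@dry_runnable
def remediate_public_bucket(finding, resource_name):
    # Real remediation logic would go here
    return "remediated"


if __name__ == "__main__":
    print(remediate_public_bucket({}, "buckets/test-bucket"))
    # → dry-run
```

Because the wrapper logs the handler name and target resource, a week of dry-run logs doubles as a review of exactly what the function would have changed.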

Deploy the remediation function
# Deploy the remediation Cloud Function
gcloud functions deploy scc-auto-remediate \
  --gen2 \
  --runtime=python312 \
  --region=us-central1 \
  --source=./remediation-function \
  --entry-point=remediate_finding \
  --trigger-topic=scc-findings-export \
  --service-account=scc-remediation@security-project.iam.gserviceaccount.com \
  --memory=256Mi \
  --timeout=120

# Grant the remediation SA the minimum required roles
# Storage admin for bucket remediation
gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
  --member="serviceAccount:scc-remediation@security-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin" \
  --condition='expression=resource.type == "storage.googleapis.com/Bucket",title=storage-only'

# Compute security admin for firewall remediation
gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
  --member="serviceAccount:scc-remediation@security-project.iam.gserviceaccount.com" \
  --role="roles/compute.securityAdmin"

# SCC findings viewer for reading finding details
gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
  --member="serviceAccount:scc-remediation@security-project.iam.gserviceaccount.com" \
  --role="roles/securitycenter.findingsViewer"

Web Security Scanner

Web Security Scanner is SCC’s dynamic application security testing (DAST) tool. It crawls your web applications and APIs to identify common vulnerabilities from the OWASP Top 10, including cross-site scripting (XSS), SQL injection, mixed content, outdated libraries, and insecure configurations. Unlike static scanning which analyzes source code, DAST tests your running application the way an attacker would.

Configure Web Security Scanner
# Create a scan configuration
gcloud beta web-security-scanner scan-configs create \
  --display-name="Production API Scan" \
  --starting-urls="https://api.example.com" \
  --project=prod-project

# List scan configs
gcloud beta web-security-scanner scan-configs list \
  --project=prod-project

# Start a scan
gcloud beta web-security-scanner scan-runs start \
  --scan-config=SCAN_CONFIG_ID \
  --project=prod-project

# Check scan status
gcloud beta web-security-scanner scan-runs list \
  --scan-config=SCAN_CONFIG_ID \
  --project=prod-project

# View scan findings
gcloud beta web-security-scanner scan-runs findings list \
  --scan-config=SCAN_CONFIG_ID \
  --scan-run=SCAN_RUN_ID \
  --project=prod-project

# Schedule regular scans (weekly recommended for production)
# Use Cloud Scheduler to trigger scans on a schedule
gcloud scheduler jobs create http weekly-security-scan \
  --schedule="0 3 * * 1" \
  --time-zone="UTC" \
  --uri="https://websecurityscanner.googleapis.com/v1/projects/prod-project/scanConfigs/SCAN_CONFIG_ID:start" \
  --http-method=POST \
  --oauth-service-account-email=scanner@prod-project.iam.gserviceaccount.com

Scan Staging Before Production

Web Security Scanner actively probes your application, which can create test data, trigger form submissions, and generate unusual traffic patterns. Always run your first scans against a staging environment to understand the impact before scanning production. Configure scan exclusions for destructive endpoints (like delete actions) and rate-limit the scanner to avoid impacting application performance.

Terraform Integration

Managing SCC configuration as code ensures consistency, enables version control, and supports code review for security policy changes. Terraform provides resources for configuring SCC notifications, mute rules, and custom modules.

SCC configuration with Terraform
# Configure SCC notification for critical findings
resource "google_scc_notification_config" "critical_findings" {
  config_id    = "critical-findings-notification"
  organization = "ORGANIZATION_ID" # numeric organization ID, without the "organizations/" prefix
  description  = "Notify on critical and high severity findings"
  pubsub_topic = google_pubsub_topic.scc_findings.id

  streaming_config {
    filter = "state = \"ACTIVE\" AND (severity = \"CRITICAL\" OR severity = \"HIGH\")"
  }
}

resource "google_scc_notification_config" "threat_findings" {
  config_id    = "threat-findings-notification"
  organization = "ORGANIZATION_ID" # numeric organization ID, without the "organizations/" prefix
  description  = "Notify on all active threat findings"
  pubsub_topic = google_pubsub_topic.scc_threats.id

  streaming_config {
    filter = "state = \"ACTIVE\" AND finding_class = \"THREAT\""
  }
}

# Pub/Sub topics for SCC findings
resource "google_pubsub_topic" "scc_findings" {
  name    = "scc-findings"
  project = var.security_project_id

  message_retention_duration = "604800s"  # 7 days
}

resource "google_pubsub_topic" "scc_threats" {
  name    = "scc-threats"
  project = var.security_project_id

  message_retention_duration = "604800s"
}

# Subscription for the remediation function
resource "google_pubsub_subscription" "remediation" {
  name    = "scc-remediation"
  topic   = google_pubsub_topic.scc_findings.name
  project = var.security_project_id

  push_config {
    push_endpoint = google_cloudfunctions2_function.remediation.url
    oidc_token {
      service_account_email = google_service_account.remediation.email
    }
  }

  retry_policy {
    minimum_backoff = "10s"
    maximum_backoff = "600s"
  }

  dead_letter_policy {
    dead_letter_topic     = google_pubsub_topic.scc_dlq.id
    max_delivery_attempts = 5
  }
}

# Dead letter queue for failed remediations
resource "google_pubsub_topic" "scc_dlq" {
  name    = "scc-remediation-dlq"
  project = var.security_project_id
}

# Mute rules for accepted risks
resource "google_scc_mute_config" "public_website" {
  mute_config_id = "public-website-bucket"
  parent         = "organizations/ORGANIZATION_ID"
  description    = "Website assets bucket is intentionally public"
  filter         = "category = \"PUBLIC_BUCKET_ACL\" AND resource.name : \"website-assets\""
}

resource "google_scc_mute_config" "sandbox_findings" {
  mute_config_id = "sandbox-project-findings"
  parent         = "organizations/ORGANIZATION_ID"
  description    = "Sandbox project findings are reviewed but not actioned"
  filter         = "resource.projectDisplayName = \"sandbox-project\" AND severity != \"CRITICAL\""
}

# BigQuery export for long-term analysis
resource "google_bigquery_dataset" "scc_findings" {
  dataset_id = "scc_findings"
  project    = var.security_project_id
  location   = "US"

  default_table_expiration_ms = null  # No expiration

  access {
    role          = "OWNER"
    special_group = "projectOwners"
  }

  access {
    role           = "WRITER"
    user_by_email  = "scc-export@${var.security_project_id}.iam.gserviceaccount.com"
  }
}

# Service account for remediation
resource "google_service_account" "remediation" {
  account_id   = "scc-remediation"
  display_name = "SCC Auto-Remediation"
  project      = var.security_project_id
}

# Minimum required roles for the remediation SA
resource "google_organization_iam_member" "remediation_storage" {
  org_id = var.organization_id
  role   = "roles/storage.admin"
  member = "serviceAccount:${google_service_account.remediation.email}"
}

resource "google_organization_iam_member" "remediation_compute" {
  org_id = var.organization_id
  role   = "roles/compute.securityAdmin"
  member = "serviceAccount:${google_service_account.remediation.email}"
}

resource "google_organization_iam_member" "remediation_scc" {
  org_id = var.organization_id
  role   = "roles/securitycenter.findingsViewer"
  member = "serviceAccount:${google_service_account.remediation.email}"
}
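The push subscription above delivers each finding to the function as base64-encoded JSON inside a standard Pub/Sub push envelope, with the finding itself under the notification's "finding" key. A self-contained sketch of unwrapping it in a push handler, using a synthetic envelope for illustration:

```python
import base64
import json


def extract_finding(envelope):
    """Unwrap an SCC finding from a Pub/Sub push envelope.

    The envelope looks like:
      {"message": {"data": "<base64 JSON>", ...}, "subscription": "..."}
    and the decoded notification carries the finding under "finding".
    """
    data = envelope["message"]["data"]
    notification = json.loads(base64.b64decode(data).decode("utf-8"))
    return notification.get("finding", {})


# Synthetic notification shaped like those produced by the
# notification configs defined above.
payload = {"finding": {"category": "OPEN_SSH_PORT", "severity": "HIGH"}}
envelope = {
    "message": {
        "data": base64.b64encode(json.dumps(payload).encode()).decode()
    },
    "subscription": "projects/p/subscriptions/scc-remediation",
}
finding = extract_finding(envelope)
```

Publishing a synthetic message like this to the topic is also a cheap way to exercise the whole pipeline end to end before real findings arrive.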

SCC Operations Playbook

Having SCC enabled is only the beginning. The value comes from how your team operationalizes the findings. Here is a comprehensive playbook for running SCC in production.

Daily Operations

  • Check the SCC dashboard for new Critical and High findings. Assign each finding to an owner within 4 hours.
  • Review ETD threat findings immediately. Any active threat finding should trigger your incident response process.
  • Monitor automated remediation logs to ensure remediations are executing successfully and not causing side effects.

Weekly Operations

  • Review Medium and Low findings. Triage new findings, update mute rules for accepted risks, and track remediation progress.
  • Review the finding trend dashboard. Are total active findings increasing or decreasing? Which categories are growing?
  • Spot-check muted findings. Confirm new mute rules are behaving as intended; schedule a full quarterly review to verify each risk acceptance is still valid.
  • Run Web Security Scanner against staging and production.

Monthly Operations

  • Generate compliance reports for your required frameworks (CIS, PCI DSS, NIST, etc.).
  • Review attack path findings. Focus on paths with the highest exposure scores and prioritize breaking the most impactful chains.
  • Update remediation allowlists. Remove entries for resources that no longer need exceptions.
  • Review SCC notification configurations. Ensure notification filters are capturing the right findings for the right teams.

Quarterly Operations

  • Present security posture to leadership. Use BigQuery dashboards to show trends: finding counts by severity, mean time to remediate, compliance percentages, and improvement over time.
  • Conduct a tabletop exercise using ETD threat scenarios to test your incident response process.
  • Review and update SCC tier. Evaluate whether you need to upgrade from Standard to Premium or from Premium to Enterprise based on your security program maturity.

SLA Targets by Severity

Severity   Acknowledgment SLA   Remediation SLA   Notification Channel
Critical   1 hour               24 hours          PagerDuty (page on-call immediately)
High       4 hours              7 days            Slack #security-alerts + Jira ticket
Medium     1 business day       30 days           Jira ticket in weekly security review
Low        1 week               90 days           Dashboard tracking, quarterly review
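The SLA table above maps naturally onto a routing table in the notification pipeline. A sketch with illustrative channel names (the mapping is an assumption based on the table, not a built-in SCC feature):

```python
# Acknowledgment/remediation SLAs and notification channel per severity,
# mirroring the SLA table above. Channel names are illustrative.
SLA_ROUTING = {
    "CRITICAL": {"ack_hours": 1, "remediate_days": 1, "channel": "pagerduty"},
    "HIGH": {"ack_hours": 4, "remediate_days": 7, "channel": "slack+jira"},
    "MEDIUM": {"ack_hours": 24, "remediate_days": 30, "channel": "jira"},
    "LOW": {"ack_hours": 168, "remediate_days": 90, "channel": "dashboard"},
}


def route_finding(finding):
    """Return the routing entry for a finding, defaulting to the LOW tier."""
    severity = finding.get("severity", "LOW")
    return SLA_ROUTING.get(severity, SLA_ROUTING["LOW"])
```

Keeping the SLAs in data like this lets the same table drive routing, dashboard annotations, and SLA-breach alerts.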

Measure Mean Time to Remediate (MTTR)

The most important metric for your SCC program is Mean Time to Remediate (MTTR), the average time between a finding being created and being resolved. Track MTTR by severity level and by team. If MTTR is increasing, it usually means either your team is understaffed, your remediation processes are too manual, or findings are not being routed to the right people. Use automated remediation for common findings to keep MTTR low, and invest in training for the teams with the highest MTTR.
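MTTR per severity can be computed directly from finding timestamps. In practice you would run this over the BigQuery export; here is a self-contained sketch over a list of finding records (field names follow the SCC convention of createTime/resolveTime, given as ISO-8601 strings):

```python
from collections import defaultdict
from datetime import datetime


def mttr_by_severity(findings):
    """Mean time to remediate (hours) per severity.

    Each record needs createTime/resolveTime ISO-8601 strings; open
    findings (no resolveTime) are excluded from the average.
    """
    buckets = defaultdict(list)
    for f in findings:
        if not f.get("resolveTime"):
            continue
        created = datetime.fromisoformat(f["createTime"])
        resolved = datetime.fromisoformat(f["resolveTime"])
        hours = (resolved - created).total_seconds() / 3600
        buckets[f["severity"]].append(hours)
    return {sev: sum(h) / len(h) for sev, h in buckets.items()}


findings = [
    {"severity": "HIGH", "createTime": "2026-02-01T00:00:00",
     "resolveTime": "2026-02-03T00:00:00"},
    {"severity": "HIGH", "createTime": "2026-02-01T00:00:00",
     "resolveTime": "2026-02-05T00:00:00"},
    {"severity": "LOW", "createTime": "2026-02-01T00:00:00"},  # still open
]
# mttr_by_severity(findings) -> {"HIGH": 72.0}
```

Grouping by team instead of severity is a one-line change (bucket on an owner label), which is how you find the teams that need the training investment mentioned above.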

Cost Considerations

Understanding SCC pricing helps you make informed decisions about which tier to use and how to budget for security monitoring.

Component                    Pricing Model                               Typical Cost
SCC Standard                 Free                                        $0 (included with every organization)
SCC Premium                  Percentage of GCP spend or flat fee         Typically 1–3% of total GCP spend
SCC Enterprise               Custom pricing with Chronicle integration   Contact Google Cloud sales
Security Health Analytics    Per resource scanned                        Included in Premium/Enterprise subscription
Event Threat Detection       Per GB of logs analyzed                     Included in Premium/Enterprise subscription
Container Threat Detection   Per GKE node                                Included in Premium/Enterprise subscription

SCC Premium ROI

The cost of SCC Premium is typically 1–3% of your total GCP spend. Compare this to the cost of a single security incident (average $4.45M per data breach according to IBM’s 2023 Cost of a Data Breach Report). For any organization with production workloads handling customer data, SCC Premium is one of the highest-ROI security investments you can make. The automated threat detection alone can reduce incident response times from days to minutes.

Best Practices and Security Program Integration

SCC is most effective when it is integrated into your broader security program rather than treated as a standalone tool. Here are best practices for maximizing SCC’s value:

Configuration Best Practices

  • Enable Premium tier for all production organizations. The cost is a fraction of what a security incident would cost, and the advanced threat detection capabilities are essential for timely incident response.
  • Enable all log sources before activating ETD: Cloud Audit Logs (Admin Activity and Data Access), VPC Flow Logs on all subnets, DNS logging, and Cloud Load Balancing logs.
  • Designate high-value resources so attack path simulation focuses on your most critical assets.
  • Manage all SCC configuration via Terraform including notification configs, mute rules, and export settings.

Operational Best Practices

  • Assign finding owners: Route findings to the team that owns the resource, not to a central security team. Use SCC’s notification filtering to send findings to project-specific Slack channels or ticketing queues.
  • Document all muted findings: Every mute rule should have a documented justification, an expiration date, and an owner who is responsible for re-evaluating the risk.
  • Automate what you can: Start with auto-remediation for clear-cut issues (public buckets, open firewalls) and gradually expand coverage as you build confidence.
  • Track findings as KPIs: Include finding counts, MTTR, and compliance scores in your team’s performance metrics. What gets measured gets improved.

Integration Best Practices

  • Forward to your SIEM: Send findings to Splunk, Chronicle, or your existing SIEM for correlation with other security signals (endpoint detection, application logs, identity events).
  • Export to BigQuery: Build long-term trend dashboards that show security posture improvement over time. Present these to leadership quarterly.
  • Integrate with CI/CD: Use Artifact Analysis (integrated with SCC) to scan container images during the build pipeline. Block deployments that contain critical vulnerabilities.
  • Combine with org policies: Use SCC findings to identify where you need preventive controls (org policies) in addition to detective controls (SCC scanning). If SCC repeatedly finds public buckets, deploy an org policy that prevents public access at the organization level.

SCC Is Not a Replacement for Application Security

SCC focuses on infrastructure-level security: misconfigurations, network threats, and compliance. It does not scan your application code for vulnerabilities (use Artifact Analysis for container image scanning and SAST tools for code scanning) or test for application-layer attacks like SQL injection (use Web Security Scanner or a dedicated DAST tool like Burp Suite). A complete security program layers SCC with application security testing, code review, dependency scanning, penetration testing, and security training. SCC is one essential piece of a defense-in-depth strategy, not a complete solution.


Key Takeaways

  1. SCC provides a centralized view of security findings across your GCP organization.
  2. Standard tier is free and includes Security Health Analytics and Web Security Scanner.
  3. Premium tier adds Event Threat Detection, Container Threat Detection, and Virtual Machine Threat Detection.
  4. Findings from multiple sources (SCC, third-party tools) are aggregated into a single dashboard.
  5. Security Health Analytics automatically detects misconfigurations like public buckets and open firewall rules.
  6. Use Pub/Sub notifications and Cloud Functions for automated remediation of findings.

Frequently Asked Questions

What is Security Command Center?
Security Command Center (SCC) is GCP's native security and risk management platform. It identifies security misconfigurations, detects threats, and provides compliance monitoring across your entire GCP organization in a centralized dashboard.
What is the difference between SCC Standard and Premium?
Standard tier is free and includes Security Health Analytics (misconfiguration detection) and Web Security Scanner. Premium adds Event Threat Detection, Container Threat Detection, VM Threat Detection, Compliance monitoring, and the attack path simulation feature.
How does SCC compare to AWS Security Hub?
Both aggregate security findings into a central dashboard. SCC is tightly integrated with GCP services and includes built-in threat detection. Security Hub aggregates from AWS services and supports third-party integrations. Both provide compliance standard checks.
How do I automate remediation with SCC?
Configure Pub/Sub notifications for SCC findings. Create Cloud Functions triggered by Pub/Sub messages to automatically remediate common issues like making public buckets private, closing open firewall ports, or enabling encryption on unencrypted resources.
Does SCC support multi-cloud security monitoring?
SCC Premium includes multi-cloud support for AWS and Azure. It can ingest findings from other cloud providers and third-party security tools via the Security Command Center API, providing a unified security view across environments.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.