
AWS Transit Gateway Patterns

Deep dive into AWS Transit Gateway covering hub-spoke architecture, inter-region peering, multicast, route tables, centralized egress, and inspection patterns.

CloudToolStack Team · 28 min read · Published Mar 14, 2026

What Is AWS Transit Gateway?

AWS Transit Gateway (TGW) is a regional network transit hub that connects VPCs, VPN connections, AWS Direct Connect gateways, and peered Transit Gateways through a central hub. Before Transit Gateway, connecting multiple VPCs required a complex mesh of VPC Peering connections. With N VPCs, you needed N*(N-1)/2 peering connections, each with its own route table entries. This approach does not scale: 10 VPCs require 45 peering connections, and 50 VPCs require 1,225.
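
The full-mesh growth described above can be sanity-checked with a tiny shell helper (the function name is illustrative, not an AWS tool):

```bash
# Hypothetical helper: peering connections needed for a full mesh of N VPCs,
# using the N*(N-1)/2 formula from the text
mesh_connections() {
  local n=$1
  echo $(( n * (n - 1) / 2 ))
}

mesh_connections 10   # prints 45
mesh_connections 50   # prints 1225
```

The quadratic growth is exactly why a hub-and-spoke topology, which needs only N attachments, scales where a mesh does not.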

Transit Gateway replaces this mesh with a hub-and-spoke model. Every VPC, VPN, and Direct Connect connection attaches to the Transit Gateway, and traffic routes through the hub using configurable route tables. Adding a new VPC takes minutes instead of hours, and you get centralized visibility into all network traffic. Transit Gateway supports up to 5,000 attachments per gateway, handles up to 50 Gbps of bandwidth per VPC attachment, and operates at the network layer with no single point of failure.

This guide covers Transit Gateway architecture patterns from simple hub-and-spoke to advanced multi-region peering, isolated routing domains, shared services VPCs, multicast support, and integration with third-party firewalls. Every pattern includes Terraform and CLI examples you can adapt to your environment.

Transit Gateway Pricing

Transit Gateway charges $0.05 per hour per attachment (approximately $36/month per VPC, VPN, or Direct Connect attachment) plus $0.02 per GB of data processed. Peering attachments also incur the hourly attachment charge on each side, but traffic over a peering attachment is not billed the per-GB data processing fee; standard inter-region data transfer rates apply instead. These costs add up quickly in large environments: 20 VPC attachments cost about $720/month in attachment fees alone, before data transfer costs. Plan your architecture to minimize unnecessary attachments.
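
A back-of-envelope estimate of the attachment fees quoted above (rates from this article; a 30-day, 720-hour month is assumed):

```bash
# Approximate monthly TGW attachment fees: attachments x $0.05/hr x 720 hrs
# (excludes the separate $0.02/GB data processing charge)
attachments=20
monthly=$(awk -v a="$attachments" 'BEGIN { printf "%.0f", a * 0.05 * 720 }')
echo "Approx attachment fees: \$${monthly}/month"   # prints $720/month
```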

Hub-and-Spoke Architecture

The hub-and-spoke pattern is the simplest and most common Transit Gateway deployment. A single Transit Gateway acts as the hub, and each VPC, VPN, or Direct Connect connection is a spoke. All traffic between spokes routes through the hub. A single default route table handles all routing, and every attachment can communicate with every other attachment.

This pattern works well for small to medium environments where all VPCs need to communicate. It is the recommended starting point if you do not have complex isolation requirements.

bash
# Create a Transit Gateway
TGW_ID=$(aws ec2 create-transit-gateway \
  --description "Central network hub" \
  --options '{
    "AmazonSideAsn": 64512,
    "AutoAcceptSharedAttachments": "enable",
    "DefaultRouteTableAssociation": "enable",
    "DefaultRouteTablePropagation": "enable",
    "DnsSupport": "enable",
    "VpnEcmpSupport": "enable",
    "MulticastSupport": "disable"
  }' \
  --tag-specifications 'ResourceType=transit-gateway,Tags=[{Key=Name,Value=central-tgw}]' \
  --query 'TransitGateway.TransitGatewayId' \
  --output text)

echo "Transit Gateway created: $TGW_ID"

# Wait for the TGW to become available
aws ec2 wait transit-gateway-available --transit-gateway-ids $TGW_ID

# Attach VPC A (production workloads)
ATTACH_A=$(aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id $TGW_ID \
  --vpc-id vpc-0aaa111222333 \
  --subnet-ids subnet-0aaa111 subnet-0aaa222 \
  --tag-specifications 'ResourceType=transit-gateway-attachment,Tags=[{Key=Name,Value=prod-vpc}]' \
  --query 'TransitGatewayVpcAttachment.TransitGatewayAttachmentId' \
  --output text)

# Attach VPC B (development workloads)
ATTACH_B=$(aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id $TGW_ID \
  --vpc-id vpc-0bbb444555666 \
  --subnet-ids subnet-0bbb111 subnet-0bbb222 \
  --tag-specifications 'ResourceType=transit-gateway-attachment,Tags=[{Key=Name,Value=dev-vpc}]' \
  --query 'TransitGatewayVpcAttachment.TransitGatewayAttachmentId' \
  --output text)

# Attach VPC C (shared services: DNS, logging, monitoring)
ATTACH_C=$(aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id $TGW_ID \
  --vpc-id vpc-0ccc777888999 \
  --subnet-ids subnet-0ccc111 subnet-0ccc222 \
  --tag-specifications 'ResourceType=transit-gateway-attachment,Tags=[{Key=Name,Value=shared-services-vpc}]' \
  --query 'TransitGatewayVpcAttachment.TransitGatewayAttachmentId' \
  --output text)

echo "Attachments: Prod=$ATTACH_A, Dev=$ATTACH_B, Shared=$ATTACH_C"

# Update VPC route tables to send cross-VPC traffic to the TGW
# For each VPC's route table, add a route for the other VPCs' CIDRs
aws ec2 create-route \
  --route-table-id rtb-prod-private \
  --destination-cidr-block 10.0.0.0/8 \
  --transit-gateway-id $TGW_ID

Subnet Placement for TGW Attachments

Always create dedicated subnets for Transit Gateway attachments. These subnets should be small (/28 is sufficient), placed in each Availability Zone where you have workloads, and should not host any other resources. This isolates TGW ENIs from application traffic and makes network ACL management cleaner. Never attach TGW to public subnets.
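
A minimal Terraform sketch of the dedicated /28 attachment subnets described above, carved from a /16 VPC (resource names and the netnum offset are illustrative assumptions):

```hcl
# Sketch: dedicated /28 TGW attachment subnets, one per AZ, hosting nothing else
resource "aws_subnet" "tgw" {
  count  = 2
  vpc_id = aws_vpc.main.id
  # newbits = 12 turns a /16 into /28s; a high netnum offset keeps these
  # well clear of the workload subnet ranges
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 12, 4000 + count.index)
  availability_zone = data.aws_availability_zones.available.names[count.index]
  tags = { Name = "tgw-attach-${count.index}" }
}
```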

Route Tables and Routing Domains

Transit Gateway route tables control how traffic flows between attachments. By default, all attachments are associated with and propagate routes to a single default route table, meaning everything can talk to everything. For production environments, you almost always need multiple route tables to create isolated routing domains.

Each attachment has two relationships with route tables: association (which route table is used to route traffic from this attachment) and propagation (which route tables learn this attachment's routes). By carefully configuring these relationships, you can create complex network topologies like isolated environments with shared services access.

bash
# Disable default route table association and propagation
# (Do this during TGW creation for new deployments)
# For existing TGW, create custom route tables instead

# Create separate route tables for different routing domains
PROD_RT=$(aws ec2 create-transit-gateway-route-table \
  --transit-gateway-id $TGW_ID \
  --tag-specifications 'ResourceType=transit-gateway-route-table,Tags=[{Key=Name,Value=prod-rt}]' \
  --query 'TransitGatewayRouteTable.TransitGatewayRouteTableId' \
  --output text)

DEV_RT=$(aws ec2 create-transit-gateway-route-table \
  --transit-gateway-id $TGW_ID \
  --tag-specifications 'ResourceType=transit-gateway-route-table,Tags=[{Key=Name,Value=dev-rt}]' \
  --query 'TransitGatewayRouteTable.TransitGatewayRouteTableId' \
  --output text)

SHARED_RT=$(aws ec2 create-transit-gateway-route-table \
  --transit-gateway-id $TGW_ID \
  --tag-specifications 'ResourceType=transit-gateway-route-table,Tags=[{Key=Name,Value=shared-services-rt}]' \
  --query 'TransitGatewayRouteTable.TransitGatewayRouteTableId' \
  --output text)

# Associate attachments with their route tables
aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id $PROD_RT \
  --transit-gateway-attachment-id $ATTACH_A

aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id $DEV_RT \
  --transit-gateway-attachment-id $ATTACH_B

aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id $SHARED_RT \
  --transit-gateway-attachment-id $ATTACH_C

# Propagate shared services routes to all route tables
# (so prod and dev can reach shared services)
aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id $PROD_RT \
  --transit-gateway-attachment-id $ATTACH_C

aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id $DEV_RT \
  --transit-gateway-attachment-id $ATTACH_C

# Propagate prod and dev routes to shared services route table
# (so shared services can respond to both)
aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id $SHARED_RT \
  --transit-gateway-attachment-id $ATTACH_A

aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id $SHARED_RT \
  --transit-gateway-attachment-id $ATTACH_B

# DO NOT propagate prod routes to dev RT or dev routes to prod RT
# This creates isolation: prod and dev cannot communicate directly

# Verify routes in each table
aws ec2 search-transit-gateway-routes \
  --transit-gateway-route-table-id $PROD_RT \
  --filters "Name=type,Values=propagated,static" \
  --query 'Routes[].{CIDR:DestinationCidrBlock, Type:Type, Attachment:TransitGatewayAttachments[0].TransitGatewayAttachmentId}' \
  --output table

Routing Domain Summary

| Route Table | Associated Attachments | Propagated Routes From | Can Reach |
| --- | --- | --- | --- |
| prod-rt | Production VPCs | Shared Services VPC | Shared Services only |
| dev-rt | Development VPCs | Shared Services VPC | Shared Services only |
| shared-services-rt | Shared Services VPC | Prod VPCs, Dev VPCs | All VPCs (to respond) |

Inter-Region Peering

Transit Gateway Peering connects Transit Gateways across AWS regions, enabling multi-region network architectures. Unlike VPC Peering, which is limited to two VPCs, Transit Gateway Peering connects entire network topologies. All VPCs attached to TGW-A in us-east-1 can communicate with all VPCs attached to TGW-B in eu-west-1 through a single peering connection.

Peering connections use AWS's global backbone network (not the public internet), providing consistent latency and bandwidth. Traffic between peered Transit Gateways is encrypted and never traverses the public internet.

bash
# Create a Transit Gateway in the second region
TGW_EU=$(aws ec2 create-transit-gateway \
  --region eu-west-1 \
  --description "EU network hub" \
  --options '{"AmazonSideAsn": 64513}' \
  --tag-specifications 'ResourceType=transit-gateway,Tags=[{Key=Name,Value=eu-tgw}]' \
  --query 'TransitGateway.TransitGatewayId' \
  --output text)

# Wait for both TGWs to be available
aws ec2 wait transit-gateway-available --transit-gateway-ids $TGW_EU --region eu-west-1

# Create a peering attachment from us-east-1 to eu-west-1
PEERING_ATTACH=$(aws ec2 create-transit-gateway-peering-attachment \
  --transit-gateway-id $TGW_ID \
  --peer-transit-gateway-id $TGW_EU \
  --peer-region eu-west-1 \
  --peer-account-id $(aws sts get-caller-identity --query Account --output text) \
  --tag-specifications 'ResourceType=transit-gateway-attachment,Tags=[{Key=Name,Value=us-eu-peering}]' \
  --query 'TransitGatewayPeeringAttachment.TransitGatewayAttachmentId' \
  --output text)

# Accept the peering attachment in the peer region
aws ec2 accept-transit-gateway-peering-attachment \
  --transit-gateway-attachment-id $PEERING_ATTACH \
  --region eu-west-1

# Add static routes for cross-region traffic
# In us-east-1: route EU VPC CIDRs to the peering attachment
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id $PROD_RT \
  --destination-cidr-block 10.100.0.0/16 \
  --transit-gateway-attachment-id $PEERING_ATTACH

# In eu-west-1: route US VPC CIDRs to the peering attachment
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id $EU_PROD_RT \
  --destination-cidr-block 10.0.0.0/16 \
  --transit-gateway-attachment-id $PEERING_ATTACH \
  --region eu-west-1

Peering Limitations

Transit Gateway Peering does not support route propagation. You must create static routes for cross-region traffic. Peering also does not support transitive routing: if TGW-A peers with TGW-B and TGW-B peers with TGW-C, TGW-A cannot reach TGW-C through TGW-B. You need a direct peering between TGW-A and TGW-C. Plan your peering topology carefully, as a full mesh of N regions requires N*(N-1)/2 peering connections.

Centralized Egress with NAT Gateway

Instead of deploying NAT Gateways in every VPC, you can centralize internet egress through a shared egress VPC. All VPCs route their internet-bound traffic through the Transit Gateway to the egress VPC, which contains NAT Gateways. This reduces cost (fewer NAT Gateways) and simplifies security (one place to monitor outbound traffic).

hcl
# Terraform: Centralized egress VPC with NAT Gateway
resource "aws_vpc" "egress" {
  cidr_block           = "10.255.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = { Name = "egress-vpc" }
}

resource "aws_subnet" "egress_public" {
  count             = 2
  vpc_id            = aws_vpc.egress.id
  cidr_block        = cidrsubnet(aws_vpc.egress.cidr_block, 8, count.index)
  availability_zone = data.aws_availability_zones.available.names[count.index]
  tags = { Name = "egress-public-${count.index}" }
}

resource "aws_subnet" "egress_tgw" {
  count             = 2
  vpc_id            = aws_vpc.egress.id
  cidr_block        = cidrsubnet(aws_vpc.egress.cidr_block, 8, count.index + 100)
  availability_zone = data.aws_availability_zones.available.names[count.index]
  tags = { Name = "egress-tgw-${count.index}" }
}

resource "aws_nat_gateway" "egress" {
  count         = 2
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.egress_public[count.index].id
  tags = { Name = "egress-nat-${count.index}" }
}

# TGW attachment uses dedicated TGW subnets
resource "aws_ec2_transit_gateway_vpc_attachment" "egress" {
  transit_gateway_id = aws_ec2_transit_gateway.main.id
  vpc_id             = aws_vpc.egress.id
  subnet_ids         = aws_subnet.egress_tgw[*].id

  transit_gateway_default_route_table_association = false
  transit_gateway_default_route_table_propagation = false

  tags = { Name = "egress-vpc-attachment" }
}

# Route: TGW subnets -> NAT Gateway for internet access
resource "aws_route_table" "egress_tgw" {
  vpc_id = aws_vpc.egress.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.egress[0].id
  }
  tags = { Name = "egress-tgw-rt" }
}

# Route: NAT Gateway public subnets -> TGW for return traffic
# (assumes a route table for the public subnets, aws_route_table.egress_public,
# defined elsewhere and associated with the egress_public subnets)
resource "aws_route" "nat_return" {
  route_table_id         = aws_route_table.egress_public.id
  destination_cidr_block = "10.0.0.0/8"   # All spoke VPC CIDRs
  transit_gateway_id     = aws_ec2_transit_gateway.main.id
}

# TGW route table: default route to egress VPC
resource "aws_ec2_transit_gateway_route" "default_to_egress" {
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.spokes.id
  destination_cidr_block         = "0.0.0.0/0"
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.egress.id
}

Centralized Inspection with Firewall

For organizations that require deep packet inspection, IDS/IPS, or centralized firewall policies, Transit Gateway supports an inspection VPC pattern. All east-west (VPC-to-VPC) and north-south (VPC-to-internet) traffic routes through a dedicated inspection VPC running AWS Network Firewall or a third-party appliance.

The architecture uses Transit Gateway Appliance Mode, which ensures symmetric routing through the firewall. Without appliance mode, return traffic might take a different path than the original request, bypassing the firewall and breaking stateful inspection.

bash
# Enable appliance mode on the inspection VPC attachment
aws ec2 modify-transit-gateway-vpc-attachment \
  --transit-gateway-attachment-id tgw-attach-inspection \
  --options '{"ApplianceModeSupport": "enable"}'

# Create AWS Network Firewall in the inspection VPC
aws network-firewall create-firewall \
  --firewall-name central-firewall \
  --firewall-policy-arn arn:aws:network-firewall:us-east-1:123456789012:firewall-policy/central-policy \
  --vpc-id vpc-inspection \
  --subnet-mappings '[
    {"SubnetId": "subnet-fw-az1"},
    {"SubnetId": "subnet-fw-az2"}
  ]' \
  --description "Centralized inspection firewall"

# Create a firewall policy with rules
aws network-firewall create-firewall-policy \
  --firewall-policy-name central-policy \
  --firewall-policy '{
    "StatelessDefaultActions": ["aws:forward_to_sfe"],
    "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
    "StatefulRuleGroupReferences": [
      {
        "ResourceArn": "arn:aws:network-firewall:us-east-1:123456789012:stateful-rulegroup/block-malicious",
        "Priority": 1
      }
    ]
  }'

# TGW routing for inspection:
# Spoke route table: 0.0.0.0/0 and 10.0.0.0/8 -> inspection VPC attachment
# Inspection route table: specific VPC CIDRs -> respective VPC attachments
#                         0.0.0.0/0 -> egress VPC attachment

Appliance Mode Is Critical

If you route traffic through a stateful firewall without enabling Appliance Mode, you will experience intermittent connectivity failures. Without it, the Transit Gateway may route the forward and return paths of a connection through different Availability Zones, causing asymmetric routing. The firewall sees only half the connection and drops the traffic. Always enable Appliance Mode for inspection VPC attachments.

VPN and Direct Connect Integration

Transit Gateway integrates with Site-to-Site VPN and AWS Direct Connect to extend your on-premises network into the cloud. A VPN or Direct Connect attachment to the Transit Gateway provides connectivity from on-premises to all VPCs attached to the same TGW, eliminating the need for separate VPN connections to each VPC.

bash
# Create a Customer Gateway (represents your on-premises router)
CGW_ID=$(aws ec2 create-customer-gateway \
  --bgp-asn 65000 \
  --public-ip 203.0.113.10 \
  --type ipsec.1 \
  --tag-specifications 'ResourceType=customer-gateway,Tags=[{Key=Name,Value=dc-router}]' \
  --query 'CustomerGateway.CustomerGatewayId' \
  --output text)

# Create a VPN connection attached to the Transit Gateway
VPN_ID=$(aws ec2 create-vpn-connection \
  --type ipsec.1 \
  --customer-gateway-id $CGW_ID \
  --transit-gateway-id $TGW_ID \
  --options '{
    "EnableAcceleration": true,
    "TunnelInsideIpVersion": "ipv4",
    "TunnelOptions": [
      {"PreSharedKey": "your-psk-tunnel1", "TunnelInsideCidr": "169.254.10.0/30"},
      {"PreSharedKey": "your-psk-tunnel2", "TunnelInsideCidr": "169.254.10.4/30"}
    ]
  }' \
  --tag-specifications 'ResourceType=vpn-connection,Tags=[{Key=Name,Value=dc-vpn}]' \
  --query 'VpnConnection.VpnConnectionId' \
  --output text)

# Download the VPN configuration for your router
aws ec2 describe-vpn-connections \
  --vpn-connection-ids $VPN_ID \
  --query 'VpnConnections[0].CustomerGatewayConfiguration' \
  --output text > vpn-config.xml

echo "Configure your on-premises router with: vpn-config.xml"

# For Direct Connect, create a Transit Gateway association
# on an existing Direct Connect Gateway
aws directconnect create-transit-gateway-association \
  --direct-connect-gateway-id dxgw-abc123 \
  --gateway-id $TGW_ID \
  --add-allowed-prefixes '[{"cidr": "10.0.0.0/8"}]'

Use ECMP for VPN Bandwidth

A single VPN tunnel provides approximately 1.25 Gbps of throughput. To increase bandwidth, create multiple VPN connections to the same Transit Gateway (up to 50) and enable ECMP (Equal-Cost Multi-Path) routing. Transit Gateway automatically load-balances traffic across all active tunnels. Four VPN connections provide about 5 Gbps of aggregate throughput, a cost-effective alternative to Direct Connect for moderate bandwidth requirements.
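
The aggregate throughput math from the tip above, as a quick shell calculation (the ~1.25 Gbps per-tunnel figure is from this article):

```bash
# Rough ECMP aggregate: each active VPN connection contributes ~1.25 Gbps
connections=4
aggregate=$(awk -v c="$connections" 'BEGIN { printf "%.2f", c * 1.25 }')
echo "~${aggregate} Gbps aggregate throughput"   # prints ~5.00 Gbps
```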

Multicast Support

Transit Gateway supports IP multicast, enabling you to distribute the same data to multiple subscribers efficiently. This is useful for financial market data distribution, media streaming, software updates, and IoT telemetry. Multicast on Transit Gateway supports both IGMP (Internet Group Management Protocol) for dynamic group membership and static group membership.

bash
# Create a multicast-enabled Transit Gateway
TGW_MC=$(aws ec2 create-transit-gateway \
  --description "Multicast-enabled TGW" \
  --options '{
    "MulticastSupport": "enable",
    "DefaultRouteTableAssociation": "enable",
    "DefaultRouteTablePropagation": "enable"
  }' \
  --query 'TransitGateway.TransitGatewayId' \
  --output text)

# Create a multicast domain
MC_DOMAIN=$(aws ec2 create-transit-gateway-multicast-domain \
  --transit-gateway-id $TGW_MC \
  --options '{
    "Igmpv2Support": "enable",
    "StaticSourcesSupport": "enable",
    "AutoAcceptSharedAssociations": "enable"
  }' \
  --tag-specifications 'ResourceType=transit-gateway-multicast-domain,Tags=[{Key=Name,Value=market-data}]' \
  --query 'TransitGatewayMulticastDomain.TransitGatewayMulticastDomainId' \
  --output text)

# Associate subnets with the multicast domain
aws ec2 associate-transit-gateway-multicast-domain \
  --transit-gateway-multicast-domain-id $MC_DOMAIN \
  --transit-gateway-attachment-id tgw-attach-producer \
  --subnet-ids subnet-producer-az1

aws ec2 associate-transit-gateway-multicast-domain \
  --transit-gateway-multicast-domain-id $MC_DOMAIN \
  --transit-gateway-attachment-id tgw-attach-consumer \
  --subnet-ids subnet-consumer-az1

# Register a multicast source
aws ec2 register-transit-gateway-multicast-group-sources \
  --transit-gateway-multicast-domain-id $MC_DOMAIN \
  --group-ip-address 239.1.1.1 \
  --network-interface-ids eni-producer-instance

# Register multicast group members (receivers)
aws ec2 register-transit-gateway-multicast-group-members \
  --transit-gateway-multicast-domain-id $MC_DOMAIN \
  --group-ip-address 239.1.1.1 \
  --network-interface-ids eni-consumer1 eni-consumer2 eni-consumer3

Monitoring and Troubleshooting

Transit Gateway publishes flow logs and metrics to CloudWatch, enabling visibility into traffic patterns, attachment utilization, and dropped packets. Enable flow logs on every TGW attachment for security auditing and troubleshooting connectivity issues.

bash
# Enable Transit Gateway flow logs
aws ec2 create-flow-log \
  --resource-type TransitGatewayAttachment \
  --resource-ids $ATTACH_A \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name /aws/tgw/flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/TGWFlowLogRole \
  --max-aggregation-interval 60 \
  --log-format '${version} ${account-id} ${tgw-id} ${tgw-attachment-id} ${tgw-src-vpc-account-id} ${tgw-dst-vpc-account-id} ${tgw-src-vpc-id} ${tgw-dst-vpc-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status}'

# View TGW CloudWatch metrics
# (the -v-1H date flag below is BSD/macOS syntax; on GNU/Linux use:
#  date -u -d '1 hour ago' '+%Y-%m-%dT%H:%M:%S')
aws cloudwatch get-metric-statistics \
  --namespace AWS/TransitGateway \
  --metric-name BytesIn \
  --dimensions Name=TransitGateway,Value=$TGW_ID \
  --start-time $(date -u -v-1H '+%Y-%m-%dT%H:%M:%S') \
  --end-time $(date -u '+%Y-%m-%dT%H:%M:%S') \
  --period 300 \
  --statistics Sum \
  --output table

# Check Transit Gateway route tables for debugging
aws ec2 search-transit-gateway-routes \
  --transit-gateway-route-table-id $PROD_RT \
  --filters "Name=type,Values=propagated,static" \
  --output table

# Use Network Manager for visualization
aws networkmanager create-global-network \
  --description "Global network topology"

aws networkmanager register-transit-gateway \
  --global-network-id global-network-id \
  --transit-gateway-arn arn:aws:ec2:us-east-1:123456789012:transit-gateway/$TGW_ID

Terraform Module Pattern

For reproducible deployments, wrap your Transit Gateway configuration in a Terraform module. This module handles TGW creation, route tables, and provides outputs for spoke VPCs to reference when creating attachments.

hcl
# modules/transit-gateway/main.tf

resource "aws_ec2_transit_gateway" "this" {
  description                     = var.description
  amazon_side_asn                 = var.amazon_side_asn
  auto_accept_shared_attachments  = "disable"
  default_route_table_association = "disable"
  default_route_table_propagation = "disable"
  dns_support                     = "enable"
  vpn_ecmp_support                = "enable"
  multicast_support               = var.enable_multicast ? "enable" : "disable"

  tags = merge(var.tags, { Name = var.name })
}

# Create route tables for each routing domain
resource "aws_ec2_transit_gateway_route_table" "this" {
  for_each           = var.route_tables
  transit_gateway_id = aws_ec2_transit_gateway.this.id
  tags               = merge(var.tags, { Name = each.key })
}

# RAM share for cross-account attachments
resource "aws_ram_resource_share" "tgw" {
  count                     = var.share_cross_account ? 1 : 0
  name                      = "${var.name}-share"
  allow_external_principals = false
}

resource "aws_ram_resource_association" "tgw" {
  count              = var.share_cross_account ? 1 : 0
  resource_arn       = aws_ec2_transit_gateway.this.arn
  resource_share_arn = aws_ram_resource_share.tgw[0].arn
}

resource "aws_ram_principal_association" "org" {
  count              = var.share_cross_account ? 1 : 0
  principal          = var.organization_arn
  resource_share_arn = aws_ram_resource_share.tgw[0].arn
}

# Outputs for spoke VPCs
output "transit_gateway_id" {
  value = aws_ec2_transit_gateway.this.id
}

output "route_table_ids" {
  value = { for k, v in aws_ec2_transit_gateway_route_table.this : k => v.id }
}

Cross-Account Sharing with AWS RAM

In multi-account environments (which AWS Organizations encourages), the Transit Gateway is typically owned by a central networking account and shared with spoke accounts using AWS Resource Access Manager (RAM). Spoke accounts can then create VPC attachments to the shared Transit Gateway without needing access to the networking account.

bash
# In the networking account: share the TGW with the organization
aws ram create-resource-share \
  --name "transit-gateway-share" \
  --resource-arns "arn:aws:ec2:us-east-1:111111111111:transit-gateway/$TGW_ID" \
  --principals "arn:aws:organizations::111111111111:organization/o-abc123" \
  --no-allow-external-principals

# In a spoke account: create a VPC attachment to the shared TGW
# The spoke account can see the shared TGW
aws ec2 describe-transit-gateways \
  --query 'TransitGateways[?State==`available`].{Id:TransitGatewayId, Owner:OwnerId}' \
  --output table

# Create the attachment (goes to pending acceptance)
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id $TGW_ID \
  --vpc-id vpc-spoke-account \
  --subnet-ids subnet-tgw-az1 subnet-tgw-az2

# In the networking account: accept the attachment
aws ec2 accept-transit-gateway-vpc-attachment \
  --transit-gateway-attachment-id tgw-attach-from-spoke

# Associate the attachment with the correct route table
aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id $PROD_RT \
  --transit-gateway-attachment-id tgw-attach-from-spoke

Best Practices and Common Mistakes

Use a dedicated networking account. Own the Transit Gateway in a central networking account and share it via RAM. This centralizes route management and prevents spoke accounts from modifying network topology.

Plan your CIDR ranges carefully. Overlapping CIDRs between VPCs attached to the same Transit Gateway cause routing conflicts. Use a structured IP addressing plan (e.g., 10.{region}.{environment}.0/24) and document all allocations.
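
A structured plan like the one above can be generated mechanically; in this sketch the region and environment index mappings are assumptions for illustration, not part of the article:

```bash
# Illustrative allocation under the 10.{region}.{environment}.0/24 scheme
for region in 0 1; do      # e.g. 0=us-east-1, 1=eu-west-1 (hypothetical mapping)
  for env in 0 1 2; do     # e.g. 0=prod, 1=dev, 2=shared (hypothetical mapping)
    echo "10.${region}.${env}.0/24"
  done
done
```

Generating and committing the plan as a file makes overlaps easy to catch in code review before an attachment is ever created.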

Disable default route table association. The default behavior of associating all attachments with a single route table means everything can talk to everything. Disable this and create explicit routing domains from the start.

Monitor data transfer costs. Transit Gateway charges $0.02/GB for data processed. In chatty microservices architectures, this can add up. Consider VPC Peering (free data transfer within the same AZ) for high-bandwidth connections between specific VPC pairs.
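
To make the trade-off concrete, here is the monthly data cost for a hypothetical 10 TB/month flow at the per-GB rates quoted in this article:

```bash
# 10 TB/month: TGW data processing ($0.02/GB) vs cross-AZ VPC Peering ($0.01/GB)
gb=$((10 * 1024))
tgw=$(awk -v g="$gb" 'BEGIN { printf "%.2f", g * 0.02 }')
peer=$(awk -v g="$gb" 'BEGIN { printf "%.2f", g * 0.01 }')
echo "TGW: \$${tgw}  Peering (cross-AZ): \$${peer}"
```

At this volume the TGW path costs roughly double, which is why high-bandwidth VPC pairs are often peered directly even in a TGW-centric design.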

Do not forget return routes. The most common Transit Gateway debugging issue is asymmetric routing. When you add a route to send traffic from VPC-A to VPC-B through the TGW, you also need a route in VPC-B to send response traffic back through the TGW to VPC-A.

TGW vs. VPC Peering

VPC Peering is free for same-region data transfer and supports higher bandwidth. Use it for high-throughput connections between two specific VPCs. Transit Gateway is better for complex topologies with many VPCs, centralized routing policies, VPN/DX integration, and network segmentation. Many organizations use both: TGW for general connectivity and VPC Peering for specific high-bandwidth pairs.


Key Takeaways

  1. Transit Gateway replaces VPC peering mesh with a scalable hub-and-spoke model supporting up to 5,000 attachments.
  2. Route tables create isolated routing domains for network segmentation between environments.
  3. Inter-region peering requires static routes and does not support transitive routing.
  4. Centralized egress through a shared NAT Gateway VPC reduces cost and simplifies monitoring.
  5. Appliance Mode is critical for stateful firewall inspection to prevent asymmetric routing.
  6. Cross-account sharing via AWS RAM enables centralized network management.

Frequently Asked Questions

When should I use Transit Gateway vs VPC Peering?
Use VPC Peering for high-throughput connections between two specific VPCs (free same-region data transfer). Use Transit Gateway for complex topologies with many VPCs, centralized routing, VPN/Direct Connect integration, and network segmentation. Many organizations use both.
What are Transit Gateway data transfer costs?
Transit Gateway charges $0.05/hour per attachment (~$36/month) plus $0.02 per GB of data processed. VPC Peering is free for same-AZ data transfer and $0.01/GB for cross-AZ. For high-bandwidth pairs, VPC Peering can save significant costs.
Can I connect Transit Gateways across regions?
Yes, using Transit Gateway Peering. Peering connections use AWS's global backbone (not the internet), but require static routes (no route propagation) and do not support transitive routing between three or more peered TGWs.
How do I troubleshoot Transit Gateway connectivity?
Check: (1) TGW route tables have routes for both source and destination CIDRs, (2) VPC route tables send traffic to the TGW, (3) Security groups and NACLs allow the traffic, (4) TGW attachments are in the correct route table associations. Enable VPC Flow Logs and TGW Flow Logs for visibility.

Written by CloudToolStack Team

Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.

Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.