Virtual Network Architecture
Design Azure VNets with hub-spoke topology, peering, and private connectivity patterns.
Prerequisites
- Azure subscription with networking permissions
- Understanding of IP addressing and CIDR notation
- Basic familiarity with Azure resource management
Designing Azure Virtual Network Architecture
Azure Virtual Network (VNet) is the fundamental building block of private networking in Azure. Every VM, App Service Environment, AKS cluster, and most PaaS resources with private endpoints operate within VNets. Getting your network architecture right from the start saves enormous refactoring pain later. Redesigning IP addressing or moving from a flat network to hub-and-spoke after production deployment is one of the most disruptive changes an organization can face.
This guide walks through IP planning, subnet design, hub-and-spoke topology, VNet peering, Network Security Groups, private connectivity patterns, route management, and DNS integration. Whether you are building a greenfield Azure environment or evolving an existing one, these patterns represent Microsoft's recommended practices validated across thousands of enterprise deployments.
IP Address Planning
Proper IP address planning is the most critical first step in network architecture. Azure VNets use private RFC 1918 address space, and your plan must account for on-premises networks (if using hybrid connectivity), other VNets, container networking requirements, and future growth. The most common regret in Azure networking is allocating address space that is too small; it is far easier to start with a generous allocation than to expand later.
RFC 1918 Address Ranges
| RFC 1918 Range | CIDR | Available Addresses | Common Use |
|---|---|---|---|
| 10.0.0.0 – 10.255.255.255 | 10.0.0.0/8 | 16,777,216 | Large enterprise networks |
| 172.16.0.0 – 172.31.255.255 | 172.16.0.0/12 | 1,048,576 | Medium organizations |
| 192.168.0.0 – 192.168.255.255 | 192.168.0.0/16 | 65,536 | Small networks, labs |
Avoid Address Overlap
Azure reserves 5 IP addresses in every subnet: the network address (.0), default gateway (.1), two DNS-mapped addresses (.2, .3), and the broadcast address (.255 for /24). A /24 subnet gives you 251 usable IPs, not 256. More importantly, never use address ranges that overlap with on-premises networks, other VNets you plan to peer, or partner networks you may connect to in the future. Document your entire IP allocation in a central IPAM (IP Address Management) system.
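The reserved-address math and the overlap rule can both be checked mechanically. A minimal sketch using Python's standard `ipaddress` module (the function names are illustrative, not an Azure API):

```python
import ipaddress

# Azure reserves 5 IPs per subnet: network, gateway, two DNS, broadcast
AZURE_RESERVED_IPS = 5

def usable_ips(cidr: str) -> int:
    """Usable IPs in an Azure subnet after the 5 reserved addresses."""
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED_IPS

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """True if two ranges overlap -- such ranges must never be peered."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(usable_ips("10.1.1.0/24"))                  # 251, not 256
print(overlaps("10.0.0.0/16", "10.0.128.0/17"))   # True - cannot coexist
print(overlaps("10.1.0.0/16", "10.2.0.0/16"))     # False - safe to peer
```

Running a check like this over every planned VNet and on-premises range before deployment is a cheap way to enforce the no-overlap rule in your IPAM documentation.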
Address Allocation Strategy
A well-designed address allocation plan assigns large blocks to each environment and region, with smaller subnets carved out for specific workload tiers. Use a consistent, documented pattern that makes it easy to identify which addresses belong to which environment.
Hub VNet: 10.0.0.0/16 (Central services)
  GatewaySubnet:        10.0.0.0/27    (VPN/ExpressRoute Gateway - required name)
  AzureFirewallSubnet:  10.0.1.0/26    (Azure Firewall - minimum /26)
  AzureBastionSubnet:   10.0.2.0/26    (Azure Bastion - minimum /26)
  ManagementSubnet:     10.0.10.0/24   (Jump boxes, management tools)
  DNSResolverInbound:   10.0.11.0/28   (Private DNS Resolver inbound)
  DNSResolverOutbound:  10.0.11.16/28  (Private DNS Resolver outbound)
Production VNet: 10.1.0.0/16 (Production workloads)
  AppGatewaySubnet:     10.1.0.0/24    (Application Gateway)
  WebTier:              10.1.1.0/24    (Frontend VMs/App Services)
  AppTier:              10.1.2.0/24    (API/Backend services)
  DataTier:             10.1.3.0/24    (Databases, caches)
  PrivateEndpoints:     10.1.4.0/24    (Private endpoints for PaaS)
  AKSNodes:             10.1.16.0/20   (AKS nodes - needs large range)
  AKSPods:              10.1.32.0/20   (AKS pod CIDR if using overlay)
Staging VNet: 10.2.0.0/16 (Staging environment)
  (similar subnet structure as production)
Development VNet: 10.3.0.0/16 (Development environment)
  (similar subnet structure, smaller subnets)
Reserved: 10.4.0.0/14 (Future environments/regions)
          10.8.0.0/13 (Secondary region allocation)
Subnet Sizing Guidelines
Different Azure services have specific subnet requirements. Planning subnet sizes based on the services they will host prevents costly resizing operations later.
| Service | Minimum Subnet Size | Recommended Size | IP Consumption Notes |
|---|---|---|---|
| GatewaySubnet | /29 (8 IPs) | /27 (32 IPs) | Required name; supports ExpressRoute coexistence |
| Azure Firewall | /26 (64 IPs) | /26 | Must be named AzureFirewallSubnet |
| Azure Bastion | /26 (64 IPs) | /26 | Must be named AzureBastionSubnet |
| Application Gateway | /28 (16 IPs) | /24 (256 IPs) | Each instance uses 1 IP; v2 can scale to 125 instances |
| AKS (Azure CNI) | /24 per node pool | /20 or larger | Each node uses 1 IP + 30 pod IPs (default max pods/node) |
| AKS (Azure CNI Overlay) | /24 | /24 | Only node IPs from VNet; pod IPs from separate overlay CIDR |
| Private Endpoints | Varies | /24 | 1 IP per private endpoint; grows as you add PaaS services |
| App Service VNet Integration | /28 | /26 or larger | 1 IP per plan instance; dedicated subnet required |
AKS Networking Is the Biggest IP Consumer
Azure Kubernetes Service with Azure CNI networking is by far the largest consumer of IP addresses. With the default configuration of 30 pods per node, a 10-node cluster consumes 310 IPs (10 nodes + 300 pods). Plan for at least a /20 (4,096 IPs) for production AKS clusters to accommodate scaling. Alternatively, use Azure CNI Overlay or kubenet to reduce VNet IP consumption at the cost of some networking features.
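That arithmetic is worth automating when sizing AKS subnets. The hypothetical helpers below compute Azure CNI IP demand and the smallest subnet prefix that covers it, including Azure's 5 reserved addresses:

```python
import math

AZURE_RESERVED_IPS = 5

def aks_cni_ips_needed(node_count: int, max_pods_per_node: int = 30) -> int:
    """Azure CNI reserves 1 IP per node plus max_pods IPs per node."""
    return node_count * (1 + max_pods_per_node)

def smallest_prefix(ips_needed: int) -> int:
    """Smallest subnet prefix covering ips_needed plus reserved addresses."""
    total = ips_needed + AZURE_RESERVED_IPS
    return 32 - math.ceil(math.log2(total))

print(aks_cni_ips_needed(10))    # 310 IPs for a 10-node cluster
print(smallest_prefix(310))      # 23 -> a /23 at minimum
print(smallest_prefix(aks_cni_ips_needed(100)))  # 20 -> /20 at 100 nodes
```

Note that this is a lower bound: surge upgrades and scale-out headroom argue for going at least one prefix size larger than the calculation suggests.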
Hub-and-Spoke Topology
The hub-and-spoke model is the most widely adopted network architecture in Azure and is the pattern recommended by the Microsoft Cloud Adoption Framework. A central hub VNet contains shared services (firewall, VPN gateway, DNS infrastructure, monitoring), and spoke VNets peer to the hub for connectivity. This pattern provides network isolation between workloads while centralizing security and connectivity controls.
Hub VNet Components
- VPN/ExpressRoute Gateway: Provides hybrid connectivity to on-premises networks. The GatewaySubnet hosts the gateway VMs (managed by Azure).
- Azure Firewall or NVA: Centralized network security appliance that inspects and controls all traffic flowing between spokes, to the internet, and to on-premises.
- Azure Bastion: Secure RDP/SSH access to VMs without exposing them via public IP addresses. Bastion provides browser-based access through the Azure portal.
- DNS Infrastructure: Azure Private DNS Resolver or custom DNS servers that handle name resolution for private endpoints and on-premises integration.
- Shared Services: Domain controllers (if syncing on-premises AD), monitoring agents, or shared management tools.
Spoke VNet Design
Each workload or environment gets its own spoke VNet. Spokes peer to the hub but not directly to each other by default. All spoke-to-spoke and spoke-to-on-premises traffic routes through the hub, enabling centralized inspection and logging. This provides a clear security boundary between workloads.
param location string = resourceGroup().location

// NSG resources referenced below (managementNsg, webTierNsg, appTierNsg,
// dataTierNsg) are assumed to be declared elsewhere in this template

// Hub VNet with shared services subnets
resource hubVnet 'Microsoft.Network/virtualNetworks@2023-05-01' = {
  name: 'hub-vnet'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: ['10.0.0.0/16']
    }
    subnets: [
      {
        name: 'GatewaySubnet'
        properties: {
          addressPrefix: '10.0.0.0/27'
        }
      }
      {
        name: 'AzureFirewallSubnet'
        properties: {
          addressPrefix: '10.0.1.0/26'
        }
      }
      {
        name: 'AzureBastionSubnet'
        properties: {
          addressPrefix: '10.0.2.0/26'
        }
      }
      {
        name: 'ManagementSubnet'
        properties: {
          addressPrefix: '10.0.10.0/24'
          networkSecurityGroup: {
            id: managementNsg.id
          }
        }
      }
    ]
  }
}

// Production Spoke VNet
resource prodSpokeVnet 'Microsoft.Network/virtualNetworks@2023-05-01' = {
  name: 'prod-spoke-vnet'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: ['10.1.0.0/16']
    }
    subnets: [
      {
        name: 'web-tier'
        properties: {
          addressPrefix: '10.1.1.0/24'
          networkSecurityGroup: { id: webTierNsg.id }
        }
      }
      {
        name: 'app-tier'
        properties: {
          addressPrefix: '10.1.2.0/24'
          networkSecurityGroup: { id: appTierNsg.id }
        }
      }
      {
        name: 'data-tier'
        properties: {
          addressPrefix: '10.1.3.0/24'
          networkSecurityGroup: { id: dataTierNsg.id }
        }
      }
      {
        name: 'private-endpoints'
        properties: {
          addressPrefix: '10.1.4.0/24'
        }
      }
    ]
  }
}

// Hub to Spoke peering (enable gateway transit)
resource hubToSpokePeering 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-05-01' = {
  parent: hubVnet
  name: 'hub-to-prod'
  properties: {
    remoteVirtualNetwork: { id: prodSpokeVnet.id }
    allowForwardedTraffic: true
    allowGatewayTransit: true
    allowVirtualNetworkAccess: true
  }
}

// Spoke to Hub peering (use remote gateways)
resource spokeToHubPeering 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-05-01' = {
  parent: prodSpokeVnet
  name: 'prod-to-hub'
  properties: {
    remoteVirtualNetwork: { id: hubVnet.id }
    allowForwardedTraffic: true
    useRemoteGateways: true
    allowVirtualNetworkAccess: true
  }
}

Use Azure Virtual Network Manager for Scale
For organizations with many spokes (10+), Azure Virtual Network Manager (AVNM) simplifies hub-and-spoke at scale. It can automatically manage peering connections, enforce security policies across VNets, and create mesh connectivity groups without managing individual peerings. AVNM also supports connected groups that allow direct spoke-to-spoke communication without routing through the hub, which can reduce latency and firewall costs for trusted workloads.
Hub-and-Spoke vs Azure Virtual WAN
Azure Virtual WAN is a Microsoft-managed networking service that provides hub-and-spoke connectivity with built-in VPN, ExpressRoute, and SD-WAN integration. It abstracts away much of the manual peering and routing configuration.
| Capability | Customer-Managed Hub-Spoke | Azure Virtual WAN |
|---|---|---|
| Peering management | Manual (or AVNM) | Automatic |
| Spoke-to-spoke routing | Via firewall UDRs | Built-in transit routing |
| VPN gateway management | Customer-managed | Microsoft-managed |
| ExpressRoute integration | Manual gateway config | Built-in, multi-circuit |
| Custom routing flexibility | Full control (UDRs) | Limited (routing intent) |
| Third-party NVA support | Any NVA in hub VNet | Limited to supported NVAs |
| Cost | Pay per resource (gateway, firewall) | Hub unit cost + per resource |
| Best for | Customization, existing investments | Large multi-region, SD-WAN, simplicity |
Network Security Groups & Application Security Groups
Network Security Groups (NSGs) are stateful packet filters applied at the subnet or NIC level. They filter traffic based on 5-tuple rules: source/destination IP, source/destination port, and protocol. NSGs are the first line of defense for micro-segmentation within your VNets.
Application Security Groups (ASGs) are a logical abstraction that lets you group NICs by application role. Instead of writing NSG rules that reference specific IP addresses (which change with auto-scaling), you write rules that reference ASGs like “web-servers” or “db-servers.” This makes NSG rules more readable, more maintainable, and more resilient to infrastructure changes.
Three-Tier NSG Architecture
# Create Application Security Groups for each tier
az network asg create -g myRG -n web-servers --location eastus2
az network asg create -g myRG -n app-servers --location eastus2
az network asg create -g myRG -n db-servers --location eastus2
# Create NSG with tiered rules
az network nsg create -g myRG -n three-tier-nsg --location eastus2
# Rule 100: Allow HTTPS from internet to web tier
az network nsg rule create -g myRG --nsg-name three-tier-nsg \
-n AllowHttpsInbound --priority 100 \
--source-address-prefixes Internet \
--destination-asgs web-servers \
--destination-port-ranges 443 --protocol Tcp --access Allow \
--direction Inbound
# Rule 200: Allow web tier to app tier on port 8080
az network nsg rule create -g myRG --nsg-name three-tier-nsg \
-n AllowWebToApp --priority 200 \
--source-asgs web-servers \
--destination-asgs app-servers \
--destination-port-ranges 8080 --protocol Tcp --access Allow \
--direction Inbound
# Rule 300: Allow app tier to database on port 1433
az network nsg rule create -g myRG --nsg-name three-tier-nsg \
-n AllowAppToDb --priority 300 \
--source-asgs app-servers \
--destination-asgs db-servers \
--destination-port-ranges 1433 --protocol Tcp --access Allow \
--direction Inbound
# Rule 400: Allow app tier to Redis cache on port 6380
az network nsg rule create -g myRG --nsg-name three-tier-nsg \
-n AllowAppToRedis --priority 400 \
--source-asgs app-servers \
--destination-asgs db-servers \
--destination-port-ranges 6380 --protocol Tcp --access Allow \
--direction Inbound
# Rule 4096: Deny all other inbound (explicit)
az network nsg rule create -g myRG --nsg-name three-tier-nsg \
-n DenyAllInbound --priority 4096 \
--source-address-prefixes '*' \
--destination-address-prefixes '*' \
--destination-port-ranges '*' --protocol '*' --access Deny \
--direction Inbound
NSG Best Practices
Always apply NSGs at the subnet level, not the NIC level. Subnet-level NSGs provide a consistent security baseline for all resources in the subnet. NIC-level NSGs can supplement subnet NSGs for additional restrictions on specific VMs, but managing NSGs at both levels increases complexity. Also, always enable NSG flow logs and send them to Traffic Analytics for visibility into traffic patterns and potential security issues.
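NSG evaluation itself is simple: rules are checked in ascending priority order and the first match wins, falling through to the platform's default inbound deny. A simplified Python model of those semantics (ignoring protocol, direction, and ASG membership for brevity):

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Rule:
    priority: int
    source: str      # CIDR or '*'
    dest_port: str   # port number or '*'
    access: str      # 'Allow' or 'Deny'

def evaluate(rules: list[Rule], src_ip: str, port: int) -> str:
    """NSG semantics: ascending priority, first matching rule wins."""
    for rule in sorted(rules, key=lambda r: r.priority):
        src_ok = rule.source == '*' or \
            ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule.source)
        port_ok = rule.dest_port == '*' or int(rule.dest_port) == port
        if src_ok and port_ok:
            return rule.access
    return 'Deny'  # platform default rule: DenyAllInbound

rules = [
    Rule(100, '0.0.0.0/0', '443', 'Allow'),  # AllowHttpsInbound
    Rule(4096, '*', '*', 'Deny'),            # explicit deny-all
]
print(evaluate(rules, '203.0.113.5', 443))   # Allow
print(evaluate(rules, '203.0.113.5', 22))    # Deny
```

This first-match behavior is why leaving priority gaps (100, 200, 300...) matters: it leaves room to insert more specific rules later without renumbering.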
Private Endpoints & Service Endpoints
Both features secure connectivity to Azure PaaS services, but they work differently and offer different levels of protection. Understanding the differences is critical for choosing the right approach for each scenario.
| Feature | Service Endpoint | Private Endpoint |
|---|---|---|
| IP used for access | Public IP of the PaaS service | Private IP from your VNet subnet |
| DNS resolution | Resolves to public IP | Resolves to private IP (requires Private DNS Zone) |
| On-premises access | Not available from on-premises | Accessible via VPN/ExpressRoute |
| Data exfiltration protection | Limited (service-level only, not resource-specific) | Strong (traffic goes to a specific resource instance) |
| Network path | Optimized route over Azure backbone | Traffic stays entirely within your VNet |
| Cost | Free | ~$7.30/month per endpoint + data processing |
| Recommendation | Simple scenarios, cost-sensitive, Azure-only access | Production workloads, compliance, hybrid access |
param location string = resourceGroup().location

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'mystorageacct' // storage account names must be globally unique
  location: location
  sku: { name: 'Standard_ZRS' }
  kind: 'StorageV2'
  properties: {
    publicNetworkAccess: 'Disabled'
    minimumTlsVersion: 'TLS1_2'
    supportsHttpsTrafficOnly: true
  }
}

resource privateEndpoint 'Microsoft.Network/privateEndpoints@2023-05-01' = {
  name: 'storage-pe'
  location: location
  properties: {
    subnet: {
      id: resourceId('Microsoft.Network/virtualNetworks/subnets', 'prod-spoke-vnet', 'private-endpoints')
    }
    privateLinkServiceConnections: [
      {
        name: 'storage-connection'
        properties: {
          privateLinkServiceId: storageAccount.id
          groupIds: ['blob']
        }
      }
    ]
  }
}

resource privateDnsZone 'Microsoft.Network/privateDnsZones@2020-06-01' = {
  name: 'privatelink.blob.core.windows.net'
  location: 'global'
}

resource dnsZoneLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2020-06-01' = {
  parent: privateDnsZone
  name: 'hub-link'
  location: 'global'
  properties: {
    virtualNetwork: {
      id: resourceId('Microsoft.Network/virtualNetworks', 'hub-vnet')
    }
    registrationEnabled: false
  }
}

resource dnsZoneGroup 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups@2023-05-01' = {
  parent: privateEndpoint
  name: 'default'
  properties: {
    privateDnsZoneConfigs: [
      {
        name: 'config'
        properties: {
          privateDnsZoneId: privateDnsZone.id
        }
      }
    ]
  }
}

Private DNS Zones Are Essential
Private Endpoints require Private DNS Zones for correct name resolution (e.g., privatelink.blob.core.windows.net for Storage). Without proper DNS configuration, clients will resolve to the public IP and bypass the private endpoint entirely. Link Private DNS Zones to all VNets that need access. Centralize Private DNS Zones in the hub subscription and automate zone creation using Azure Policy with the DeployIfNotExists effect.
Route Management and User-Defined Routes
Azure uses system routes by default to direct traffic between subnets, VNets, and the internet. User-Defined Routes (UDRs) override these defaults, allowing you to force traffic through a firewall, Network Virtual Appliance (NVA), or other inspection point. UDRs are essential in hub-and-spoke architectures where all traffic must flow through the central firewall.
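Route selection follows longest-prefix match: when several routes cover a destination, Azure uses the most specific one, which is why a /16 spoke route takes precedence over the 0.0.0.0/0 default. A small Python sketch of that selection logic over a hypothetical effective route table:

```python
import ipaddress

# Example effective route table: Azure picks the most specific matching prefix
routes = {
    '0.0.0.0/0': ('VirtualAppliance', '10.0.1.4'),    # default -> Azure Firewall
    '10.1.0.0/16': ('VnetLocal', None),               # system route: local VNet
    '10.2.0.0/16': ('VirtualAppliance', '10.0.1.4'),  # spoke-to-spoke via firewall
}

def select_route(dest_ip: str):
    """Longest-prefix match over the effective route table."""
    candidates = [
        (net, hop) for net, hop in routes.items()
        if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(net)
    ]
    return max(candidates, key=lambda c: ipaddress.ip_network(c[0]).prefixlen)

print(select_route('10.2.3.4'))  # /16 spoke route beats the 0.0.0.0/0 default
print(select_route('8.8.8.8'))   # only the default route matches
```

The same logic explains a common pitfall: a system route learned via BGP can be more specific than your 0.0.0.0/0 UDR and silently bypass the firewall, which is what the BGP propagation setting below addresses.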
Common Routing Patterns
# Create a route table for spoke subnets
az network route-table create \
--name spoke-to-firewall-rt \
--resource-group myRG \
--location eastus2 \
--disable-bgp-route-propagation true
# Default route: all internet traffic goes to Azure Firewall
az network route-table route create \
--name to-internet \
--route-table-name spoke-to-firewall-rt \
--resource-group myRG \
--address-prefix 0.0.0.0/0 \
--next-hop-type VirtualAppliance \
--next-hop-ip-address 10.0.1.4
# Spoke-to-spoke traffic also goes through the firewall
az network route-table route create \
--name to-staging-spoke \
--route-table-name spoke-to-firewall-rt \
--resource-group myRG \
--address-prefix 10.2.0.0/16 \
--next-hop-type VirtualAppliance \
--next-hop-ip-address 10.0.1.4
# On-premises traffic goes through the firewall
az network route-table route create \
--name to-onprem \
--route-table-name spoke-to-firewall-rt \
--resource-group myRG \
--address-prefix 192.168.0.0/16 \
--next-hop-type VirtualAppliance \
--next-hop-ip-address 10.0.1.4
# Associate the route table with spoke subnets
az network vnet subnet update \
--name web-tier \
--vnet-name prod-spoke-vnet \
--resource-group myRG \
--route-table spoke-to-firewall-rt
az network vnet subnet update \
--name app-tier \
--vnet-name prod-spoke-vnet \
--resource-group myRG \
--route-table spoke-to-firewall-rt
BGP Route Propagation
When you have a VPN or ExpressRoute gateway, Azure automatically propagates BGP routes to all subnets. In hub-and-spoke with forced tunneling through a firewall, you often need to disable BGP route propagation on spoke route tables (using --disable-bgp-route-propagation true) to prevent spoke subnets from learning direct routes to on-premises that bypass the firewall. The GatewaySubnet and AzureFirewallSubnet should always have BGP propagation enabled.
Azure Bastion for Secure VM Access
Azure Bastion provides secure, browser-based RDP and SSH access to VMs without exposing them via public IP addresses. It is deployed into the hub VNet and can reach VMs in peered spoke VNets. Bastion eliminates the need for jump boxes and the security risks associated with public-facing management ports.
Bastion SKU Comparison
| Feature | Developer SKU | Basic SKU | Standard SKU |
|---|---|---|---|
| Portal RDP/SSH | Yes | Yes | Yes |
| Native client support | No | No | Yes (az network bastion tunnel) |
| Peered VNet access | No | Yes | Yes |
| Scale units | N/A | 2 (fixed) | 2-50 (configurable) |
| File upload/download | No | No | Yes |
| Shareable link | No | No | Yes |
| Cost | Free (preview) | ~$140/month | ~$290/month (2 units) |
# Create Azure Bastion (Standard SKU for native client support)
az network bastion create \
--name hub-bastion \
--resource-group myRG \
--vnet-name hub-vnet \
--sku Standard \
--location eastus2
# Connect via native SSH client through Bastion tunnel
az network bastion ssh \
--name hub-bastion \
--resource-group myRG \
--target-resource-id /subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM \
--auth-type ssh-key \
--username azureuser \
--ssh-key ~/.ssh/id_rsa
# Connect via native RDP client through Bastion tunnel
az network bastion rdp \
--name hub-bastion \
--resource-group myRG \
--target-resource-id /subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myWindowsVM
Network Governance with Azure Policy
Azure Policy is essential for enforcing network security standards at scale. Without policy enforcement, individual teams may create VMs with public IPs, deploy resources without NSGs, or create subnets without proper security controls.
Recommended Network Policies
- Deny public IP creation: Prevent VMs from receiving public IP addresses
- Require NSG on every subnet: Ensure every new subnet has an NSG attached
- Require specific NSG rules: Enforce baseline deny rules on all NSGs
- Deny VNet creation outside approved ranges: Enforce IP address plan compliance
- Auto-create Private DNS Zone groups: Ensure private endpoints get DNS records
- Require subnets to use route tables: Force traffic through the central firewall
resource denyPublicIPPolicy 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
  name: 'deny-public-ip'
  properties: {
    displayName: 'Deny Public IP Addresses'
    description: 'Prevents creation of public IP address resources to enforce private-only networking'
    policyType: 'Custom'
    mode: 'All'
    metadata: {
      category: 'Network'
    }
    policyRule: {
      if: {
        field: 'type'
        equals: 'Microsoft.Network/publicIPAddresses'
      }
      then: {
        effect: 'deny'
      }
    }
  }
}

Multi-Region VNet Architecture
VNets are regional resources; they exist in a single Azure region. Multi-region deployments require VNets in each region, connected via global VNet peering or Virtual WAN. Global peering provides low-latency, high-bandwidth connectivity between VNets across regions using the Microsoft backbone network.
Multi-Region Design Considerations
- Hub per region or single hub: For small deployments, a single hub with global peering to all spokes works. For large deployments, deploy a hub in each region and peer the hubs together for an “interconnected hub” pattern.
- DNS consistency: Private DNS Zones are global, but VNet links are regional. Ensure all VNets across regions are linked to the same Private DNS Zones.
- Data residency: Verify that cross-region traffic paths comply with data residency requirements. Global VNet peering keeps traffic on the Microsoft backbone but may cross geographic boundaries.
- Firewall in each region: For low-latency inspection, deploy Azure Firewall in each regional hub rather than routing all traffic to a single global hub.
Global VNet Peering Costs
Global VNet peering incurs data transfer charges (approximately $0.01/GB for intra-continent and $0.02-0.08/GB for inter-continent transfers). This is in addition to standard egress charges. For high-bandwidth cross-region connectivity, compare the cost of global peering with ExpressRoute Global Reach, which may be more cost-effective for sustained high-volume traffic.
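A back-of-the-envelope comparison helps when weighing these options. The per-GB rates below are the illustrative figures from this section, not current pricing, and the function is a hypothetical helper:

```python
# Illustrative global VNet peering rates from the text - verify current pricing
INTRA_CONTINENT_PER_GB = 0.01   # $/GB, assumed
INTER_CONTINENT_PER_GB = 0.05   # $/GB, assumed mid-range

def monthly_peering_cost(gb_per_month: float, rate_per_gb: float) -> float:
    """Global peering bills both the outbound and inbound side of the link."""
    return round(gb_per_month * rate_per_gb * 2, 2)

print(monthly_peering_cost(10_000, INTRA_CONTINENT_PER_GB))  # 200.0
print(monthly_peering_cost(10_000, INTER_CONTINENT_PER_GB))  # 1000.0
```

At sustained volumes in the tens of terabytes per month, numbers like these are the point at which ExpressRoute Global Reach becomes worth pricing out.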
Key Architecture Decisions Checklist
When designing your VNet architecture, work through these critical decisions early in the planning process:
- Region strategy: Which Azure regions will you use? Deploy VNets in each region with global peering or Virtual WAN for connectivity.
- IP address plan: Allocate non-overlapping address space for all current and future VNets, including on-premises networks. Document in a central IPAM tool.
- Hub-spoke vs Virtual WAN: Customer-managed hub-spoke offers more flexibility. Virtual WAN provides automation but with less routing control.
- DNS strategy: Azure-provided DNS, custom DNS servers in the hub, or Azure Private DNS Resolver. Custom DNS is needed for conditional forwarding to on-premises and private endpoint resolution.
- Firewall placement: Azure Firewall or a third-party NVA in the hub provides centralized east-west and north-south traffic inspection.
- Hybrid connectivity: VPN for dev/test and backup, ExpressRoute for production. Consider both active-active VPN and ExpressRoute with VPN failover.
- AKS networking: Azure CNI vs CNI Overlay vs kubenet. This has massive implications for IP address consumption and networking features.
- Governance: Azure Policy to enforce NSG attachment, prevent public IP creation, and require specific subnet configurations across all subscriptions.
Key Takeaways
1. Hub-spoke topology is the recommended pattern for most enterprise Azure deployments.
2. VNet peering provides low-latency, high-bandwidth connectivity between virtual networks.
3. Azure Virtual WAN simplifies hub-spoke at scale with automated routing.
4. Network Security Groups and Azure Firewall provide layered traffic filtering.
5. Private endpoints keep traffic to Azure PaaS services off the public internet.
6. Plan address spaces carefully to avoid overlaps with on-premises and other VNets.
Frequently Asked Questions
What is the hub-spoke network topology in Azure?
A central hub VNet hosts shared services (firewall, VPN/ExpressRoute gateway, DNS, Bastion), and each workload lives in a spoke VNet peered to the hub. Spokes are isolated from each other by default, and all inter-spoke and hybrid traffic can be inspected centrally in the hub.
What is the difference between VNet peering and VPN Gateway?
VNet peering connects two VNets directly over the Microsoft backbone with low latency and no gateway bottleneck. A VPN Gateway provides encrypted IPsec tunnels, typically to on-premises networks, at lower throughput and higher latency.
How do I connect Azure to my on-premises network?
Use a Site-to-Site VPN over the internet for dev/test or backup connectivity, or ExpressRoute for a private, high-bandwidth production connection. Many organizations run ExpressRoute with a VPN as failover.
What address space should I use for Azure VNets?
Use private RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) that do not overlap with on-premises networks or any VNets you may peer, and allocate generously to leave room for growth.
What is Azure Virtual WAN?
A Microsoft-managed networking service that provides hub-and-spoke connectivity with built-in VPN, ExpressRoute, and SD-WAN integration, automating peering and transit routing at scale.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.