DOKS: Managed Kubernetes on DigitalOcean
Guide to DigitalOcean Kubernetes (DOKS) covering cluster creation, node pools, auto-scaling, networking, storage, Container Registry integration, and security.
Prerequisites
- Basic Kubernetes concepts (pods, services, deployments)
- DigitalOcean account with doctl configured
- kubectl installed locally
DigitalOcean Kubernetes (DOKS) Overview
DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that handles the complexity of running and maintaining a Kubernetes control plane so you can focus on deploying and managing your containerized applications. DOKS provides a fully managed, highly available control plane at no additional cost. You only pay for the worker nodes (Droplets), block storage volumes, and load balancers that your cluster uses.
DOKS stands out for its simplicity compared to EKS, AKS, and GKE. Cluster creation takes minutes, automatic upgrades keep your cluster current, and the integration with DigitalOcean's Container Registry (DOCR) provides seamless image management. For teams that want Kubernetes without the operational overhead of managing control plane infrastructure, DOKS is an excellent choice.
This guide covers cluster creation, node pool management, auto-scaling, networking, storage, security, monitoring, and production best practices for running workloads on DOKS.
Creating a DOKS Cluster
Using doctl
# Create a cluster with HA control plane
doctl kubernetes cluster create prod-k8s \
--region nyc3 \
--version 1.31.1-do.0 \
--ha \
--surge-upgrade \
--auto-upgrade \
--node-pool "name=worker-pool;size=s-4vcpu-8gb;count=3;min-nodes=2;max-nodes=10;auto-scale=true;label=workload=general;tag=k8s;tag=production" \
--vpc-uuid <vpc-uuid> \
--maintenance-window "sunday=04:00" \
--wait
# Save kubeconfig
doctl kubernetes cluster kubeconfig save prod-k8s
# Verify cluster
kubectl get nodes
kubectl cluster-info
Free Control Plane
The DOKS control plane is completely free. You are only charged for the worker nodes, which are billed at standard Droplet rates. A cluster with three s-4vcpu-8gb nodes costs $144/month total ($48 per node). The HA control plane is also free and recommended for production clusters.
Kubernetes Versions
DOKS typically supports the three most recent minor versions of Kubernetes. When a new version is released, the oldest supported version is deprecated. Auto-upgrade keeps your cluster on the latest patch version within your chosen minor version. Surge upgrade ensures zero-downtime during node upgrades by creating replacement nodes before draining old ones.
# List available Kubernetes versions
doctl kubernetes options versions
# Upgrade cluster to a new version
doctl kubernetes cluster upgrade prod-k8s --version 1.31.2-do.0
Node Pools
A node pool is a group of worker nodes with the same size, configuration, and scaling settings. DOKS supports multiple node pools per cluster, allowing you to run different workload types on appropriately sized nodes. For example, you might have a general-purpose pool for web services and a memory-optimized pool for in-memory caches.
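To land pods on a dedicated pool like the tainted cache pool created below, the pod spec needs a matching toleration and, typically, a node selector. A minimal sketch (the pod name and image are illustrative; the label and taint values match the cache pool):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-cache            # illustrative name
spec:
  nodeSelector:
    workload: cache            # matches the pool's label
  tolerations:
  - key: "workload"
    operator: "Equal"
    value: "cache"
    effect: "NoSchedule"       # matches the pool's taint
  containers:
  - name: redis
    image: redis:7             # illustrative image
```

Without the toleration, the taint keeps general workloads off the cache nodes; without the node selector, the cache pod could still land on a general node.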
# Add a memory-optimized node pool
doctl kubernetes cluster node-pool create prod-k8s \
--name "cache-pool" \
--size m-2vcpu-16gb \
--count 2 \
--min-nodes 1 \
--max-nodes 4 \
--auto-scale \
--label workload=cache \
--taint "workload=cache:NoSchedule" \
--tag k8s \
--tag cache
# List node pools
doctl kubernetes cluster node-pool list prod-k8s
# Resize a node pool
doctl kubernetes cluster node-pool update prod-k8s <pool-id> \
--count 5
Node Pool Autoscaling
DOKS uses the Kubernetes Cluster Autoscaler to automatically add or remove nodes based on pending pod resource requests. When pods cannot be scheduled due to insufficient resources, the autoscaler adds nodes (up to max-nodes). When nodes are underutilized and pods can be rescheduled to fewer nodes, the autoscaler removes nodes (down to min-nodes). The autoscaler evaluates every 10 seconds and typically scales up within 1-2 minutes.
Autoscaling Best Practices
Always set resource requests and limits on your pods. The autoscaler makes decisions based on resource requests, not actual usage. Without requests, the autoscaler cannot determine when to scale. Set min-nodes to at least 2 for production workloads to ensure redundancy, and set max-nodes to a reasonable limit to prevent runaway scaling and unexpected costs.
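As a sketch, a container-level resources stanza with explicit requests and limits might look like this (the values are illustrative and should be tuned per workload):

```yaml
resources:
  requests:
    cpu: "250m"        # the autoscaler and scheduler act on these
    memory: "256Mi"
  limits:
    cpu: "500m"        # hard cap; the container is throttled/killed beyond this
    memory: "512Mi"
```

The requests drive both bin-packing and scale-up decisions; the limits protect neighboring pods from resource contention.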
Networking
Load Balancers
When you create a Kubernetes Service of type LoadBalancer, DOKS automatically provisions a DigitalOcean Load Balancer. The load balancer is configured with health checks, forwarding rules, and SSL termination based on annotations in your Service manifest.
apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-size-slug: "lb-small"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "<cert-id>"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    service.beta.kubernetes.io/do-loadbalancer-hostname: "app.example.com"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - name: https
    port: 443
    targetPort: 8080
  - name: http
    port: 80
    targetPort: 8080
Ingress Controllers
For more advanced routing (path-based, host-based, rate limiting, authentication), deploy an Ingress Controller. The NGINX Ingress Controller is available as a one-click installation from the DigitalOcean Marketplace or can be installed via Helm. It creates a single Load Balancer that routes traffic to multiple services based on Ingress rules.
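Once the controller is installed, routing is defined with Ingress resources. A minimal host- and path-based sketch (the hostname and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx        # routes through the NGINX controller
  rules:
  - host: app.example.com        # illustrative hostname
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service    # illustrative service name
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

Both paths share the controller's single DigitalOcean Load Balancer, which is the main cost advantage over one LoadBalancer Service per application.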
# Install NGINX Ingress Controller via Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-size-slug"="lb-small" \
--set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-certificate-id"="<cert-id>"
VPC-Native Networking
DOKS clusters are deployed inside your VPC, and all inter-node communication uses private networking. By default, DOKS uses Cilium as the Container Network Interface (CNI) plugin, which provides high-performance networking with eBPF, network policy enforcement, and built-in observability. Pod-to-pod communication across nodes uses VXLAN encapsulation over the VPC private network.
Storage
Persistent Volumes
DOKS integrates with DigitalOcean Block Storage through the CSI (Container Storage Interface) driver, which is pre-installed on every cluster. When you create a PersistentVolumeClaim, a Block Storage Volume is automatically provisioned and attached to the node running your pod. Volumes are billed at $0.10/GB/month.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: registry.digitalocean.com/myorg/app:latest
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data
Container Registry Integration
DigitalOcean Container Registry (DOCR) integrates natively with DOKS. When you connect your registry to a cluster, image pull credentials are automatically configured in every namespace, so pods can pull images from your private registry without manual secret management.
# Create a container registry
doctl registry create myorg-registry --subscription-tier professional
# Connect registry to your cluster
doctl registry kubernetes-manifest | kubectl apply -f -
# Or connect via doctl
doctl kubernetes cluster registry add prod-k8s
# Build and push an image
docker build -t registry.digitalocean.com/myorg-registry/api:v1 .
docker push registry.digitalocean.com/myorg-registry/api:v1
Monitoring and Observability
DOKS integrates with the DigitalOcean Monitoring stack, which includes a pre-installed Kubernetes Metrics Server for horizontal pod autoscaling. For comprehensive observability, you can install the DigitalOcean Kubernetes Monitoring Stack from the Marketplace, which deploys Prometheus, Grafana, and Alertmanager with pre-configured dashboards for cluster, node, and pod metrics.
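Because the Metrics Server is pre-installed, horizontal pod autoscaling works without extra setup. A HorizontalPodAutoscaler sketch (the target deployment name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # illustrative target deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # scale out above 70% of requested CPU
```

The HPA scales pods; when the extra replicas no longer fit, the Cluster Autoscaler described earlier adds nodes, so the two mechanisms work in tandem.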
# Install the monitoring stack (one-click)
doctl kubernetes 1-click install prod-k8s --1-clicks monitoring
# Or install via Helm for more control
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install kube-prometheus prometheus-community/kube-prometheus-stack \
--namespace monitoring \
--create-namespace
Security Best Practices
- Enable HA control plane for production clusters to ensure control plane availability during maintenance and failures.
- Use Network Policies (enforced by Cilium) to restrict pod-to-pod communication and implement zero-trust networking.
- Run pods as non-root with read-only root filesystems and dropped capabilities.
- Scan container images for vulnerabilities before deploying. DOCR includes built-in vulnerability scanning.
- Use Kubernetes RBAC to limit access to cluster resources. Create separate service accounts for each application.
- Store secrets in Kubernetes Secrets (encrypted at rest on DOKS) or use external secret managers like Vault.
- Enable surge upgrades to ensure zero-downtime during node upgrades and version patches.
- Set resource requests and limits on all pods to prevent resource contention and enable proper autoscaling.
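As a sketch of the Network Policy point above, a common pattern is a default-deny policy for the namespace followed by narrow allow rules (the label names and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}          # selects all pods in the namespace
  policyTypes:
  - Ingress                # with no ingress rules, all inbound traffic is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
spec:
  podSelector:
    matchLabels:
      app: api             # illustrative label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web         # only web pods may reach the api pods
    ports:
    - port: 8080
      protocol: TCP
```

Cilium enforces these policies on DOKS, so no additional policy engine needs to be installed.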
Production Checklist
# Production DOKS cluster checklist:
# [x] HA control plane enabled
# [x] Auto-upgrade enabled with surge upgrade
# [x] Maintenance window set to low-traffic period
# [x] Multiple node pools for different workload types
# [x] Autoscaling configured with appropriate min/max
# [x] NGINX Ingress Controller or equivalent installed
# [x] Container Registry connected for image pull
# [x] Monitoring stack deployed (Prometheus + Grafana)
# [x] Network Policies defined for pod isolation
# [x] Resource requests/limits set on all deployments
# [x] Pod Disruption Budgets configured for critical services
# [x] Cluster deployed in a custom VPC
# [x] Cloud Firewall rules applied to worker nodes
DOKS Limitations
DOKS does not support custom control plane configurations, etcd access, or custom admission webhooks that modify the control plane. Node pool sizes are limited to standard Droplet sizes. The maximum number of nodes per cluster is 512, and the maximum number of node pools is 25. If you need more control or higher scale, consider self-managed Kubernetes on dedicated Droplets or a hyperscale provider.
Key Takeaways
- DOKS control plane is free; you only pay for worker node Droplets.
- HA control plane is free and recommended for all production clusters.
- Node pool auto-scaling adds/removes nodes based on pod resource requests.
- Cilium CNI provides eBPF-based networking with built-in network policy support.
- Block Storage CSI driver auto-provisions persistent volumes from DigitalOcean Volumes.
- Container Registry integration provides automatic image pull credentials.
Frequently Asked Questions
How much does DOKS cost?
The control plane is free, including the HA option. You pay standard Droplet rates for worker nodes, plus any block storage volumes and load balancers the cluster uses.
Does DOKS support auto-scaling?
Yes. Node pools scale via the Kubernetes Cluster Autoscaler based on pod resource requests, and the pre-installed Metrics Server enables horizontal pod autoscaling.
What CNI does DOKS use?
DOKS uses Cilium by default, which provides eBPF-based networking, network policy enforcement, and built-in observability.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.