LKE: Managed Kubernetes on Linode
Deploy and manage Kubernetes clusters on LKE with free control plane, autoscaling node pools, persistent storage, ingress, and security best practices.
Prerequisites
- Basic Kubernetes concepts (pods, services, deployments)
- kubectl CLI installed and configured
- Linode account with billing configured
Introduction to Linode Kubernetes Engine (LKE)
Linode Kubernetes Engine (LKE) is Linode's managed Kubernetes service that handles the complexity of Kubernetes control plane operations while giving you full access to the Kubernetes API. What sets LKE apart from competing managed Kubernetes services like Amazon EKS, Azure AKS, and Google GKE is its pricing model: the Kubernetes control plane is completely free. You only pay for the worker node Linodes that run your workloads. This makes LKE one of the most cost-effective managed Kubernetes platforms available, especially for small to medium-sized clusters.
LKE provisions fully managed, CNCF-conformant Kubernetes clusters in minutes. The control plane — including the API server, etcd, scheduler, and controller manager — is managed by Linode with automatic updates, high availability options, and monitoring. Worker nodes run on standard Linode compute instances, giving you access to the full range of plan families (Shared, Dedicated, High Memory, GPU, and Premium) for your node pools.
This guide covers everything from creating your first LKE cluster to configuring autoscaling, setting up ingress controllers, managing persistent storage, implementing security best practices, and running production workloads with confidence.
Creating an LKE Cluster
You can create an LKE cluster through the Cloud Manager, the Linode CLI, the API, or infrastructure-as-code tools like Terraform. Here is how to create a production-ready cluster with multiple node pools:
Using the Cloud Manager
- Navigate to Kubernetes in the left sidebar and click Create Cluster.
- Enter a cluster label (e.g., "prod-cluster") and select a Kubernetes version (use the latest stable version).
- Choose a region where your users and data reside.
- Enable HA Control Plane for production clusters (adds redundancy to the control plane components).
- Add one or more node pools, selecting the plan type and count for each pool.
- Click Create Cluster and wait 2-5 minutes for provisioning.
Using the CLI
# Create an LKE cluster with HA and two node pools
linode-cli lke cluster-create \
  --label prod-cluster \
  --region us-east \
  --k8s_version 1.31 \
  --control_plane.high_availability true \
  --node_pools.type g6-dedicated-4 --node_pools.count 3 \
  --node_pools.type g6-dedicated-8 --node_pools.count 2
# Download the kubeconfig
linode-cli lke kubeconfig-view <cluster-id> --text --no-headers | base64 -d > ~/.kube/config
# Verify cluster access
kubectl get nodes
kubectl cluster-info
Free Control Plane
LKE does not charge for the Kubernetes control plane, even with HA enabled. Your monthly cost is simply the sum of your worker node Linode prices. For example, a cluster with three Dedicated 4 GB nodes ($36/month each) costs $108/month total, with no additional control plane fees. This contrasts sharply with EKS (about $73/month per cluster) and GKE (a roughly $73/month cluster management fee beyond its free-tier allowance).
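The arithmetic above can be sketched as a quick shell calculation (the node count and per-node price are illustrative; always verify against current Linode pricing):

```shell
# Monthly cost for an LKE cluster: worker nodes only, no control plane fee.
# Figures are illustrative -- check current Linode pricing before budgeting.
NODES=3
PRICE_PER_NODE=36     # Dedicated 4 GB, USD per month
CONTROL_PLANE_FEE=0   # LKE does not bill for the control plane, even with HA
TOTAL=$((NODES * PRICE_PER_NODE + CONTROL_PLANE_FEE))
echo "LKE monthly total: \$${TOTAL}"
```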
Understanding Node Pools
Node pools are groups of identically configured Linodes that serve as Kubernetes worker nodes. Each pool specifies a Linode plan type and a desired node count. You can create multiple node pools within a single cluster to support different workload requirements — for example, a Dedicated CPU pool for application pods and a High Memory pool for caching services:
# Add a new node pool to an existing cluster
linode-cli lke pool-create <cluster-id> \
  --type g6-highmem-1 \
  --count 2
# Scale a node pool
linode-cli lke pool-update <cluster-id> <pool-id> \
  --count 5
# Delete a node pool
linode-cli lke pool-delete <cluster-id> <pool-id>
# Recycle (replace) all nodes in a pool
linode-cli lke pool-recycle <cluster-id> <pool-id>
When you recycle a pool, LKE gracefully drains each node (respecting PodDisruptionBudgets), deletes it, and provisions a replacement. This is useful for applying kernel updates, rotating nodes for security, or recovering from node-level issues.
Autoscaling Node Pools
LKE supports the Kubernetes Cluster Autoscaler, which automatically adjusts the number of nodes in a pool based on pending pod scheduling. When pods cannot be scheduled due to insufficient resources, the autoscaler adds nodes. When nodes are underutilized, it removes them to save costs:
# Enable autoscaling on a node pool
linode-cli lke pool-update <cluster-id> <pool-id> \
  --autoscaler.enabled true \
  --autoscaler.min 3 \
  --autoscaler.max 10
Configure autoscaler behavior through Kubernetes resource requests and limits on your pods. The autoscaler makes scaling decisions based on whether pending pods can be scheduled, not on raw CPU or memory utilization. Set appropriate resource requests to ensure the autoscaler scales at the right time:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: my-app:latest
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "1000m"
              memory: "1Gi"
      # Use nodeSelector to target specific pools
      nodeSelector:
        lke.linode.com/pool-id: "12345"
Autoscaler Best Practices
Set minimum node count to handle your baseline traffic without scaling events. Scaling up takes 1-3 minutes as new Linodes are provisioned and join the cluster. For latency-sensitive applications, maintain enough headroom that scaling events do not impact user experience. Consider using PodDisruptionBudgets to prevent the autoscaler from removing nodes that would disrupt critical workloads.
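As a sketch of that last point, a PodDisruptionBudget for the web-app Deployment shown earlier (the app: web-app label is assumed from that example) keeps a minimum number of replicas running while nodes are drained during recycles or autoscaler scale-downs:

```yaml
# Hypothetical PDB for the web-app Deployment; tune minAvailable to your replica count.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-app-pdb
spec:
  minAvailable: 2        # drains pause rather than drop below two ready pods
  selector:
    matchLabels:
      app: web-app
```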
Persistent Storage with CSI
LKE includes the Linode Block Storage CSI (Container Storage Interface) driver pre-installed, allowing you to provision persistent volumes dynamically. When a pod requests a PersistentVolumeClaim, the CSI driver automatically creates a Linode Block Storage volume and attaches it to the appropriate node:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: linode-block-storage-retain
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: linode-block-storage-retain
        resources:
          requests:
            storage: 50Gi
LKE provides two storage classes: linode-block-storage (delete volume when PVC is deleted) and linode-block-storage-retain (retain volume after PVC deletion). Use the retain class for databases and other stateful workloads where accidental volume deletion would be catastrophic.
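The StatefulSet above reads POSTGRES_PASSWORD from a Secret named postgres-secret. A minimal sketch of that Secret follows; the password value is a placeholder, so generate a real one and keep it out of version control:

```yaml
# Placeholder Secret for the StatefulSet example; never commit real credentials.
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
stringData:
  password: change-me   # replace with a generated password
```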
Load Balancing and Ingress
LKE integrates with Linode NodeBalancers to provide Kubernetes LoadBalancer services. When you create a Service of type LoadBalancer, the Linode CCM (Cloud Controller Manager) automatically provisions a NodeBalancer and configures it to route traffic to your pods:
apiVersion: v1
kind: Service
metadata:
  name: web-app
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-throttle: "20"
    service.beta.kubernetes.io/linode-loadbalancer-check-type: "http"
    service.beta.kubernetes.io/linode-loadbalancer-check-path: "/health"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: web-app
For more sophisticated routing (path-based routing, TLS termination, virtual hosts), deploy an Ingress controller like NGINX Ingress or Traefik. Install it with Helm and let it create its own LoadBalancer service:
# Install NGINX Ingress Controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/linode-loadbalancer-throttle"="20"
# Create an Ingress resource
kubectl apply -f - <<'YAML'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
YAML
Security Best Practices
Securing an LKE cluster requires attention at multiple layers — the cluster itself, the network, and the workloads running inside:
RBAC and Authentication
LKE uses the kubeconfig token for authentication by default. For production clusters, implement proper RBAC to limit what different users and service accounts can do:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: app-deployer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: production
  name: deploy-binding
subjects:
  - kind: ServiceAccount
    name: ci-deployer
    namespace: production
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
Network Policies
LKE uses Calico as its network policy engine, which supports Kubernetes NetworkPolicies for fine-grained traffic control between pods. Implement network policies to restrict communication to only the paths your application requires:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-frontend
      ports:
        - protocol: TCP
          port: 8080
Pod Security Standards
Enforce pod security standards to prevent containers from running with unnecessary privileges. Use Kubernetes Pod Security Admission to enforce restricted profiles:
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
Monitoring and Observability
LKE clusters benefit from standard Kubernetes observability tooling. Deploy the Prometheus/Grafana stack for comprehensive monitoring:
# Install kube-prometheus-stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.service.type=NodePort \
  --set prometheus.prometheusSpec.retention=30d
Key metrics to monitor on LKE include node CPU and memory utilization, pod restart counts, API server request latency, persistent volume usage, and network throughput. Set up alerts for node NotReady conditions, pod OOMKilled events, and persistent volume capacity thresholds.
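Those alerts can be expressed as a PrometheusRule that the kube-prometheus-stack install above picks up. A sketch, where the rule names, thresholds, and the release: monitoring label (which must match the Helm release name) are assumptions to adapt:

```yaml
# Hypothetical alert rules; requires the kube-prometheus-stack CRDs from the install above.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: lke-node-alerts
  namespace: monitoring
  labels:
    release: monitoring          # must match the Helm release so Prometheus loads it
spec:
  groups:
    - name: lke-node-health
      rules:
        - alert: NodeNotReady
          expr: kube_node_status_condition{condition="Ready",status="true"} == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "Node {{ $labels.node }} has been NotReady for 5 minutes"
        - alert: ContainerOOMKilled
          expr: kube_pod_container_status_last_terminated_reason{reason="OOMKilled"} == 1
          for: 1m
          labels:
            severity: warning
          annotations:
            summary: "Container in pod {{ $labels.pod }} was OOMKilled"
```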
Upgrading Kubernetes Versions
LKE supports in-place Kubernetes version upgrades through the Cloud Manager and CLI. Upgrades update the control plane first, then you can recycle node pools to update worker nodes. Always test upgrades in a staging cluster before applying them to production:
# View available Kubernetes versions
linode-cli lke versions-list
# Upgrade cluster control plane
linode-cli lke cluster-update <cluster-id> \
  --k8s_version 1.31
# Recycle node pools to update worker nodes
linode-cli lke pool-recycle <cluster-id> <pool-id>
Upgrade Considerations
Before upgrading, review the Kubernetes changelog for breaking changes and deprecated APIs. Test your workloads against the new version in a staging cluster. Ensure your Helm charts, operators, and custom controllers are compatible with the target version. LKE typically supports the three most recent minor Kubernetes versions.
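One low-tech way to catch deprecated APIs before an upgrade is grepping your manifests for API versions removed in recent releases. A sketch, where the directory, sample manifest, and version list are illustrative and not exhaustive (purpose-built tools such as pluto do this more thoroughly):

```shell
# Illustrative pre-upgrade scan: flag manifests that still use removed API versions.
# The sample manifest below exists only to demonstrate a match.
mkdir -p /tmp/manifests
cat > /tmp/manifests/legacy.yaml <<'EOF'
apiVersion: policy/v1beta1   # PodSecurityPolicy was removed in Kubernetes 1.25
kind: PodSecurityPolicy
EOF
grep -rnE 'apiVersion: (policy/v1beta1|extensions/v1beta1|batch/v1beta1)' /tmp/manifests \
  || echo "no removed apiVersions found"
```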
Cost Optimization for LKE
Keeping LKE costs under control requires thoughtful resource planning:
- Right-size node pools: Use resource monitoring to determine actual CPU and memory consumption. Over-provisioned nodes waste money; under-provisioned nodes cause scheduling failures and poor performance.
- Use autoscaling: Configure autoscaler min/max values to scale down during low-traffic periods and scale up during peak demand.
- Mix plan types: Use Shared CPU nodes for non-critical workloads (dev/staging) and Dedicated CPU for production. High Memory nodes are cost-effective for caching workloads.
- Clean up unused resources: Delete orphaned PersistentVolumes, unused NodeBalancers, and idle node pools. The Block Storage CSI driver creates volumes that persist even after PVC deletion with the retain policy.
- Remember: control plane is free. You can create as many clusters as you need for environment isolation (dev, staging, production) without incurring control plane costs.
Summary
LKE provides a fully managed, CNCF-conformant Kubernetes experience with the industry-leading advantage of a free control plane. Combined with Linode's transparent pricing, flexible node pool options, built-in CSI storage driver, and NodeBalancer integration, LKE offers one of the most cost-effective and developer-friendly managed Kubernetes platforms available. For teams that want the power of Kubernetes without the operational overhead and surprising bills of larger cloud providers, LKE is an outstanding choice.
Key Takeaways
- LKE provides a free Kubernetes control plane — you only pay for worker node Linodes.
- Node pool autoscaling automatically adjusts cluster size based on pod scheduling demand.
- The Linode Block Storage CSI driver enables dynamic persistent volume provisioning.
- LoadBalancer services automatically provision Linode NodeBalancers for external traffic.
- Calico, LKE's network policy engine, provides built-in support for Kubernetes NetworkPolicies.
Written by CloudToolStack Team
Cloud engineers and architects with hands-on experience across AWS, Azure, and GCP. We write guides based on real-world production patterns, not just documentation rewrites.
Disclaimer: This guide is for educational purposes. Cloud services change frequently; always refer to official documentation for the latest information. AWS, Azure, and GCP are trademarks of their respective owners.