Install a Single-Node Kubernetes Cluster on Ubuntu with Cilium CNI
Build your own local Kubernetes lab using kubeadm and Cilium in under an hour with this comprehensive setup guide
Difficulty: 🟡 Intermediate
Estimated Time: 30-45 minutes
Prerequisites: Ubuntu 20.04+ system, 4+ CPUs & 4+ GB RAM, sudo/root privileges, Basic Linux command line knowledge
What You'll Learn
This tutorial covers essential Kubernetes cluster setup concepts and tools:
- Kubernetes Cluster Setup - Complete single-node installation using kubeadm
- Cilium CNI Integration - Next-generation networking with eBPF
- System Configuration - Kernel modules, networking, and container runtime setup
- Cluster Verification - Testing and validating your installation
- Storage Configuration - Setting up persistent storage for your cluster
- Production Considerations - Security, monitoring, and backup practices for growing beyond a lab setup
Prerequisites
- Ubuntu 20.04+ system (Tested on Ubuntu 22.04)
- 4+ CPUs & 4+ GB RAM
- All commands require sudo or root privileges
- Basic Linux command line knowledge
Related Tutorials
- Kubernetes Monitoring - Metrics Server installation
- HPA Autoscaling - Horizontal Pod Autoscaler setup
- Main Tutorials Hub - Step-by-step implementation guides
Introduction
If you're looking to deploy Kubernetes on a single Ubuntu machine for development, testing, or learning purposes, this guide is for you! We'll walk through setting up a single-node Kubernetes cluster using kubeadm and configuring it with the Cilium CNI, known for its eBPF-powered networking, observability, and security.
Step-by-Step Instructions
Step 1: Uninstall Old Versions (If Applicable)
If you have previous Kubernetes installations, clean them up first:
sudo kubeadm reset -f
sudo apt-get purge -y kubeadm kubectl kubelet kubernetes-cni 'kube*'
sudo apt-get autoremove -y
sudo rm -rf ~/.kube
Step 2: Disable Swap
Kubernetes requires swap to be disabled for proper memory management and scheduling:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
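A quick check that swap is really off: swapon should print nothing and free should report 0B of swap:
# Verify swap is disabled
swapon --show
free -h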
Step 3: Enable Kernel Modules and Network Settings
Load br_netfilter Module
# Load br_netfilter
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter
Apply sysctl Settings
# Apply sysctl settings
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
What these settings do:
- br_netfilter: Enables iptables to see bridged traffic
- bridge-nf-call-*: Allows iptables to filter bridged traffic
- ip_forward: Enables IP forwarding for pod networking
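To confirm the module loaded and the sysctl values took effect, you can read them back:
# Verify the module and sysctl settings
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward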
Step 4: Install Container Runtime (containerd)
sudo apt update && sudo apt install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
Key Configuration Changes:
- SystemdCgroup = true: Enables the systemd cgroup driver for better resource management
- This ensures compatibility with the kubelet, which also defaults to the systemd cgroup driver
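A quick sanity check that the cgroup change is in place and the runtime restarted cleanly:
# Verify the systemd cgroup driver and containerd health
grep SystemdCgroup /etc/containerd/config.toml
systemctl is-active containerd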
Step 5: Install kubeadm, kubelet, and kubectl
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl
# Add Kubernetes GPG key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes.gpg
# Add Kubernetes repository
echo "deb [signed-by=/etc/apt/trusted.gpg.d/kubernetes.gpg] \
https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" \
| sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install Kubernetes components
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
# Prevent automatic updates
sudo apt-mark hold kubelet kubeadm kubectl
Components Explained:
- kubeadm: Tool for bootstrapping Kubernetes clusters
- kubelet: Agent that runs on each node
- kubectl: Command-line tool for interacting with the cluster
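Before initializing the cluster, confirm the versions that were installed:
# Check installed component versions
kubeadm version
kubectl version --client
kubelet --version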
Step 6: Initialize the Kubernetes Cluster
Use a Cilium-compatible Pod CIDR (e.g., 10.0.0.0/16):
sudo kubeadm init --pod-network-cidr=10.0.0.0/16
Important: Save the kubeadm join ... command from the output; you'll need it if you later add worker nodes to this cluster.
Expected Output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f <podnetwork>" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.100:6443 --token abcdef.1234567890abcdef \
--discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
Note: the join token is valid for 24 hours by default.
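If the token has expired by the time you add a node, you can print a fresh join command at any time:
# Generate a new token and print the matching join command
sudo kubeadm token create --print-join-command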
Step 7: Set Up kubectl Access for Your User
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternative for root user:
export KUBECONFIG=/etc/kubernetes/admin.conf
Step 8: Install Cilium CNI
Install Cilium CLI (Optional, but Recommended)
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvf cilium-linux-amd64.tar.gz -C /usr/local/bin
cilium version
Deploy Cilium to the Cluster
cilium install
What Cilium Provides:
- eBPF-powered networking for high performance
- Advanced observability with Hubble
- Security features like network policies
- Transparent encryption and load balancing
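Before moving on, wait until Cilium reports ready; the CLI can block until the deployment is healthy:
# Block until Cilium is fully up
cilium status --wait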
Step 9: Allow Scheduling on Control Plane Node (Single Node Fix)
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Why this is needed: By default, control plane nodes are tainted to prevent regular pods from scheduling. In a single-node setup, we need to remove this taint.
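After removing the taint, the node's Taints field should read <none>:
# Verify no taints remain on the node
kubectl describe node | grep -i taints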
Step 10: Verify Everything
Check Node Status
kubectl get nodes
Expected Output:
NAME STATUS ROLES AGE VERSION
kube1 Ready control-plane 4m18s v1.30.13
Check Pod Status
kubectl get pods -A
Expected Output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system cilium-bv76r 1/1 Running 0 5m46s
kube-system cilium-envoy-l8vzw 1/1 Running 0 5m46s
kube-system cilium-operator-585fb46f8b-ll5s6 1/1 Running 0 5m46s
kube-system coredns-55cb58b774-4xrd7 1/1 Running 0 8m24s
kube-system coredns-55cb58b774-nps8w 1/1 Running 0 8m24s
kube-system etcd-kube1 1/1 Running 0 8m40s
kube-system kube-apiserver-kube1 1/1 Running 0 8m45s
kube-system kube-controller-manager-kube1 1/1 Running 0 8m39s
kube-system kube-proxy-t94bp 1/1 Running 0 8m24s
kube-system kube-scheduler-kube1 1/1 Running 0 8m43s
Check Cilium Status
cilium status
Expected Output:
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet cilium-envoy Desired: 1, Ready: 1/1, Available: 1/1
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 1
cilium-envoy Running: 1
cilium-operator Running: 1
clustermesh-apiserver
hubble-relay
Cluster Pods: 2/2 managed by Cilium
Helm chart version: 1.17.2
Image versions cilium quay.io/cilium/cilium:v1.17.2@sha256:3c4c9932b5d8368619cb922a497ff2ebc8def5f41c18e410bcc84025fcd385b1: 1
cilium-envoy quay.io/cilium/cilium-envoy:v1.31.5-1741765102-efed3defcc70ab5b263a0fc44c93d316b846a211@sha256:377c78c13d2731f3720f931721ee309159e782d882251709cb0fac3b42c03f4b: 1
cilium-operator quay.io/cilium/operator-generic:v1.17.2@sha256:81f2d7198366e8dec2903a3a8361e4c68d47d19c68a0d42f0b7b6e3f0523f249: 1
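For a deeper check than cilium status, the Cilium CLI includes an end-to-end connectivity test that deploys probe pods and exercises pod-to-pod, DNS, and service traffic. It takes several minutes to run:
# Run Cilium's built-in end-to-end connectivity test
cilium connectivity test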
Storage Configuration
Check Storage Classes
kubectl get storageclass
You should see something like this:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
standard (default) kubernetes.io/... Delete Immediate true ...
If No Default Storage Class Is Present
Option 1: Mark an Existing StorageClass as Default
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Option 2: Install Local Path Provisioner (Non-Production)
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
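To verify that dynamic provisioning actually works, you can create a throwaway PVC together with a pod that mounts it (local-path binds volumes only once a consumer is scheduled). The test-pvc and test-pvc-pod names below are just placeholders:
# Create a test PVC plus a pod that mounts it, then check the PVC binds
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pvc-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF
kubectl get pvc test-pvc
# Clean up afterwards
kubectl delete pod test-pvc-pod && kubectl delete pvc test-pvc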
Advanced Configuration
Custom Cilium Configuration
Create a custom Cilium configuration file:
# cilium-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  # Enable Hubble for observability
  enable-hubble: "true"
  hubble-listen-address: ":4244"
  # Enable transparent encryption
  enable-encryption: "false"
  # Configure IPAM
  ipam-mode: "kubernetes"
  # Enable bandwidth manager
  enable-bandwidth-manager: "true"
Apply the configuration, then restart the Cilium agents so they pick up the new settings:
kubectl apply -f cilium-config.yaml
kubectl -n kube-system rollout restart daemonset/cilium
Enable Hubble for Observability
# Enable Hubble
cilium hubble enable
# Port forward Hubble UI
cilium hubble ui
# Verify Hubble shows as enabled
cilium status
Network Policies
Create a basic network policy:
# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress: []
  egress: []
Apply the policy:
kubectl apply -f network-policy.yaml
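To see the policy take effect: with default-deny applied, traffic from pods in the default namespace should now be dropped, so a quick probe (the -T 5 flag gives wget a 5-second timeout) should fail instead of fetching the page:
# This should fail under the default-deny policy
kubectl run np-test --image=busybox --rm -it --restart=Never -- wget -T 5 -O- http://example.com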
Troubleshooting
Common Issues and Solutions
Issue: "The connection to the server localhost:8080 was refused"
Solution: Check if kubectl is configured correctly
kubectl config view
export KUBECONFIG=/etc/kubernetes/admin.conf
Issue: "coredns pods are in Pending state"
Solution: Check if CNI is properly installed
kubectl get pods -n kube-system
cilium status
Issue: "Failed to create pod sandbox"
Solution: Check containerd status
sudo systemctl status containerd
sudo journalctl -u containerd
Issue: "Insufficient memory"
Solution: Check system resources
free -h
df -h
Debug Commands
# Check cluster info
kubectl cluster-info
# Check component status (deprecated in recent Kubernetes, but still informative)
kubectl get componentstatuses
# Check events
kubectl get events --sort-by='.lastTimestamp'
# Check logs
kubectl logs -n kube-system cilium-operator-xxx
Production Considerations
Security Best Practices
- RBAC Configuration: Set up proper role-based access control
- Network Policies: Implement network segmentation
- Pod Security Standards: Enable pod security admission (see the example after this list)
- Regular Updates: Keep Kubernetes and Cilium updated
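For the pod security admission point above, a single namespace label is enough to enforce a profile; this example applies the restricted standard to the default namespace:
# Enforce the "restricted" Pod Security Standard on a namespace
kubectl label namespace default pod-security.kubernetes.io/enforce=restricted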
Monitoring and Logging
- Prometheus Integration: Set up metrics collection
- Grafana Dashboards: Visualize cluster metrics
- Centralized Logging: Aggregate logs from all components
- Alerting: Set up proactive monitoring
Backup and Recovery
- etcd Backup: Regular backup of cluster state (see the sketch after this list)
- Configuration Backup: Version control all configurations
- Disaster Recovery Plan: Document recovery procedures
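As a sketch of the etcd backup point above, assuming the default kubeadm certificate paths under /etc/kubernetes/pki/etcd and an etcdctl binary available on the node:
# Snapshot etcd (paths assume a default kubeadm install)
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
# Verify the snapshot is readable
sudo ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db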
Alternative CNI Options
If you prefer a different CNI, install one of the following instead of Cilium; a cluster should run only one CNI:
Flannel
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Weave Net
kubectl apply -f "https://github.com/weaveworks/weave/releases/download/latest_release/weave-daemonset-k8s.yaml"
Testing Your Cluster
Deploy a Test Application
# Deploy nginx
kubectl create deployment nginx --image=nginx
# Expose the service
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the service
kubectl get svc nginx
# Test connectivity
kubectl run test-pod --image=busybox --rm -it --restart=Never -- wget -O- nginx
Test Cilium Features
# Test connectivity between pods
kubectl run test-pod --image=busybox --rm -it --restart=Never -- sh
# Inside the pod, test DNS
nslookup kubernetes.default
# Test network policies (with the default-deny policy applied, this should time out)
kubectl run test-pod --image=busybox --rm -it --restart=Never -- wget -T 5 -O- nginx
Conclusion
Congratulations! You've successfully set up a single-node Kubernetes cluster on Ubuntu using kubeadm and supercharged it with Cilium CNI for next-gen networking and observability. Whether you're a developer, DevOps engineer, or just curious about K8s, this setup provides a fantastic local playground.
Key Takeaways
- Complete Setup: Full Kubernetes cluster with Cilium CNI
- eBPF Networking: Next-generation networking capabilities
- Local Development: Perfect environment for learning and testing
- Production Path: A foundation you can extend toward real production deployments
Next Steps
Now that you have a working cluster, explore these advanced topics:
- Deploy Applications: Try deploying real applications
- Set Up Monitoring: Install Prometheus and Grafana
- Configure Ingress: Set up NGINX Ingress Controller
- Enable Security: Implement network policies and RBAC
- Scale Your Knowledge: Learn about multi-node clusters
Tags: #Kubernetes #DevOps #Linux #Cilium #K8sLab #ContainerOrchestration #Networking #eBPF