Setting Up Kubernetes at Home: Complete K3s Guide for Self-Hosters
Want to run a production-grade Kubernetes cluster on your home server? K3s makes it possible—without the complexity and resource overhead of full Kubernetes.
In this guide, you’ll learn how to deploy K3s, configure storage, set up ingress, and run your first applications. Whether you’re learning DevOps skills or building a serious home lab, K3s is the perfect starting point.
What is K3s and Why Use It?
K3s is a lightweight, certified Kubernetes distribution designed for resource-constrained environments. Created by Rancher (now part of SUSE), it packages everything you need into a single 50MB binary.
K3s vs Full Kubernetes
| Feature | Full Kubernetes | K3s |
|---|---|---|
| Binary size | ~1GB | ~50MB |
| Memory usage | 2-4GB minimum | 512MB-1GB |
| Dependencies | Many (etcd, etc.) | Single binary |
| Setup time | Hours | 5 minutes |
| Features | Full feature set | Production-ready subset |
K3s removes or simplifies:
- Legacy/cloud-specific features
- In-tree storage drivers (uses external)
- Alpha features
- Non-critical admission controllers
What you still get:
- Full API compatibility
- kubectl support
- Helm charts work perfectly
- Production-ready features (RBAC, ingress, load balancing)
Why Run Kubernetes at Home?
- Learning: Master the most in-demand DevOps skill
- Resume boost: Real Kubernetes experience beats online tutorials
- Powerful automation: Declarative configs, auto-scaling, self-healing
- Skill transfer: Same commands work on AWS EKS, GKE, AKS
- Future-proof: Industry standard for container orchestration
Prerequisites
Hardware Requirements
Minimum (single node):
- 2 CPU cores
- 2GB RAM
- 20GB storage
- Ubuntu 22.04 LTS (or similar)
Recommended (multi-node cluster):
- 3+ nodes (HA setup)
- 4GB+ RAM per node
- Fast storage (SSD/NVMe)
- Gigabit networking
Good starter hardware:
- 3x Raspberry Pi 4 (4GB RAM)
- 2x Intel N100 mini PCs
- 1x AMD Ryzen mini PC (can run multiple nodes via VMs)
Software Prerequisites
```bash
# Update system
sudo apt update && sudo apt upgrade -y

# Install required packages
sudo apt install -y curl wget

# Disable swap (required for Kubernetes)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```
Installing K3s (Single Node)
Quick Install
The fastest way to get started:
```bash
# Install K3s server
curl -sfL https://get.k3s.io | sh -

# Check status
sudo systemctl status k3s

# Verify cluster
sudo k3s kubectl get nodes
```
That’s it. You now have a working Kubernetes cluster.
Configure kubectl
K3s includes kubectl, but you’ll want to use it without sudo:
```bash
# Create kubeconfig directory
mkdir -p ~/.kube

# Copy K3s config
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

# Fix permissions
sudo chown $USER:$USER ~/.kube/config
chmod 600 ~/.kube/config

# Test
kubectl get nodes
```
You should see your node in Ready state.
Understanding K3s Components
K3s automatically installs:
- Traefik (ingress controller)
- ServiceLB (load balancer for bare metal)
- Local path provisioner (storage)
- CoreDNS (cluster DNS)
- Metrics Server (resource monitoring)
Check running pods:
```bash
kubectl get pods -A
```
Building a Multi-Node Cluster
Setting Up the Control Plane
On your first node (master):
```bash
# Install with a static token. Note: --disable traefik skips the bundled
# ingress controller; omit that flag if you want to use Traefik as shown
# later in this guide.
curl -sfL https://get.k3s.io | K3S_TOKEN=your-secret-token sh -s - server \
  --disable traefik \
  --write-kubeconfig-mode 644

# Get the node IP
ip addr show
```
Note the server IP (e.g., 192.168.1.100).
Adding Worker Nodes
On each additional node:
```bash
# Join the cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=your-secret-token \
  K3S_URL=https://192.168.1.100:6443 sh -
```
Verify cluster:
```bash
kubectl get nodes
```
You should see all nodes listed.
High Availability Setup (Advanced)
For production-grade HA with 3+ masters:
```bash
# Install the first master with embedded etcd
curl -sfL https://get.k3s.io | K3S_TOKEN=your-secret-token sh -s - server \
  --cluster-init \
  --disable traefik

# Join additional masters
curl -sfL https://get.k3s.io | K3S_TOKEN=your-secret-token sh -s - server \
  --server https://192.168.1.100:6443
```
This creates a true HA cluster with distributed control plane.
Storage Configuration
Default Local Path Storage
K3s includes a basic storage provisioner:
```yaml
# test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

```bash
kubectl apply -f test-pvc.yaml
kubectl get pvc
```
Limitation: volumes are stored on the node where they were created, so pods using them are pinned to that node and can’t migrate across the cluster.
Better Option: Longhorn
Longhorn provides distributed storage with replication and snapshots.
```bash
# Install Longhorn
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.5.3/deploy/longhorn.yaml

# Wait for installation
kubectl -n longhorn-system get pods -w

# Access UI (port-forward)
kubectl -n longhorn-system port-forward svc/longhorn-frontend 8080:80
```
Open http://localhost:8080 to manage storage.
Set Longhorn as default storage class:
```bash
kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```
Now PVCs automatically use Longhorn.
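You can also name the storage class in the claim itself, which keeps working even if the default changes later. A minimal sketch (the claim name and size here are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # explicit class, independent of the cluster default
  resources:
    requests:
      storage: 2Gi
```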
Ingress Setup
Using Traefik (Default)
K3s includes Traefik, but let’s configure it properly:
```yaml
# whoami-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
    - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
spec:
  rules:
    - host: whoami.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```

```bash
kubectl apply -f whoami-app.yaml

# Test (add to /etc/hosts: 192.168.1.100 whoami.local)
curl http://whoami.local
```
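Traefik’s own CRDs also ship with K3s, so the same routing can alternatively be expressed as an IngressRoute. A sketch, assuming the bundled Traefik and the whoami Service above (the API group varies by Traefik version: `traefik.containo.us/v1alpha1` on older releases, `traefik.io/v1alpha1` on newer ones):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami-route
spec:
  entryPoints:
    - web                          # Traefik's HTTP entry point
  routes:
    - match: Host(`whoami.local`)  # route by hostname
      kind: Rule
      services:
        - name: whoami
          port: 80
```

The standard Ingress resource is more portable; IngressRoute exposes Traefik-specific features like middlewares.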
SSL with cert-manager
```bash
# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.3/cert-manager.yaml

# Wait for pods
kubectl -n cert-manager get pods -w
```
Create Let’s Encrypt issuer:
```yaml
# letsencrypt-prod.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: traefik
```

```bash
kubectl apply -f letsencrypt-prod.yaml
```
Update ingress for HTTPS:
```yaml
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - whoami.yourdomain.com
      secretName: whoami-tls
```
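Putting the pieces together, a complete HTTPS-enabled version of the whoami Ingress might look like this (the hostname and secret name are placeholders for your own domain):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # tells cert-manager to issue the cert
spec:
  tls:
    - hosts:
        - whoami.yourdomain.com
      secretName: whoami-tls    # cert-manager stores the certificate here
  rules:
    - host: whoami.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```

Note that the HTTP-01 challenge requires port 80 of your domain to be reachable from the internet.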
Deploying Your First Application
Example: Self-Hosted Uptime Kuma
```yaml
# uptime-kuma.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          ports:
            - containerPort: 3001
          volumeMounts:
            - name: data
              mountPath: /app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: uptime-kuma-data
---
apiVersion: v1
kind: Service
metadata:
  name: uptime-kuma
spec:
  selector:
    app: uptime-kuma
  ports:
    - port: 3001
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uptime-kuma
spec:
  rules:
    - host: uptime.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: uptime-kuma
                port:
                  number: 3001
```

```bash
kubectl apply -f uptime-kuma.yaml

# Check status
kubectl get pods
kubectl get ingress
```
Access at http://uptime.yourdomain.com
Essential Management Tools
K9s (Interactive CLI)
The best way to manage Kubernetes clusters.
```bash
# Install K9s
curl -sS https://webi.sh/k9s | sh

# Run
k9s
```
Navigate with arrow keys, press 0 to see all namespaces.
Helm (Package Manager)
```bash
# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Example: install Grafana
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana
```
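Charts are customized through values files. For example, to keep Grafana’s data across pod restarts you could pass a small values file; the keys below come from the Grafana chart’s persistence settings, so verify them against `helm show values grafana/grafana` for your chart version:

```yaml
# grafana-values.yaml
persistence:
  enabled: true   # back Grafana's data directory with a PVC
  size: 5Gi       # claim size; served by your default storage class
```

Then install with `helm install grafana grafana/grafana -f grafana-values.yaml`.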
Lens (Desktop GUI)
Download from https://k8slens.dev
Connect to your cluster via the kubeconfig file.
Resource Management
Set Resource Limits
```yaml
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```
- Requests: resources the scheduler reserves for the pod (guaranteed)
- Limits: the maximum the container may use; CPU over the limit is throttled, memory over the limit gets the container killed
Node Affinity
Pin workloads to specific nodes:
```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-with-gpu
```
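For the common case of “run on exactly this node,” the shorter nodeSelector form does the same job (the hostname value is a placeholder for one of your node names):

```yaml
nodeSelector:
  kubernetes.io/hostname: node-with-gpu   # pod only schedules onto this node
```

nodeAffinity is worth the extra verbosity when you need operators like `In` over multiple values or preferred (soft) rules.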
Backup & Disaster Recovery
Backup etcd
K3s uses SQLite by default on single-node installs; back up its data directory (/var/lib/rancher/k3s/server/db) yourself. Clusters running embedded etcd (installed with --cluster-init) get built-in snapshot commands:
```bash
# Create snapshot
sudo k3s etcd-snapshot save

# List snapshots
sudo k3s etcd-snapshot ls

# Restore from snapshot
sudo k3s server --cluster-reset --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/snapshot.db
```
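Snapshots can also be automated: K3s reads server flags from /etc/rancher/k3s/config.yaml, so a schedule and retention policy can be declared there (the cron expression below is an example; restart the k3s service after editing):

```yaml
# /etc/rancher/k3s/config.yaml
etcd-snapshot-schedule-cron: "0 */6 * * *"   # snapshot every 6 hours
etcd-snapshot-retention: 10                  # keep the last 10 snapshots
```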
Velero for Application Backups
```bash
# Install Velero into the cluster. This assumes the velero CLI is already
# installed, and --provider aws requires an S3-compatible bucket plus the
# matching plugin and credentials configured beforehand.
velero install --provider aws --use-volume-snapshots=false

# Back up an entire namespace
velero backup create my-backup --include-namespaces default
```
Monitoring
Prometheus + Grafana
```bash
# Add the Prometheus community repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Install kube-prometheus-stack (includes Grafana)
helm install prometheus prometheus-community/kube-prometheus-stack

# Access Grafana
kubectl port-forward svc/prometheus-grafana 3000:80
```
Default login: admin / prom-operator
Pre-configured dashboards show cluster health, resource usage, and application metrics.
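To scrape metrics from your own applications, kube-prometheus-stack watches ServiceMonitor resources. A sketch, assuming a hypothetical Service labeled `app: my-app` that exposes Prometheus metrics at /metrics on a port named `http`:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    release: prometheus   # default selector matches the Helm release name
spec:
  selector:
    matchLabels:
      app: my-app         # the Service to scrape
  endpoints:
    - port: http          # named port on that Service
      path: /metrics
```

If targets don’t appear, check the stack’s `serviceMonitorSelector` values; by default it only picks up monitors labeled with the release name.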
Common Issues & Solutions
Pod stuck in “Pending”
```bash
# Check events
kubectl describe pod <pod-name>
```
Common causes:
- Insufficient resources
- No matching node selector
- PVC not bound
Traefik not routing
```bash
# Check Traefik logs
kubectl -n kube-system logs -l app.kubernetes.io/name=traefik
```
Ensure your DNS/hosts file points to the cluster IP.
Longhorn volumes not attaching
Ensure open-iscsi is installed on all nodes:
```bash
sudo apt install open-iscsi -y
sudo systemctl enable --now iscsid
```
Best Practices
- Use namespaces to organize applications
- Set resource limits on all deployments
- Enable RBAC for multi-user access
- Backup etcd regularly (automated snapshots)
- Monitor resource usage (Prometheus/Grafana)
- Use Helm charts instead of raw YAML when possible
- Label everything for easy filtering
- Test updates on non-production namespace first
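Several of these practices combine naturally. For instance, a namespace with a ResourceQuota caps what everything inside it can consume (the names, labels, and limits below are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: media
  labels:
    purpose: self-hosted-apps   # labels make filtering easy later
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: media-quota
  namespace: media
spec:
  hard:
    requests.cpu: "2"        # total CPU requests allowed in the namespace
    requests.memory: 4Gi
    limits.cpu: "4"          # total CPU limits allowed
    limits.memory: 8Gi
```

One caveat: once a quota sets CPU or memory, every pod in the namespace must declare requests/limits or it will be rejected.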
Popular Self-Hosted Apps for K3s
Media:
- Plex / Jellyfin
- Sonarr / Radarr
- Nextcloud
Monitoring:
- Uptime Kuma
- Grafana / Prometheus
- Netdata
Development:
- GitLab
- Harbor (container registry)
- Gitea
Productivity:
- Nextcloud
- Paperless-ngx
- Outline wiki
Home Automation:
- Home Assistant (requires special config)
- Node-RED
Should You Use K3s at Home?
Use K3s if:
- You want to learn Kubernetes
- You run 10+ containerized services
- You need high availability
- You want declarative infrastructure
- You’re preparing for DevOps/SRE roles
Stick with Docker Compose if:
- You run <5 services
- You’re comfortable with your current setup
- You don’t need HA or auto-scaling
- Simplicity is more important than features
The truth: K3s has a steeper learning curve. But once configured, it’s more powerful and maintainable than Docker Compose for complex setups.
Next Steps
- Deploy your first app (try Uptime Kuma above)
- Add more nodes for HA
- Set up monitoring (Prometheus + Grafana)
- Configure automated backups (Velero)
- Migrate Docker Compose apps to K3s
- Explore GitOps (ArgoCD for automated deployments)
Conclusion
K3s brings enterprise-grade container orchestration to your home lab—without the complexity of full Kubernetes. With a 5-minute install and powerful features, it’s perfect for learning DevOps skills or running production-grade self-hosted applications.
Start with a single node, deploy a few apps, and expand as you learn. Before you know it, you’ll be running a highly available cluster that rivals cloud infrastructure.
Ready to take your self-hosting to the next level? K3s is waiting.
Note: K3s is a trademark of SUSE. This is an independent tutorial not affiliated with SUSE or Rancher.