k3s on Unraid — VM Setup Guide
Bash + PowerShell Edition
A complete guide for deploying two Ubuntu Server VMs on Unraid using cloud-init images and bootstrapping a 2-node k3s Kubernetes cluster.
Prerequisites
- Unraid 6.x with VM Manager enabled (Settings → VM Manager → Enable VMs: Yes)
- 16 GB RAM (10 GB available for VMs after Unraid + Docker)
- CPU: Ryzen 5 6c/12t
- Network: default bridge (br0)
- SSH/terminal access to Unraid
- Windows workstation with PowerShell and OpenSSH
Phase 1: Generate SSH Key Pair
Run this on your Windows workstation in PowerShell.
PowerShell:
# Create .ssh directory (ignore error if it already exists)
mkdir "$HOME\.ssh" -ErrorAction SilentlyContinue
# Generate the key pair
ssh-keygen -t ed25519 -C "k3s-lab" -f "$HOME\.ssh\k3s_lab"
# Display the public key — copy this for the cloud-init configs
cat "$HOME\.ssh\k3s_lab.pub"
Copy the public key output. You’ll paste it into the cloud-init user-data files in Phase 3.
Phase 2: Download the Ubuntu Cloud Image
Cloud images are pre-built minimal Ubuntu installs designed for automated provisioning — much lighter than a full ISO.
Bash (Unraid terminal):
mkdir -p /mnt/cache/isos/cloud-images
cd /mnt/cache/isos/cloud-images
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
Note: The `.img` file is actually qcow2 format despite the extension. Unraid handles it fine.
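You can confirm that claim without any extra tools: qcow2 images begin with the 4-byte magic `0x51 0x46 0x49 0xFB` ("QFI" followed by `0xFB`). A minimal sketch, assuming the download path above:

```shell
# is_qcow2: report whether a disk image starts with the printable "QFI"
# portion of the 4-byte qcow2 magic (0x51 0x46 0x49 0xFB)
is_qcow2() {
  if [ "$(head -c 3 "$1" 2>/dev/null)" = "QFI" ]; then
    echo "qcow2"
  else
    echo "not qcow2"
  fi
}

is_qcow2 /mnt/cache/isos/cloud-images/noble-server-cloudimg-amd64.img
```

`qemu-img info <file>` reports the same thing in its `file format:` line if you prefer the official tool.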
Phase 3: Create Cloud-Init Configuration Files
Cloud-init reads config files from a small ISO attached to the VM. The files must have no file extension — just user-data, meta-data, and network-config.
3.1 — Server Node Configs
Bash (Unraid terminal):
mkdir -p /mnt/cache/isos/cloud-init/k3s-server
user-data — save to /mnt/cache/isos/cloud-init/k3s-server/user-data:
#cloud-config
hostname: k3s-server
manage_etc_hosts: true
fqdn: k3s-server.local

users:
  - name: timmy
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    lock_passwd: false
    ssh_authorized_keys:
      - PASTE_YOUR_PUBLIC_KEY_HERE

chpasswd:
  list: |
    timmy:changeme123
  expire: false

package_update: true
package_upgrade: true

packages:
  - qemu-guest-agent
  - curl
  - open-iscsi
  - nfs-common
  - net-tools
  - htop

runcmd:
  - systemctl enable --now qemu-guest-agent

power_state:
  mode: reboot
  condition: true
meta-data — save to /mnt/cache/isos/cloud-init/k3s-server/meta-data:
instance-id: k3s-server-001
local-hostname: k3s-server
network-config — save to /mnt/cache/isos/cloud-init/k3s-server/network-config:
version: 2
ethernets:
  enp1s0:
    dhcp4: true
Edit the user-data file to paste your public key:
nano /mnt/cache/isos/cloud-init/k3s-server/user-data
3.2 — Agent Node Configs
Bash (Unraid terminal):
mkdir -p /mnt/cache/isos/cloud-init/k3s-agent
Create the same three files with these changes:
- hostname: k3s-agent
- fqdn: k3s-agent.local
- instance-id: k3s-agent-001
- local-hostname: k3s-agent
Don’t forget to paste your public key and edit:
nano /mnt/cache/isos/cloud-init/k3s-agent/user-data
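Before building the ISOs in Phase 4, a quick sanity pass over both seed directories is worthwhile: exact filenames and the `#cloud-config` header are the two things cloud-init is pickiest about. A minimal sketch, assuming the paths above:

```shell
# check_seed: verify a cloud-init seed directory contains the three expected
# files (exact names, no extensions) and that user-data starts with #cloud-config
check_seed() {
  local d="$1" f
  for f in user-data meta-data network-config; do
    [ -f "$d/$f" ] || { echo "FAIL: missing $f in $d"; return 1; }
  done
  [ "$(head -n 1 "$d/user-data")" = "#cloud-config" ] \
    || { echo "FAIL: $d/user-data must start with #cloud-config"; return 1; }
  echo "OK: $d"
}

check_seed /mnt/cache/isos/cloud-init/k3s-server || echo "fix k3s-server before Phase 4"
check_seed /mnt/cache/isos/cloud-init/k3s-agent || echo "fix k3s-agent before Phase 4"
```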
Phase 4: Build Cloud-Init ISOs
Unraid doesn’t ship genisoimage or mkisofs, so we use a throwaway Docker container. It installs cdrkit, builds the ISO, and cleans itself up.
Important: Use `/mnt/cache/` paths, not `/mnt/user/`. Docker containers can't see through Unraid's FUSE-based `/mnt/user/` virtual filesystem.
Bash (Unraid terminal):
# Build server ISO
docker run --rm -v /mnt/cache/isos/cloud-init:/data alpine:latest sh -c \
"apk add --no-cache cdrkit && \
genisoimage -output /data/k3s-server-cidata.iso \
-volid cidata -joliet -rock \
/data/k3s-server/user-data \
/data/k3s-server/meta-data \
/data/k3s-server/network-config"
# Build agent ISO
docker run --rm -v /mnt/cache/isos/cloud-init:/data alpine:latest sh -c \
"apk add --no-cache cdrkit && \
genisoimage -output /data/k3s-agent-cidata.iso \
-volid cidata -joliet -rock \
/data/k3s-agent/user-data \
/data/k3s-agent/meta-data \
/data/k3s-agent/network-config"
Verify the ISOs were created:
ls -la /mnt/cache/isos/cloud-init/*.iso
The volume ID must be `cidata`: cloud-init looks for this exact label.
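Unraid doesn't ship `isoinfo`, but you can read the label back without another container: the ISO9660 primary volume descriptor sits at sector 16 (2048-byte sectors), with the 32-byte volume identifier at bytes 40-71. A dd-based sketch:

```shell
# read_volid: extract the ISO9660 volume identifier straight from the primary
# volume descriptor (sector 16; volume id occupies bytes 40-71, space-padded)
read_volid() {
  dd if="$1" bs=1 skip=$((16 * 2048 + 40)) count=32 2>/dev/null | tr -d ' \000'
}

read_volid /mnt/cache/isos/cloud-init/k3s-server-cidata.iso   # should print: cidata
```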
Phase 5: Create VM Disk Images
Create thin-provisioned 20 GB disks backed by the cloud image.
Bash (Unraid terminal):
mkdir -p /mnt/user/domains/k3s-server
mkdir -p /mnt/user/domains/k3s-agent
# Server disk
qemu-img create -f qcow2 \
-b /mnt/cache/isos/cloud-images/noble-server-cloudimg-amd64.img \
-F qcow2 /mnt/user/domains/k3s-server/vdisk1.img 20G
# Agent disk
qemu-img create -f qcow2 \
-b /mnt/cache/isos/cloud-images/noble-server-cloudimg-amd64.img \
-F qcow2 /mnt/user/domains/k3s-agent/vdisk1.img 20G
Phase 6: Create the VMs in Unraid
6.1 — CPU Thread Pairing
Verify your thread pairing before pinning:
cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -u
On a Ryzen 5 (6c/12t), threads pair with an offset of 6:
| Core | Threads |
|---|---|
| Core 0 | 0, 6 |
| Core 1 | 1, 7 |
| Core 2 | 2, 8 |
| Core 3 | 3, 9 |
| Core 4 | 4, 10 |
| Core 5 | 5, 11 |
6.2 — CPU Pinning Plan
Core 0 (threads 0,6) → Unraid OS + Docker (reserved — never assign to VMs)
Core 1 (threads 1,7) → k3s-server VM
Core 2 (threads 2,8) → k3s-agent VM
Cores 3-5 (threads 3,9,4,10,5,11) → Free for Docker containers
6.3 — Server Node VM
In Unraid web UI: VMs → Add VM → Linux
| Setting | Value |
|---|---|
| Name | k3s-server |
| CPU Mode | Host Passthrough |
| Logical CPUs | Check threads 1 and 7 |
| Initial Memory | 3072 MB (3 GB) |
| Max Memory | 3072 MB |
| Machine | Q35-9.0 (latest available) |
| BIOS | SeaBIOS |
| OS Install ISO | (leave empty) |
| Primary vDisk | Manual: /mnt/user/domains/k3s-server/vdisk1.img |
| Primary vDisk Bus | VirtIO |
| CD-ROM ISO | /mnt/user/isos/cloud-init/k3s-server-cidata.iso |
| CD-ROM Bus | SATA |
| Network Bridge | br0 |
| Network Model | virtio-net |
6.4 — Agent Node VM
Same settings with these changes:
| Setting | Value |
|---|---|
| Name | k3s-agent |
| Logical CPUs | Check threads 2 and 8 |
| Primary vDisk | Manual: /mnt/user/domains/k3s-agent/vdisk1.img |
| CD-ROM ISO | /mnt/user/isos/cloud-init/k3s-agent-cidata.iso |
Phase 7: Boot and Verify the VMs
7.1 — Start Both VMs
Start k3s-server first, then k3s-agent. Cloud-init takes 2-3 minutes on first boot.
7.2 — Find VM IP Addresses
Bash (Unraid terminal):
virsh domifaddr k3s-server
virsh domifaddr k3s-agent
Alternative subnet scan:
nmap -sn 192.168.1.0/24
Set DHCP reservations in your UniFi controller once you have IPs. Kubernetes doesn’t handle node IP changes gracefully.
7.3 — SSH In and Verify
PowerShell (Windows workstation):
ssh -i "$HOME\.ssh\k3s_lab" timmy@<k3s-server-ip>
Bash (inside the VM):
cloud-init status --long
# Should show: status: done
hostname
ping <k3s-agent-ip>
7.4 — Remove Cloud-Init ISO (Optional)
- Stop the VM
- Edit VM → Remove CD-ROM ISO path (set to “None”)
- Start again
Phase 8: Install k3s
8.1 — Install k3s Server
PowerShell → SSH in:
ssh -i "$HOME\.ssh\k3s_lab" timmy@<k3s-server-ip>
Bash (inside k3s-server VM):
curl -sfL https://get.k3s.io | sh -s - server \
--write-kubeconfig-mode 644 \
--tls-san <k3s-server-ip> \
--node-name k3s-server
Flags:
| Flag | Purpose |
|---|---|
| `--write-kubeconfig-mode 644` | Makes the kubeconfig readable without sudo |
| `--tls-san <k3s-server-ip>` | Adds the IP as a SAN on the API server cert for remote access |
| `--node-name k3s-server` | Sets an explicit node name |
Verify and grab the join token:
sudo systemctl status k3s
kubectl get nodes
sudo cat /var/lib/rancher/k3s/server/node-token
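The token is a single line of the form `K10<hash>::server:<secret>`, and a stray newline or space from copy/paste is a classic cause of join failures. A small shape check, as a sketch:

```shell
# token_ok: basic sanity check on a k3s join token -- catches copy/paste
# whitespace and obviously wrong values before they reach the agent install
token_ok() {
  case "$1" in
    *[[:space:]]*)   echo "token contains whitespace"; return 1 ;;
    K10*::server:*)  echo "token looks OK" ;;
    *)               echo "unexpected token format"; return 1 ;;
  esac
}

# Usage on the server node:
#   token_ok "$(sudo cat /var/lib/rancher/k3s/server/node-token)"
```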
8.2 — Install k3s Agent
PowerShell → SSH in:
ssh -i "$HOME\.ssh\k3s_lab" timmy@<k3s-agent-ip>
Bash (inside k3s-agent VM):
curl -sfL https://get.k3s.io | K3S_URL=https://<k3s-server-ip>:6443 \
K3S_TOKEN=<paste-your-token-here> \
sh -s - agent \
--node-name k3s-agent
8.3 — Verify the Cluster
From the server node:
kubectl get nodes
Expected:
NAME STATUS ROLES AGE VERSION
k3s-server Ready control-plane,master 5m v1.31.x+k3s1
k3s-agent Ready <none> 1m v1.31.x+k3s1
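If you script against this output, note that a plain grep for `Ready` also matches `NotReady`; counting the STATUS column is safer. A small helper, as a sketch:

```shell
# ready_count: count nodes whose STATUS column is exactly "Ready"
# (a naive grep for "Ready" would also match "NotReady")
ready_count() { awk '$2 == "Ready" { n++ } END { print n + 0 }'; }

# Usage on a live cluster:
#   kubectl get nodes --no-headers | ready_count   # 2 once both nodes have joined
```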
Phase 9: Configure kubectl on Windows
9.1 — Copy Kubeconfig
PowerShell:
mkdir "$HOME\.kube" -ErrorAction SilentlyContinue
scp -i "$HOME\.ssh\k3s_lab" timmy@<k3s-server-ip>:/etc/rancher/k3s/k3s.yaml "$HOME\.kube\k3s-config"
9.2 — Update Server Address
PowerShell:
(Get-Content "$HOME\.kube\k3s-config") -replace '127\.0\.0\.1', '<k3s-server-ip>' | Set-Content "$HOME\.kube\k3s-config"
9.3 — Set KUBECONFIG
PowerShell (current session):
$env:KUBECONFIG = "$HOME\.kube\k3s-config"
PowerShell (persist across sessions):
[System.Environment]::SetEnvironmentVariable('KUBECONFIG', "$HOME\.kube\k3s-config", 'User')
Restart PowerShell after setting the persistent variable.
9.4 — Verify
PowerShell:
kubectl get nodes
kubectl get pods -A
Phase 10: Smoke Test
10.1 — Deploy nginx
PowerShell:
kubectl create deployment nginx --image=nginx:alpine --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
10.2 — Verify
PowerShell:
kubectl get pods -o wide
kubectl get svc nginx
The service output shows something like 80:31234/TCP — the number after the colon is your NodePort.
Invoke-WebRequest http://<k3s-server-ip>:<nodeport>
Invoke-WebRequest http://<k3s-agent-ip>:<nodeport>
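Rather than reading the NodePort off the table, you can extract it programmatically: kubectl's jsonpath output (`kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'`) gives it directly, or you can parse the PORT(S) column yourself. A sed-based sketch of the latter:

```shell
# extract_nodeport: pull the node port out of a "PORT:NODEPORT/PROTO" value,
# e.g. "80:31234/TCP" -> 31234
extract_nodeport() { sed -E 's#^[0-9]+:([0-9]+)/.*#\1#' <<< "$1"; }

extract_nodeport "80:31234/TCP"   # prints 31234
```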
10.3 — Clean Up
kubectl delete deployment nginx
kubectl delete svc nginx
Troubleshooting
Cloud-init didn’t run / VM won’t boot
- Verify the ISO volume label is exactly `cidata` (case-sensitive)
- Config files must have no extension: `user-data`, not `user-data.yml`
- `user-data` must start with `#cloud-config` on the very first line
- Use Unraid's VNC console to see boot messages
Docker can’t see cloud-init files
- Use `/mnt/cache/` paths, not `/mnt/user/` (Docker can't see through Unraid's FUSE filesystem)
- Verify the files exist: `ls -la /mnt/cache/isos/cloud-init/k3s-server/`
Nodes not joining
- Ensure both VMs can ping each other
- Check port 6443 is open on the server: `sudo ss -tlnp | grep 6443`
- Verify the token has no extra whitespace
- Check agent logs: `sudo journalctl -u k3s-agent -f`
kubectl shows NotReady
- Wait 30-60 seconds for CNI setup
- Check system pods: `kubectl get pods -n kube-system`
- Check Flannel: `sudo journalctl -u k3s -f | grep flannel`
kubectl on Windows can’t connect
- Verify the env var is set: `echo $env:KUBECONFIG`
- Check the server address was updated: `cat "$HOME\.kube\k3s-config"` should show `https://<k3s-server-ip>:6443`, not `127.0.0.1`
- Restart PowerShell after setting the persistent env var
Resource Summary
| Component | vCPUs (threads) | RAM | Disk |
|---|---|---|---|
| Unraid + Docker | 0, 6 (reserved) | ~4 GB | — |
| k3s-server | 1, 7 | 3 GB | 20 GB thin |
| k3s-agent | 2, 8 | 3 GB | 20 GB thin |
| Free (Docker) | 3,9,4,10,5,11 | ~6 GB | — |
| Total | 12 threads | 16 GB | — |
Next Steps
- Helm — Install Helm and deploy charts
- Persistent storage — Set up NFS from Unraid as PersistentVolumes
- Ingress — k3s ships with Traefik; configure ingress routes
- GitOps — Install ArgoCD or Flux for declarative deployments
- Monitoring — Deploy Prometheus + Grafana via Helm
- Tailscale — Access your cluster remotely via existing Tailscale setup