| author | Paul Buetow <paul@buetow.org> | 2026-03-18 22:58:42 +0200 |
|---|---|---|
| committer | Paul Buetow <paul@buetow.org> | 2026-03-18 22:58:42 +0200 |
| commit | 53da530eeb5016ec064d231d6be7aba08bd844d7 (patch) | |
| tree | b075a40aa6ed31f9422d19f82b1c0135158482f2 /prompts | |
| parent | 22d546124009c5907145318647cafd1659e2bc0b (diff) | |
f3: add fourth host, migrate FreeBSD VM from f0, update docs, zrepl, apcupsd, WireGuard
Diffstat (limited to 'prompts')
| -rw-r--r-- | prompts/skills/f3s/SKILL.md | 7 |
| -rw-r--r-- | prompts/skills/f3s/references/freebsd-setup.md | 35 |
| -rw-r--r-- | prompts/skills/f3s/references/hardware.md | 3 |
| -rw-r--r-- | prompts/skills/f3s/references/rocky-linux-vms.md | 57 |
| -rw-r--r-- | prompts/skills/f3s/references/storage.md | 66 |
| -rw-r--r-- | prompts/skills/f3s/references/ups-power.md | 8 |
| -rw-r--r-- | prompts/skills/f3s/references/wireguard.md | 31 |
7 files changed, 189 insertions, 18 deletions
diff --git a/prompts/skills/f3s/SKILL.md b/prompts/skills/f3s/SKILL.md
index 2d5f806..33769c2 100644
--- a/prompts/skills/f3s/SKILL.md
+++ b/prompts/skills/f3s/SKILL.md
@@ -1,11 +1,11 @@
 ---
 name: f3s
-description: Reference skill for the f3s homelab—three Beelink S12 Pro hosts (f0/f1/f2) running FreeBSD with Rocky Linux Bhyve VMs (r0/r1/r2) and a k3s Kubernetes cluster. Use when troubleshooting or making configuration decisions for the f3s setup.
+description: Reference skill for the f3s homelab—four Beelink S12 Pro hosts (f0/f1/f2/f3) running FreeBSD with Rocky Linux Bhyve VMs and a k3s Kubernetes cluster. f0/f1/f2 run r0/r1/r2 k3s nodes; f3 is standalone bhyve only (not part of k3s). Use when troubleshooting or making configuration decisions for the f3s setup.
 ---

 # f3s Homelab Reference

-**f3s** = **f**reeBSD + **k3s**. Three physical Beelink S12 Pro mini-PCs (Intel N100) running FreeBSD as the base OS, each hosting a Rocky Linux 9 bhyve VM, forming a 3-node HA k3s Kubernetes cluster.
+**f3s** = **f**reeBSD + **k3s**. Four physical Beelink S12 Pro mini-PCs (Intel N100) running FreeBSD as the base OS. f0/f1/f2 each host a Rocky Linux 9 bhyve VM forming a 3-node HA k3s Kubernetes cluster. f3 is a standalone host for bhyve VMs only — not part of the k3s cluster.
 ## When to Use
@@ -20,7 +20,7 @@ Detailed reference documentation is in the `references/` subfolder:
 - [Hardware](references/hardware.md) — Beelink S12 Pro specs, network switch, IPs, MAC addresses, Wake-on-LAN
 - [FreeBSD Setup](references/freebsd-setup.md) — Base OS install, packages, ZFS snapshots, configuration
 - [UPS & Power](references/ups-power.md) — APC BX750MI, apcupsd config on f0/f1/f2
-- [Rocky Linux VMs](references/rocky-linux-vms.md) — Bhyve, vm-bhyve, VM config, NVMe disk fix
+- [Rocky Linux VMs](references/rocky-linux-vms.md) — Bhyve, vm-bhyve, VM config, NVMe disk fix; FreeBSD VM on f3 (migrated from f0)
 - [WireGuard Mesh](references/wireguard.md) — Mesh topology, IP assignments, peer configs
 - [Storage](references/storage.md) — ZFS (zdata), CARP, NFS over stunnel, zrepl replication
 - [k3s Setup](references/k3s-setup.md) — HA k3s cluster, etcd, node IPs, kubeconfig, ArgoCD
@@ -33,6 +33,7 @@ Detailed reference documentation is in the `references/` subfolder:
 | f0 | FreeBSD host | 192.168.1.130 | 192.168.2.130 |
 | f1 | FreeBSD host | 192.168.1.131 | 192.168.2.131 |
 | f2 | FreeBSD host | 192.168.1.132 | 192.168.2.132 |
+| f3 | FreeBSD host (standalone bhyve, not k3s) | 192.168.1.133 | 192.168.2.133 |
 | r0 | Rocky Linux VM on f0 | 192.168.1.120 | 192.168.2.120 |
 | r1 | Rocky Linux VM on f1 | 192.168.1.121 | 192.168.2.121 |
 | r2 | Rocky Linux VM on f2 | 192.168.1.122 | 192.168.2.122 |
diff --git a/prompts/skills/f3s/references/freebsd-setup.md b/prompts/skills/f3s/references/freebsd-setup.md
index d2712fc..d55fb8f 100644
--- a/prompts/skills/f3s/references/freebsd-setup.md
+++ b/prompts/skills/f3s/references/freebsd-setup.md
@@ -49,7 +49,37 @@ doas vm start rocky
 # kubectl get nodes (from laptop — node should be Ready)
 ```

-Breaking changes in 15.0 to watch for:
+## Slow SSH Login / DNS Troubleshooting
+
+If SSH logins take ~30 seconds, the cause is reverse DNS lookup timing out.
+Root cause on bhyve VMs: SLAAC (Router Advertisement RDNSS) injects an unreachable IPv6 nameserver into `/etc/resolv.conf` via `resolvconf`, and it's queried first.
+
+### Proper fix: `/etc/resolvconf.conf`
+
+Mark the VM's NIC as `private_interfaces` so SLAAC DNS is not added globally, and pin the IPv4 nameserver:
+
+```sh
+# On bhyve VMs — interface name is typically vtnet0
+cat <<EOF | doas tee /etc/resolvconf.conf
+# Statically configure nameserver; mark vtnet0 as private so SLAAC-provided
+# IPv6 DNS (from Router Advertisement RDNSS) is not added globally.
+name_servers="192.168.1.1"
+private_interfaces="vtnet0"
+EOF
+doas resolvconf -u  # regenerate /etc/resolv.conf immediately
+```
+
+On **FreeBSD hosts** (f0–f3), the interface is `re0` instead of `vtnet0`.
+
+### Belt-and-suspenders: disable reverse DNS in sshd
+
+```sh
+# In /etc/ssh/sshd_config: set UseDNS no
+doas service sshd restart
+```
+
+This was observed on `freebsd.lan` (FreeBSD bhyve VM on f3): `/etc/resolv.conf` had only `fd22:c702:acb7::1` (IPv6, unreachable), causing a 30-second DNS timeout on every SSH login.
+
+## Breaking Changes in 15.0 to Watch For

 - **bhyve PCI BARs**: if VM fails to boot, add `pci.enable_bars='true'` to `/zroot/bhyve/rocky/rocky.conf`
 - **NFS privileged ports**: FreeBSD 15.0 sets `vfs.nfsd.nfs_privport=1` by default, blocking NFS clients connecting via stunnel (unprivileged ports). Fix: add `vfs.nfsd.nfs_privport=0` to `/etc/sysctl.conf` on each f-host, then `doas sysctl vfs.nfsd.nfs_privport=0` to apply immediately, and remount NFS on r-hosts with `mount -a`.
 - **WireGuard interface address**: FreeBSD 15.0 requires a prefix length when setting interface addresses. Add `/32` to IPv4 `Address` lines in `/usr/local/etc/wireguard/wg0.conf` (e.g. `Address = 192.168.2.130/32`). Without this, `service wireguard start` fails with "setting interface address without mask is no longer supported".
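A quick way to confirm whether the SLAAC-injected server is still first in the resolver order described above (the file name and contents here are a local sample, not taken from a live host):

```shell
# Sample resolv.conf reproducing the failure mode: SLAAC-injected IPv6 first.
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver fd22:c702:acb7::1
nameserver 192.168.1.1
EOF
# Print the nameserver that will be queried first:
awk '/^nameserver/ { print $2; exit }' /tmp/resolv.conf.sample
# → fd22:c702:acb7::1, i.e. the unreachable IPv6 server would be tried first
```

On a real host, point the `awk` one-liner at `/etc/resolv.conf` after `doas resolvconf -u`; the first `nameserver` entry should then be `192.168.1.1`.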
@@ -58,11 +88,12 @@ Current version: **FreeBSD 15.0-RELEASE** (as of Part 8, upgraded from 14.3).

 ## /etc/hosts

-All three FreeBSD hosts and Rocky VMs are in `/etc/hosts` on each node:
+All four FreeBSD hosts, Rocky VMs, and the FreeBSD bhyve VM on f3 are in `/etc/hosts` on each node:

 ```
 192.168.1.130 f0 f0.lan f0.lan.buetow.org
 192.168.1.131 f1 f1.lan f1.lan.buetow.org
 192.168.1.132 f2 f2.lan f2.lan.buetow.org
+192.168.1.133 f3 f3.lan f3.lan.buetow.org
 192.168.1.120 r0 r0.lan r0.lan.buetow.org
 192.168.1.121 r1 r1.lan r1.lan.buetow.org
 192.168.1.122 r2 r2.lan r2.lan.buetow.org
diff --git a/prompts/skills/f3s/references/hardware.md b/prompts/skills/f3s/references/hardware.md
index 209f96e..fa7252e 100644
--- a/prompts/skills/f3s/references/hardware.md
+++ b/prompts/skills/f3s/references/hardware.md
@@ -2,7 +2,7 @@

 ## Physical Nodes

-Three **Beelink S12 Pro** mini-PCs with **Intel N100** CPUs.
+Four **Beelink S12 Pro** mini-PCs with **Intel N100** CPUs. f0/f1/f2 each run a Rocky Linux bhyve VM as k3s nodes. f3 is a standalone bhyve host — not part of the k3s cluster.

 ### Specs (per node)

@@ -34,6 +34,7 @@ MAC addresses:
 | f0 | e8:ff:1e:d7:1c:ac |
 | f1 | e8:ff:1e:d7:1e:44 |
 | f2 | e8:ff:1e:d7:1c:a0 |
+| f3 | e8:ff:1e:d7:f3:d7 |

 BIOS requirements for WoL: enable "Wake on LAN", disable "ERP Support", enable "Power on by PCI-E".
diff --git a/prompts/skills/f3s/references/rocky-linux-vms.md b/prompts/skills/f3s/references/rocky-linux-vms.md
index e0700f4..c586390 100644
--- a/prompts/skills/f3s/references/rocky-linux-vms.md
+++ b/prompts/skills/f3s/references/rocky-linux-vms.md
@@ -10,7 +10,7 @@ Tool: **vm-bhyve** (not built into FreeBSD, installed via pkg).
-### Install and initialise (run on each of f0, f1, f2)
+### Install and initialise (run on each of f0, f1, f2, f3)

 ```sh
 doas pkg install vm-bhyve bhyve-firmware
 doas sysrc vm_enable=YES
@@ -165,3 +165,58 @@ doas sockstat -4 | grep 5900  # check VNC port
 > doas kill -9 2086
 > ```
 > **Warning**: Force-killing bhyve with `kill -9` mid-write can corrupt the k3s etcd WAL on the Rocky VM, causing a crash loop on next start. Only use as a last resort, and check etcd health after. See k3s-setup.md for the recovery procedure.
+
+## FreeBSD Bhyve VM on f3
+
+f3 hosts a standalone FreeBSD development VM (not part of k3s). It was migrated from f0 via `zfs send`.
+
+### VM config (`/zroot/bhyve/freebsd/freebsd.conf`)
+
+```
+loader="bhyveload"
+cpu=4
+memory=14G
+network0_type="virtio-net"
+network0_switch="public"
+disk0_type="nvme"
+disk0_name="disk0.img"  # 20GB OS disk
+disk1_type="nvme"
+disk1_name="disk1.img"  # 100GB data disk
+uuid="<unique>"
+network0_mac="<unique>"
+```
+
+- Accessible as `freebsd.lan` (hostname inside the VM)
+- Auto-starts on f3 boot: `vm_list="freebsd"` in `/etc/rc.conf`
+- `zroot/bhyve/freebsd` encrypted with `f3.lan.buetow.org:bhyve.key`
+- Replicated to f2 via zrepl (`f3_to_f2_freebsd` job, every 10 min → `zroot/sink/f3/zroot/bhyve/freebsd`)
+
+### Migration procedure (zfs send)
+
+```sh
+# On source host — snapshot and send
+doas zfs snapshot zroot/bhyve/<vmname>@migrate
+ssh sourcehost 'doas zfs send zroot/bhyve/<vmname>@migrate' | ssh desthost 'doas zfs recv -u zroot/bhyve/<vmname>'
+
+# On dest host — mount and start
+doas zfs mount zroot/bhyve/<vmname>
+doas vm start <vmname>
+```
+
+Note: Non-raw send re-encrypts under the destination's `zroot/bhyve` key automatically.
+
+### Slow SSH login inside the VM
+
+If `ssh freebsd.lan` takes ~30 seconds, SLAAC is injecting an unreachable IPv6 DNS server.
+Fix via `/etc/resolvconf.conf`:
+
+```sh
+# Inside freebsd.lan:
+cat <<EOF | doas tee /etc/resolvconf.conf
+name_servers="192.168.1.1"
+private_interfaces="vtnet0"
+EOF
+doas resolvconf -u
+
+# Also set UseDNS no in /etc/ssh/sshd_config and restart sshd
+doas service sshd restart
+```
diff --git a/prompts/skills/f3s/references/storage.md b/prompts/skills/f3s/references/storage.md
index 77b62e6..a43c2e8 100644
--- a/prompts/skills/f3s/references/storage.md
+++ b/prompts/skills/f3s/references/storage.md
@@ -11,6 +11,7 @@ Note: Original plan was HAST, replaced by **zrepl** (ZFS send/receive) — more
 - **f0**: 512GB M.2 (OS/zroot) + Samsung SSD 870 EVO 1TB (zdata)
 - **f1**: 512GB M.2 (OS/zroot) + Crucial CT1000BX500SSD1 1TB (zdata)
 - **f2**: No second drive (no zdata pool)
+- **f3**: 512GB M.2 (OS/zroot); no zdata pool yet (planned)

 ## ZFS: zdata Pool Setup

@@ -24,6 +25,8 @@ doas zpool create zdata ada1  # ada1 = second SSD
 ## ZFS Encryption Keys (USB Key Storage)

 Encryption keys are stored on USB flash drives (UFS-formatted, mounted at `/keys`).
+All four hosts (f0/f1/f2/f3) have USB keys at `/dev/da0` mounted at `/keys`, each holding
+all 8 key files as cross-host backups.
 ```sh
 # Format and mount USB key (on each node)
@@ -32,15 +35,17 @@ echo '/dev/da0 /keys ufs rw 0 2' | doas tee -a /etc/fstab
 doas mkdir /keys
 doas mount /keys

-# Generate keys (on f0, then copy to f1 and f2)
+# Generate keys (on f0, then copy to f1, f2, f3)
 doas openssl rand -out /keys/f0.lan.buetow.org:bhyve.key 32
 doas openssl rand -out /keys/f1.lan.buetow.org:bhyve.key 32
 doas openssl rand -out /keys/f2.lan.buetow.org:bhyve.key 32
+doas openssl rand -out /keys/f3.lan.buetow.org:bhyve.key 32
 doas openssl rand -out /keys/f0.lan.buetow.org:zdata.key 32
 doas openssl rand -out /keys/f1.lan.buetow.org:zdata.key 32
 doas openssl rand -out /keys/f2.lan.buetow.org:zdata.key 32
+doas openssl rand -out /keys/f3.lan.buetow.org:zdata.key 32
 doas chown root /keys/* && doas chmod 400 /keys/*

-# Copy to f1 and f2 via tarball
+# Copy to f1, f2, f3 via tarball
 ```

 ## ZFS Encryption Setup
@@ -78,6 +83,10 @@ doas sysrc zfskeys_datasets="zdata/enc zdata/enc/nfsdata zroot/bhyve"
 # On f1
 doas sysrc zfskeys_enable=YES
 doas sysrc zfskeys_datasets="zdata/enc zroot/bhyve zdata/sink/f0/zdata/enc/nfsdata"
+
+# On f3 (bhyve VMs only, no zdata pool yet)
+doas sysrc zfskeys_enable=YES
+doas sysrc zfskeys_datasets="zroot/bhyve"
 doas zfs set keylocation=file:///keys/f0.lan.buetow.org:zdata.key \
   zdata/sink/f0/zdata/enc/nfsdata
 ```
@@ -126,11 +135,25 @@ jobs:
       grid: 4x7d | 6x30d
       regex: "^zrepl_.*"

-  - name: f0_to_f1_freebsd
+  # Note: f0_to_f1_freebsd job removed — the FreeBSD VM was migrated to f3.
+  # It is now replicated from f3 → f2 (see f3 zrepl config below).
+```
+
+### f3 configuration (push: freebsd VM → f2)
+
+```yaml
+global:
+  logging:
+    - type: stdout
+      level: info
+      format: human
+
+jobs:
+  - name: f3_to_f2_freebsd
     type: push
     connect:
       type: tcp
-      address: "192.168.2.131:8888"
+      address: "192.168.2.132:8888"  # f2 WireGuard IP
     filesystems:
       "zroot/bhyve/freebsd": true  # development FreeBSD VM
     send:
@@ -138,7 +161,7 @@ jobs:
     snapshotting:
       type: periodic
       prefix: zrepl_
-      interval: 10m  # every 10 minutes
+      interval: 10m
     pruning:
       keep_sender:
         - type: last_n
@@ -154,6 +177,39 @@
       regex: "^zrepl_.*"
 ```

+### f2 configuration (sink for f3's freebsd VM)
+
+f2 has no second drive so the sink lives in `zroot/sink`:
+
+```sh
+doas zfs create zroot/sink
+```
+
+`/usr/local/etc/zrepl/zrepl.yml`:
+
+```yaml
+global:
+  logging:
+    - type: stdout
+      level: info
+      format: human
+
+jobs:
+  - name: sink
+    type: sink
+    serve:
+      type: tcp
+      listen: "192.168.2.132:8888"  # f2 WireGuard IP
+      clients:
+        "192.168.2.133": "f3"
+    recv:
+      placeholder:
+        encryption: inherit
+    root_fs: "zroot/sink"
+```
+
+Replicated path: `zroot/bhyve/freebsd` → `zroot/sink/f3/zroot/bhyve/freebsd`
+
 ### f1 configuration (sink)

 ```sh
diff --git a/prompts/skills/f3s/references/ups-power.md b/prompts/skills/f3s/references/ups-power.md
index 019887f..687b784 100644
--- a/prompts/skills/f3s/references/ups-power.md
+++ b/prompts/skills/f3s/references/ups-power.md
@@ -41,9 +41,9 @@ apcaccess  # full status
 apcaccess -p TIMELEFT  # remaining minutes
 ```

-## `apcupsd` on f1 and f2 (network clients)
+## `apcupsd` on f1, f2, and f3 (network clients)

-`f1` and `f2` query the UPS status from `f0` over the network (port 3551).
+`f1`, `f2`, and `f3` query the UPS status from `f0` over the network (port 3551).
 They are configured to shut down *earlier* than `f0` to avoid losing the UPS status feed.

 ### Config diff from sample (f1 and f2)
@@ -65,10 +65,10 @@ apcaccess | grep Percent  # verify

 ## Shutdown Order

 On power failure, the expected graceful shutdown sequence is:
-1. **f1 and f2** — shut down first (BATTERYLEVEL 10, MINUTES 6)
+1. **f1, f2, and f3** — shut down first (BATTERYLEVEL 10, MINUTES 6)
 2. **f0** — shuts down last (BATTERYLEVEL 5, MINUTES 3)

-This ensures f1/f2 can still reach f0's apcupsd to learn the UPS status before f0 shuts down.
+This ensures f1/f2/f3 can still reach f0's apcupsd to learn the UPS status before f0 shuts down.

 ## Logs

diff --git a/prompts/skills/f3s/references/wireguard.md b/prompts/skills/f3s/references/wireguard.md
index 6ce7a82..18ea898 100644
--- a/prompts/skills/f3s/references/wireguard.md
+++ b/prompts/skills/f3s/references/wireguard.md
@@ -5,7 +5,7 @@ Full-mesh VPN network connecting all f3s infrastructure hosts plus two roaming
 clients.

 **Infrastructure hosts** (full mesh — every host connects to every other):
-- `f0`, `f1`, `f2` — FreeBSD physical nodes (home LAN)
+- `f0`, `f1`, `f2`, `f3` — FreeBSD physical nodes (home LAN)
 - `r0`, `r1`, `r2` — Rocky Linux Bhyve VMs
 - `blowfish`, `fishfinger` — OpenBSD internet gateways (OpenBSD Amsterdam and Hetzner)

@@ -22,6 +22,7 @@ Even `fN <-> rN` tunnels exist (technically redundant since the VM runs on the h
 | f0 | 192.168.2.130 | fd42:beef:cafe:2::130 | FreeBSD host |
 | f1 | 192.168.2.131 | fd42:beef:cafe:2::131 | FreeBSD host |
 | f2 | 192.168.2.132 | fd42:beef:cafe:2::132 | FreeBSD host |
+| f3 | 192.168.2.133 | fd42:beef:cafe:2::133 | FreeBSD host (standalone bhyve) |
 | r0 | 192.168.2.120 | fd42:beef:cafe:2::120 | Rocky VM (k3s node) |
 | r1 | 192.168.2.121 | fd42:beef:cafe:2::121 | Rocky VM (k3s node) |
 | r2 | 192.168.2.122 | fd42:beef:cafe:2::122 | Rocky VM (k3s node) |
@@ -34,7 +35,7 @@ Even `fN <-> rN` tunnels exist (technically redundant since the VM runs on the h

 WireGuard hostnames: `<host>.wg0.wan.buetow.org` (e.g. `f0.wg0.wan.buetow.org`)

-## FreeBSD Setup (f0, f1, f2)
+## FreeBSD Setup (f0, f1, f2, f3)

 ```sh
 doas pkg install wireguard-tools
@@ -171,6 +172,7 @@ Add to `/etc/hosts` on each host (FreeBSD and Rocky Linux):
 192.168.2.130 f0.wg0 f0.wg0.wan.buetow.org
 192.168.2.131 f1.wg0 f1.wg0.wan.buetow.org
 192.168.2.132 f2.wg0 f2.wg0.wan.buetow.org
+192.168.2.133 f3.wg0 f3.wg0.wan.buetow.org
 192.168.2.120 r0.wg0 r0.wg0.wan.buetow.org
 192.168.2.121 r1.wg0 r1.wg0.wan.buetow.org
 192.168.2.122 r2.wg0 r2.wg0.wan.buetow.org
@@ -179,6 +181,7 @@ Add to `/etc/hosts` on each host (FreeBSD and Rocky Linux):
 fd42:beef:cafe:2::130 f0.wg0.wan.buetow.org
 fd42:beef:cafe:2::131 f1.wg0.wan.buetow.org
 fd42:beef:cafe:2::132 f2.wg0.wan.buetow.org
+fd42:beef:cafe:2::133 f3.wg0.wan.buetow.org
 fd42:beef:cafe:2::120 r0.wg0.wan.buetow.org
 fd42:beef:cafe:2::121 r1.wg0.wan.buetow.org
 fd42:beef:cafe:2::122 r2.wg0.wan.buetow.org
@@ -186,6 +189,17 @@ fd42:beef:cafe:2::110 blowfish.wg0.wan.buetow.org
 fd42:beef:cafe:2::111 fishfinger.wg0.wan.buetow.org
 ```

+## Troubleshooting: `reload` vs `restart` When Adding New Peers
+
+`service wireguard reload` (used by the mesh generator) updates peer config but **does NOT add routes** for new peers. After adding a new host to the mesh, the other hosts need a full restart to get the new routes:
+
+```sh
+# On each existing host that had a new peer added via reload:
+doas service wireguard restart
+```
+
+**Symptom**: WireGuard handshake succeeds (both sides show `latest handshake`) but TCP/ICMP traffic doesn't flow — confirmed by `netstat -rn | grep 192.168.2.NNN` returning no results.
+
 ## WireGuard Mesh Generator

 Manually creating 8+ wg0.conf files is error-prone. A Ruby script automates this:
@@ -201,6 +215,19 @@ Config file: `wireguardmeshgenerator.yaml` — defines all hosts, their LAN/WG I

 The script generates all configs and can push them via SSH.
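The scale that makes hand-editing error-prone can be sketched with the mesh arithmetic. The count of nine assumes the post-commit full mesh (f0/f1/f2/f3, r0/r1/r2, blowfish, fishfinger), with the two roaming clients excluded:

```shell
# Full-mesh bookkeeping after adding f3.
hosts=9
echo "peer sections per wg0.conf: $(( hosts - 1 ))"                  # prints 8
echo "unique point-to-point tunnels: $(( hosts * (hosts - 1) / 2 ))" # prints 36
```

Adding one host therefore touches every other host's config, which is exactly the situation where a `reload` alone leaves routes missing and a `restart` is needed.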
+### FreeBSD 15.0 fix applied to generator
+
+`wireguardmeshgenerator.rb` line 151 was updated from `/24` to `/32` for FreeBSD hosts:
+
+```ruby
+# Before (broken on FreeBSD 15.0 — start fails with "setting interface address without mask"):
+ipv4_with_mask = hosts[myself]['os'] == 'FreeBSD' ? "#{ipv4}/24" : ipv4
+# After (correct):
+ipv4_with_mask = hosts[myself]['os'] == 'FreeBSD' ? "#{ipv4}/32" : ipv4
+```
+
+Note: `reload` only reconfigures peers/PSKs — it does not change the running interface address. A `restart` is needed to pick up the address change if the interface is already running.
+
 ## Traffic Flows

 | Flow | Purpose |
 |

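Closing aside: the commit adds the same f3 entry to three separate hosts-file blocks, and the f-host addressing scheme is uniform enough to derive all of them from the host index. A sketch (the `n=3` index is the only input; the printed lines mirror the diff, nothing here is new data):

```shell
# Derive the f3 hosts-file entries from the host index (n=3).
n=3
printf '192.168.1.13%d f%d f%d.lan f%d.lan.buetow.org\n' "$n" "$n" "$n" "$n"
printf '192.168.2.13%d f%d.wg0 f%d.wg0.wan.buetow.org\n' "$n" "$n" "$n"
printf 'fd42:beef:cafe:2::13%d f%d.wg0.wan.buetow.org\n' "$n" "$n"
# Prints the three f3 lines this commit adds to the /etc/hosts blocks.
```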