# f3 Host Expansion Plan
## Goal
Bring `f3.lan.buetow.org` into the `f3s` environment so it is aligned with the real role of `f2`: a non-`zrepl`, non-CARP, non-NFS-HA FreeBSD node that participates in the common host baseline, WireGuard mesh, UPS shutdown coordination, local virtualization pattern, and monitoring.
This plan intentionally excludes:
- `zrepl`
- CARP
- NFS server HA failover
- `stunnel` NFS server role
- `carpcontrol.sh` and related failover automation
## Addressing Plan
Use the next free numbers in the existing address sequences.
- LAN IPv4: `192.168.1.133/24`
- Host short name: `f3`
- Host FQDN: `f3.lan.buetow.org`
- WireGuard IPv4: `192.168.2.133/24`
- WireGuard IPv6: `fd42:beef:cafe:2::133`
Expected host entries:
```txt
192.168.1.133 f3 f3.lan f3.lan.buetow.org
192.168.2.133 f3.wg0 f3.wg0.wan.buetow.org
fd42:beef:cafe:2::133 f3.wg0 f3.wg0.wan.buetow.org
```
## Scope
### Systems that need direct changes
- `f3`
- `f0`
- `f1`
- `f2`
- `r0`
- `r1`
- `r2`
- OpenBSD edge hosts that carry the static WireGuard host list
- Any admin machine with the same hand-maintained `/etc/hosts` map
- Prometheus scrape configuration
### Assumptions
- The Ethernet interface on `f3` is assumed to be `re0`, matching the earlier nodes. Verify with `ifconfig` before writing `rc.conf`.
- The home router/default gateway remains `192.168.1.1`.
- `f0` remains the UPS master node with the USB-connected APC.
- `f2` does not run `zrepl`, and `f3` should mirror that fact.
- If `f2` runs a local Rocky VM via `vm-bhyve`, `f3` should do the same. If `f2` does not, skip the VM provisioning section.
- If `f2` carries the extra SSD, USB key store, and encrypted datasets in practice, mirror that layout on `f3`; otherwise align with the real host, not the generalized blog wording.
## Global References
Primary local references from the blog series:
- `2024-12-03-f3s-kubernetes-with-freebsd-part-2.gmi.tpl`
- `2025-02-01-f3s-kubernetes-with-freebsd-part-3.gmi.tpl`
- `2025-04-05-f3s-kubernetes-with-freebsd-part-4.gmi.tpl`
- `2025-05-11-f3s-kubernetes-with-freebsd-part-5.gmi.tpl`
- `2025-07-14-f3s-kubernetes-with-freebsd-part-6.gmi.tpl`
- `2025-10-02-f3s-kubernetes-with-freebsd-part-7.gmi.tpl`
- `2025-12-07-f3s-kubernetes-with-freebsd-part-8.gmi.tpl`
This file is the authoritative execution plan for the rollout.
## Step 1: Install FreeBSD on f3 and Obtain Initial Access
Install FreeBSD on `f3` with the same baseline choices used on the original hosts:
- Guided ZFS on root with pool `zroot`
- Unencrypted root
- Enable SSH daemon
- Enable NTP service and time sync
- Enable `powerd`
- Create user `paul`
- Add `paul` to group `wheel`
Initial access can be via:
- the console, or
- a temporary DHCP address assigned by the router
Before making persistent network changes, confirm:
- detected interface name via `ifconfig`
- temporary DHCP lease currently in use
- outbound connectivity and SSH reachability
## Step 2: Convert f3 from DHCP to Static LAN Networking
After logging in over the temporary DHCP address, configure `rc.conf` to match the static pattern used by the other nodes.
First verify the interface name:
```sh
ifconfig
```
Then configure:
```sh
doas sysrc hostname="f3.lan.buetow.org"
doas sysrc ifconfig_re0="inet 192.168.1.133 netmask 255.255.255.0"
doas sysrc defaultrouter="192.168.1.1"
```
Apply the change:
```sh
doas service netif restart
doas service routing restart
```
Reconnect over:
```sh
ssh paul@192.168.1.133
```
Validation:
- `hostname` returns `f3.lan.buetow.org`
- `ifconfig re0` shows `192.168.1.133`
- default route points to `192.168.1.1`
- host is reachable by SSH on the static address
## Step 3: Apply the Common FreeBSD Baseline on f3
Install the common package baseline. Since `doas` is not yet available on a fresh install, run this first `pkg install` as root (for example via `su -`):
```sh
pkg install helix doas zfs-periodic uptimed
```
Apply the baseline settings:
- copy `/usr/local/etc/doas.conf.sample` to `/usr/local/etc/doas.conf`
- add the same `zfs-periodic` retention policy for `zroot`
- configure `uptimed`
- fully patch the host using `freebsd-update`, `pkg update`, and `pkg upgrade`
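As a sketch, the `zfs-periodic` retention policy lands in `/etc/periodic.conf`. The variable names and retention counts below follow the `sysutils/zfs-periodic` port's conventions but should be copied verbatim from `f2` rather than re-invented:

```txt
hourly_zfs_snapshot_enable="YES"
hourly_zfs_snapshot_pools="zroot"
hourly_zfs_snapshot_keep=24
daily_zfs_snapshot_enable="YES"
daily_zfs_snapshot_pools="zroot"
daily_zfs_snapshot_keep=7
```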
Validation:
- `doas` works for `paul`
- `uprecords` works
- periodic ZFS snapshot settings are present in `/etc/periodic.conf`
## Step 4: Update /etc/hosts Everywhere
Update `/etc/hosts` on all involved systems so `f3` is resolvable consistently.
### Add on FreeBSD hosts
- `f0`
- `f1`
- `f2`
- `f3`
Add:
```txt
192.168.1.133 f3 f3.lan f3.lan.buetow.org
192.168.2.133 f3.wg0 f3.wg0.wan.buetow.org
fd42:beef:cafe:2::133 f3.wg0 f3.wg0.wan.buetow.org
```
### Add on Rocky nodes
- `r0`
- `r1`
- `r2`
Add the same three lines above.
### Add on OpenBSD edge hosts
Any edge node with the mesh host list should get the WireGuard entries:
```txt
192.168.2.133 f3.wg0 f3.wg0.wan.buetow.org
fd42:beef:cafe:2::133 f3.wg0 f3.wg0.wan.buetow.org
```
If those systems also maintain LAN-side entries for FreeBSD hosts, add the LAN line as well.
### Add on admin machines
Any laptop or workstation with the same static map should be updated.
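One way to roll the three lines out without creating duplicates is a small idempotent append, sketched below. `HOSTS` defaults to a local file for a dry run; point it at `/etc/hosts` on the real systems:

```shell
#!/bin/sh
# Append each f3 host entry only if it is not already present.
HOSTS="${HOSTS:-./hosts.local}"

add_entry() {
    grep -qF "$1" "$HOSTS" 2>/dev/null || printf '%s\n' "$1" >> "$HOSTS"
}

add_entry "192.168.1.133 f3 f3.lan f3.lan.buetow.org"
add_entry "192.168.2.133 f3.wg0 f3.wg0.wan.buetow.org"
add_entry "fd42:beef:cafe:2::133 f3.wg0 f3.wg0.wan.buetow.org"
```

Running it a second time is a no-op, which makes it safe to apply across all hosts from a loop.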
Validation:
- `ping f3.lan.buetow.org` works from LAN systems
- `ping f3.wg0.wan.buetow.org` works once WireGuard is configured
- no duplicate or conflicting host entries are introduced
## Step 5: Configure UPS Partner Behavior on f3
Install `apcupsd` on `f3` and configure it like `f2`, consuming UPS state remotely from `f0`.
Required intent:
- `UPSCABLE ether`
- `UPSTYPE net`
- `DEVICE f0.lan.buetow.org:3551`
- `BATTERYLEVEL 10`
- `MINUTES 6`
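The intent above maps to a fragment of `/usr/local/etc/apcupsd/apcupsd.conf` along these lines (a sketch; keep any other directives from `f2`'s copy of the file):

```txt
UPSCABLE ether
UPSTYPE net
DEVICE f0.lan.buetow.org:3551
BATTERYLEVEL 10
MINUTES 6
```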
Enable and start the service:
```sh
doas sysrc apcupsd_enable=YES
doas service apcupsd start
```
Validation:
- `apcaccess` on `f3` returns data
- `apcaccess -h f0.lan.buetow.org` works from `f3`
- service is enabled for boot
## Step 6: Add f3 to the WireGuard Mesh
Install and enable WireGuard on `f3`:
```sh
doas pkg install wireguard-tools
doas sysrc wireguard_interfaces=wg0
doas sysrc wireguard_enable=YES
doas mkdir -p /usr/local/etc/wireguard
doas touch /usr/local/etc/wireguard/wg0.conf
```
Configure `wg0` with:
- IPv4 `192.168.2.133/24`
- IPv6 `fd42:beef:cafe:2::133`
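A minimal `/usr/local/etc/wireguard/wg0.conf` sketch for `f3` follows. The keys are placeholders, and the listen port and IPv6 prefix length are assumptions to be checked against the existing nodes; one `[Peer]` block is needed per mesh member:

```txt
[Interface]
PrivateKey = <f3-private-key>
Address = 192.168.2.133/24, fd42:beef:cafe:2::133/64
ListenPort = 51820

# Repeat one [Peer] block per mesh member, e.g. for f0:
[Peer]
PublicKey = <f0-public-key>
AllowedIPs = 192.168.2.130/32, fd42:beef:cafe:2::130/128
Endpoint = f0.lan.buetow.org:51820
```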
Then update all existing mesh peers to include `f3`:
- `f0`
- `f1`
- `f2`
- `r0`
- `r1`
- `r2`
- OpenBSD edge hosts
Because the topology is full mesh, this is a cluster-wide change, not a host-local change.
Validation:
- `wg show` on `f3` shows all intended peers
- existing nodes show `f3` as a peer
- `ping 192.168.2.130`, `.131`, `.132` from `f3` works
- `ping fd42:beef:cafe:2::130` etc. works if IPv6 WG is enabled end-to-end
## Step 7: Mirror the Real Local Storage Layout of f2 on f3
This step depends on what `f2` actually runs today.
If `f2` has the extra SSD, USB key filesystem, and encrypted datasets, do the same on `f3`:
- install the second SSD
- create `zdata`
- create and mount `/keys`
- generate `f3.lan.buetow.org:bhyve.key`
- generate `f3.lan.buetow.org:zdata.key`
- replicate the shared key distribution model used on the existing storage-capable hosts
- create encrypted datasets and enable automatic key loading
If `f2` does not use this storage path in practice, skip it and keep `f3` aligned with the actual host.
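The key-generation half of this step can be sketched as follows. The key directory defaults to a local path for a dry run (use `/keys` on the host), and the disk device and dataset names in the trailing comments are assumptions to be verified against `f2`:

```shell
#!/bin/sh
# Generate 32-byte raw keys for the bhyve and zdata encrypted datasets.
KEYDIR="${KEYDIR:-./keys}"
mkdir -p "$KEYDIR"

for name in bhyve zdata; do
    key="$KEYDIR/f3.lan.buetow.org:$name.key"
    dd if=/dev/random of="$key" bs=32 count=1 2>/dev/null
    chmod 600 "$key"
done

# Then, on the host (second SSD assumed to appear as ada1):
#   zpool create zdata ada1
#   zfs create -o encryption=on -o keyformat=raw \
#     -o keylocation="file:///keys/f3.lan.buetow.org:zdata.key" zdata/encrypted
```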
Validation:
- `zpool list` shows the expected pools
- `/keys` is mounted at boot if used
- encrypted datasets load keys correctly after reboot if used
## Step 8: Configure vm-bhyve and the Host-Local Rocky VM on f3
Only do this if `f2` has the normal host-local VM pattern.
On `f3`:
- install `vm-bhyve` and `bhyve-firmware`
- enable `vm_enable`
- set `vm_dir=zfs:zroot/bhyve`
- create the `public` switch on `re0`
- configure autostart with `vm_list="rocky"`
- provision the Rocky VM analogous to the existing `r2` pattern
If this expands the cluster with a new Rocky guest, define the next matching addresses:
- LAN IPv4 for VM: `192.168.1.123/24`
- WG IPv4 for VM: `192.168.2.123/24`
- WG IPv6 for VM: `fd42:beef:cafe:2::123`
- Hostname: `r3.lan.buetow.org`
If no new VM should exist on `f3`, explicitly skip this step.
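The host-side setup can be sketched as the following commands, assuming the switch name `public` and the dataset `zroot/bhyve` match what `f2` uses; verify both before running:

```sh
doas pkg install vm-bhyve bhyve-firmware
doas zfs create zroot/bhyve
doas sysrc vm_enable=YES
doas sysrc vm_dir="zfs:zroot/bhyve"
doas vm init
doas vm switch create public
doas vm switch add public re0
doas sysrc vm_list="rocky"
```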
Validation:
- `vm list` shows the expected guest
- guest boots cleanly
- guest has static network config if created
- `/etc/hosts` updates are extended for `r3` if `r3` is created
## Step 9: Install and Integrate Monitoring for f3
Install `node_exporter` on `f3`:
```sh
doas pkg install -y node_exporter
doas sysrc node_exporter_enable=YES
doas sysrc node_exporter_args='--web.listen-address=192.168.2.133:9100'
doas service node_exporter start
```
Update Prometheus additional scrape config to include:
```yaml
- '192.168.2.133:9100' # f3 via WireGuard
```
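For context, assuming the additional scrape file uses the usual `static_configs` layout (the job name here is an assumption; match the existing file), the entry sits alongside the other node targets:

```yaml
- job_name: node
  static_configs:
    - targets:
        - '192.168.2.133:9100' # f3 via WireGuard
```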
If you use the ZFS textfile collector script on the FreeBSD hosts, deploy the same script and cron entry to `f3` if and only if `f3` has the same relevant ZFS layout.
Validation:
- `curl -s http://192.168.2.133:9100/metrics | head -3`
- Prometheus target becomes healthy
- Grafana Node Exporter dashboards show `f3`
## Step 10: Mirror Required Local FreeBSD Service Users on f3
If `f2` carries local UID/GID mappings required for shared storage ownership, reproduce them on `f3`.
Known example from the blog:
```sh
doas pw groupadd postgres -g 999
doas pw useradd postgres -u 999 -g postgres -d /var/db/postgres -s /usr/sbin/nologin
```
Only add the accounts that actually exist on `f2` today.
Validation:
- `id postgres` matches expected UID/GID if created
- user list on `f3` matches `f2` for the intended service accounts
## Step 11: End-to-End Validation
Perform a final verification pass:
- `f3` is reachable on `192.168.1.133`
- hostname and `/etc/hosts` are correct everywhere
- `apcupsd` on `f3` reads from `f0`
- WireGuard connectivity is established across the mesh
- monitoring is scraping `f3`
- optional local storage layout matches `f2`
- optional VM layout matches `f2`
- no accidental `zrepl`, CARP, NFS HA, or server-side `stunnel` config was added
Document any intentional deviations from `f2`.
## Task Breakdown
The implementation should be tracked as one task per major step:
1. Install FreeBSD and obtain initial access on `f3`
2. Convert `f3` from DHCP to static LAN networking
3. Apply the common FreeBSD baseline on `f3`
4. Update `/etc/hosts` on all involved systems
5. Configure `apcupsd` on `f3`
6. Add `f3` to the WireGuard mesh
7. Mirror the real local storage layout of `f2` on `f3`
8. Configure `vm-bhyve` and optional host-local Rocky VM on `f3`
9. Install and integrate monitoring for `f3`
10. Mirror required local service users on `f3`
11. Run final end-to-end validation
All implementation tasks should reference this file directly:
- `/home/paul/git/foo.zone-content/gemtext/gemfeed/f3-sync-plan.md`