# f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments

> Published at 2025-10-02T11:27:19+03:00, last updated Tue 30 Dec 10:11:58 EET 2025

This is the seventh blog post about the f3s series for my self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.

[2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage](./2024-11-17-f3s-kubernetes-with-freebsd-part-1.md)  
[2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation](./2024-12-03-f3s-kubernetes-with-freebsd-part-2.md)  
[2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts](./2025-02-01-f3s-kubernetes-with-freebsd-part-3.md)  
[2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs](./2025-04-05-f3s-kubernetes-with-freebsd-part-4.md)  
[2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network](./2025-05-11-f3s-kubernetes-with-freebsd-part-5.md)  
[2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage](./2025-07-14-f3s-kubernetes-with-freebsd-part-6.md)  
[2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments (You are currently reading this)](./2025-10-02-f3s-kubernetes-with-freebsd-part-7.md)  
[2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability](./2025-12-07-f3s-kubernetes-with-freebsd-part-8.md)  

[![f3s logo](./f3s-kubernetes-with-freebsd-part-1/f3slogo.png "f3s logo")](./f3s-kubernetes-with-freebsd-part-1/f3slogo.png)  

## Table of Contents

* [⇢ f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments](#f3s-kubernetes-with-freebsd---part-7-k3s-and-first-pod-deployments)
* [⇢ ⇢ Introduction](#introduction)
* [⇢ ⇢ Important Note: GitOps Migration](#important-note-gitops-migration)
* [⇢ ⇢ Updating](#updating)
* [⇢ ⇢ Installing k3s](#installing-k3s)
* [⇢ ⇢ ⇢ Generating `K3S_TOKEN` and starting the first k3s node](#generating-k3stoken-and-starting-the-first-k3s-node)
* [⇢ ⇢ ⇢ Adding the remaining nodes to the cluster](#adding-the-remaining-nodes-to-the-cluster)
* [⇢ ⇢ Test deployments](#test-deployments)
* [⇢ ⇢ ⇢ Test deployment to Kubernetes](#test-deployment-to-kubernetes)
* [⇢ ⇢ ⇢ Test deployment with persistent volume claim](#test-deployment-with-persistent-volume-claim)
* [⇢ ⇢ ⇢ Scaling Traefik for faster failover](#scaling-traefik-for-faster-failover)
* [⇢ ⇢ Make it accessible from the public internet](#make-it-accessible-from-the-public-internet)
* [⇢ ⇢ ⇢ OpenBSD relayd configuration](#openbsd-relayd-configuration)
* [⇢ ⇢ ⇢ Automatic failover when f3s cluster is down](#automatic-failover-when-f3s-cluster-is-down)
* [⇢ ⇢ ⇢ OpenBSD httpd fallback configuration](#openbsd-httpd-fallback-configuration)
* [⇢ ⇢ Exposing services via LAN ingress](#exposing-services-via-lan-ingress)
* [⇢ ⇢ Deploying the private Docker image registry](#deploying-the-private-docker-image-registry)
* [⇢ ⇢ ⇢ Prepare the NFS-backed storage](#prepare-the-nfs-backed-storage)
* [⇢ ⇢ ⇢ Install (or upgrade) the chart](#install-or-upgrade-the-chart)
* [⇢ ⇢ ⇢ Allow nodes and workstations to trust the registry](#allow-nodes-and-workstations-to-trust-the-registry)
* [⇢ ⇢ ⇢ Pushing and pulling images](#pushing-and-pulling-images)
* [⇢ ⇢ Example: Anki Sync Server from the private registry](#example-anki-sync-server-from-the-private-registry)
* [⇢ ⇢ ⇢ Build and push the image](#build-and-push-the-image)
* [⇢ ⇢ ⇢ Create the Anki secret and storage on the cluster](#create-the-anki-secret-and-storage-on-the-cluster)
* [⇢ ⇢ ⇢ Deploy the chart](#deploy-the-chart)
* [⇢ ⇢ NFSv4 UID mapping for Postgres-backed (and other) apps](#nfsv4-uid-mapping-for-postgres-backed-and-other-apps)
* [⇢ ⇢ ⇢ Helm charts currently in service](#helm-charts-currently-in-service)

## Introduction

In this blog post, I am finally going to install k3s (the Kubernetes distribution I use) on the whole setup and deploy the first workloads to it (Helm charts and a private registry).

[https://k3s.io](https://k3s.io)  

## Important Note: GitOps Migration

**Note:** After publishing this blog post, the f3s cluster was migrated from imperative Helm deployments to declarative GitOps using ArgoCD. The Kubernetes manifests and Helm charts in the repository have been reorganized for ArgoCD-based continuous deployment.

**To view the exact manifests and charts as they existed when this blog post was written** (before the ArgoCD migration), check out the pre-ArgoCD revision:

```sh
$ git clone https://codeberg.org/snonux/conf.git
$ cd conf
$ git checkout 15a86f3  # Last commit before ArgoCD migration
$ cd f3s/
```

**Current master branch** contains the ArgoCD-managed versions with:
- Application manifests organized under `argocd-apps/{monitoring,services,infra,test}/`
- Additional resources under `*/manifests/` directories (e.g., `prometheus/manifests/`)
- Justfiles updated to trigger ArgoCD syncs instead of direct Helm commands

The deployment concepts and architecture remain the same—only the deployment method changed from imperative (`helm install/upgrade`) to declarative (GitOps with ArgoCD).

## Updating

Before proceeding, I brought all the systems involved up to date. On all three Rocky Linux 9 boxes `r0`, `r1`, and `r2`:

```sh
dnf update -y
reboot
```

On the FreeBSD hosts, I upgraded from FreeBSD 14.2 to 14.3-RELEASE, running this on all three hosts `f0`, `f1` and `f2`:

```sh
paul@f0:~ % doas freebsd-update fetch
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas reboot
.
.
.
paul@f0:~ % doas freebsd-update -r 14.3-RELEASE upgrade
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas reboot
.
.
.
paul@f0:~ % doas freebsd-update install
paul@f0:~ % doas pkg update
paul@f0:~ % doas pkg upgrade
paul@f0:~ % doas reboot
.
.
.
paul@f0:~ % uname -a
FreeBSD f0.lan.buetow.org 14.3-RELEASE FreeBSD 14.3-RELEASE
        releng/14.3-n271432-8c9ce319fef7 GENERIC amd64
```

## Installing k3s

### Generating `K3S_TOKEN` and starting the first k3s node

I generated the k3s token on my Fedora laptop with `pwgen -n 32` and selected one of the results. Then, on all three `r` hosts, I ran the following (replace SECRET_TOKEN with the actual secret):

```sh
[root@r0 ~]# echo -n SECRET_TOKEN > ~/.k3s_token
```
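
To avoid pasting the secret on each node by hand, the token can also be pushed from the laptop in one go. This is only a sketch, assuming SSH root access to the three nodes; adapt the hostnames to your setup:

```sh
# Generate one 32-character token and distribute it to all three nodes
TOKEN=$(pwgen -n 32 1)
for node in r0 r1 r2; do
    ssh root@$node.lan.buetow.org "echo -n $TOKEN > ~/.k3s_token && chmod 600 ~/.k3s_token"
done
```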

The following steps are also documented on the k3s website:

[https://docs.k3s.io/datastore/ha-embedded](https://docs.k3s.io/datastore/ha-embedded)  

To bootstrap k3s on the first node, I ran this on `r0`:

```sh
[root@r0 ~]# curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \
        sh -s - server --cluster-init \
        --node-ip=192.168.2.120 \
        --advertise-address=192.168.2.120 \
        --tls-san=r0.wg0.wan.buetow.org
[INFO]  Finding release for channel stable
[INFO]  Using v1.32.6+k3s1 as release
.
.
.
[INFO]  systemd: Starting k3s
```

Note: The `--node-ip` and `--advertise-address` flags are important to ensure that the embedded etcd cluster communicates over the WireGuard interface (192.168.2.x) rather than the LAN interface (192.168.1.x). This ensures that all control plane traffic is encrypted via WireGuard.
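
To double-check that the control plane really talks over WireGuard, I can look at the node addresses and at the etcd peer connections. A quick sanity check might look like this (2380 is the standard etcd peer port):

```sh
# From the laptop: the INTERNAL-IP column should show the 192.168.2.x WireGuard addresses
kubectl get nodes -o wide

# On a node: etcd peer traffic (port 2380) should only involve the WireGuard IPs
ss -tnp | grep 2380
```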

### Adding the remaining nodes to the cluster

Then I ran on the other two nodes `r1` and `r2`:

```sh
[root@r1 ~]# curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \
        sh -s - server --server https://r0.wg0.wan.buetow.org:6443 \
        --node-ip=192.168.2.121 \
        --advertise-address=192.168.2.121 \
        --tls-san=r1.wg0.wan.buetow.org

[root@r2 ~]# curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s_token) \
        sh -s - server --server https://r0.wg0.wan.buetow.org:6443 \
        --node-ip=192.168.2.122 \
        --advertise-address=192.168.2.122 \
        --tls-san=r2.wg0.wan.buetow.org
.
.
.

```

Once done, I had a three-node Kubernetes cluster control plane:

```sh
[root@r0 ~]# kubectl get nodes
NAME                STATUS   ROLES                       AGE     VERSION
r0.lan.buetow.org   Ready    control-plane,etcd,master   4m44s   v1.32.6+k3s1
r1.lan.buetow.org   Ready    control-plane,etcd,master   3m13s   v1.32.6+k3s1
r2.lan.buetow.org   Ready    control-plane,etcd,master   30s     v1.32.6+k3s1

[root@r0 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-5688667fd4-fs2jj                  1/1     Running     0          5m27s
kube-system   helm-install-traefik-crd-f9hgd            0/1     Completed   0          5m27s
kube-system   helm-install-traefik-zqqqk                0/1     Completed   2          5m27s
kube-system   local-path-provisioner-774c6665dc-jqlnc   1/1     Running     0          5m27s
kube-system   metrics-server-6f4c6675d5-5xpmp           1/1     Running     0          5m27s
kube-system   svclb-traefik-411cec5b-cdp2l              2/2     Running     0          78s
kube-system   svclb-traefik-411cec5b-f625r              2/2     Running     0          4m58s
kube-system   svclb-traefik-411cec5b-twrd7              2/2     Running     0          4m2s
kube-system   traefik-c98fdf6fb-lt6fx                   1/1     Running     0          4m58s
```

In order to connect with `kubectl` from my Fedora laptop, I had to copy `/etc/rancher/k3s/k3s.yaml` from `r0` to `~/.kube/config` and then replace `127.0.0.1` in the `server` field with `r0.lan.buetow.org`. After that, `kubectl` on the laptop can manage the cluster. Note that this step has to be repeated when I want to connect through another node of the cluster (e.g. when `r0` is down).
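
A minimal sketch of that copy step from the laptop, assuming SSH root access to `r0`:

```sh
mkdir -p ~/.kube
ssh root@r0.lan.buetow.org "cat /etc/rancher/k3s/k3s.yaml" | \
    sed 's/127.0.0.1/r0.lan.buetow.org/' > ~/.kube/config
chmod 600 ~/.kube/config
kubectl get nodes
```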

## Test deployments

### Test deployment to Kubernetes

Let's create a test namespace:

```sh
> ~ kubectl create namespace test
namespace/test created

> ~ kubectl get namespaces
NAME              STATUS   AGE
default           Active   6h11m
kube-node-lease   Active   6h11m
kube-public       Active   6h11m
kube-system       Active   6h11m
test              Active   5s

> ~ kubectl config set-context --current --namespace=test
Context "default" modified.
```

And let's also create an Apache test pod:

```sh
> ~ cat <<END > apache-deployment.yaml
# Apache HTTP Server Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd:latest
        ports:
        # Container port where Apache listens
        - containerPort: 80
END

> ~ kubectl apply -f apache-deployment.yaml
deployment.apps/apache-deployment created

> ~ kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/apache-deployment-5fd955856f-4pjmf   1/1     Running   0          7s

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apache-deployment   1/1     1            1           7s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/apache-deployment-5fd955856f   1         1         1       7s
```

Let's also create a service: 

```sh
> ~ cat <<END > apache-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: apache
  name: apache-service
spec:
  ports:
    - name: web
      port: 80
      protocol: TCP
      # Expose port 80 on the service
      targetPort: 80
  selector:
  # Link this service to pods with the label app=apache
    app: apache
END

> ~ kubectl apply -f apache-service.yaml
service/apache-service created

> ~ kubectl get service
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
apache-service   ClusterIP   10.43.249.165   <none>        80/TCP    4s
```

Now let's create an ingress:

> Note: I've modified the hostnames in this example after publishing this blog post, so that bots can't scrape the real ones.

```sh
> ~ cat <<END > apache-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apache-ingress
  namespace: test
  annotations:
    spec.ingressClassName: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: 80
    - host: standby.f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: 80
    - host: www.f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: 80
END

> ~ kubectl apply -f apache-ingress.yaml
ingress.networking.k8s.io/apache-ingress created

> ~ kubectl describe ingress
Name:             apache-ingress
Labels:           <none>
Namespace:        test
Address:          192.168.2.120,192.168.2.121,192.168.2.122
Ingress Class:    traefik
Default backend:  <default>
Rules:
  Host                    Path  Backends
  ----                    ----  --------
  f3s.foo.zone
                          /   apache-service:80 (10.42.1.11:80)
  standby.f3s.foo.zone
                          /   apache-service:80 (10.42.1.11:80)
  www.f3s.foo.zone
                          /   apache-service:80 (10.42.1.11:80)
Annotations:              spec.ingressClassName: traefik
                          traefik.ingress.kubernetes.io/router.entrypoints: web
Events:                   <none>
```

Notes: 

* In the ingress, I use plain HTTP (web) for the Traefik rule, as all the "production" traffic will be routed through a WireGuard tunnel anyway, as I will show later.

So I tested the Apache web server through the ingress rule:

```sh
> ~ curl -H "Host: www.f3s.foo.zone" http://r0.lan.buetow.org:80
<html><body><h1>It works!</h1></body></html>
```

### Test deployment with persistent volume claim

Next, I modified the Apache example to serve the `htdocs` directory from the NFS share I created in the previous blog post. I used the following manifests. Most of them are the same as before, except for the persistent volume claim and the volume mount in the Apache deployment.

```sh
> ~ cat <<END > apache-deployment.yaml
# Apache HTTP Server Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd:latest
        ports:
        # Container port where Apache listens
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
        volumeMounts:
        - name: apache-htdocs
          mountPath: /usr/local/apache2/htdocs/
      volumes:
      - name: apache-htdocs
        persistentVolumeClaim:
          claimName: example-apache-pvc
END

> ~ cat <<END > apache-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apache-ingress
  namespace: test
  annotations:
    spec.ingressClassName: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: 80
    - host: standby.f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: 80
    - host: www.f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache-service
                port:
                  number: 80
END

> ~ cat <<END > apache-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-apache-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/nfs/k3svolumes/example-apache-volume-claim
    type: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-apache-pvc
  namespace: test
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
END

> ~ cat <<END > apache-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: apache
  name: apache-service
  namespace: test
spec:
  ports:
    - name: web
      port: 80
      protocol: TCP
      # Expose port 80 on the service
      targetPort: 80
  selector:
  # Link this service to pods with the label app=apache
    app: apache
END
```

I applied the manifests:

```sh
> ~ kubectl apply -f apache-persistent-volume.yaml
> ~ kubectl apply -f apache-service.yaml
> ~ kubectl apply -f apache-deployment.yaml
> ~ kubectl apply -f apache-ingress.yaml
```

Looking at the deployment, I could see that it failed because the directory didn't exist yet on the NFS share. (Note that I also increased the replica count to 2, so that if one node goes down there is already a replica running on another node for faster failover.)

```sh
> ~ kubectl get pods
NAME                                 READY   STATUS              RESTARTS   AGE
apache-deployment-5b96bd6b6b-fv2jx   0/1     ContainerCreating   0          9m15s
apache-deployment-5b96bd6b6b-ax2ji   0/1     ContainerCreating   0          9m15s

> ~ kubectl describe pod apache-deployment-5b96bd6b6b-fv2jx | tail -n 5
Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    9m34s                 default-scheduler  Successfully
    assigned test/apache-deployment-5b96bd6b6b-fv2jx to r2.lan.buetow.org
  Warning  FailedMount  80s (x12 over 9m34s)  kubelet            MountVolume.SetUp
    failed for volume "example-apache-pv" : hostPath type check failed:
    /data/nfs/k3svolumes/example-apache is not a directory
```

That's intentional—I needed to create the directory on the NFS share first, so I did that (e.g. on `r0`):

```sh
[root@r0 ~]# mkdir /data/nfs/k3svolumes/example-apache-volume-claim/

[root@r0 ~]# cat <<END > /data/nfs/k3svolumes/example-apache-volume-claim/index.html
<!DOCTYPE html>
<html>
<head>
  <title>Hello, it works</title>
</head>
<body>
  <h1>Hello, it works!</h1>
  <p>This site is served via a PVC!</p>
</body>
</html>
END
```

The `index.html` file gives us some actual content to serve. After deleting the pod, the deployment recreates it and the volume now mounts correctly:

```sh
> ~ kubectl delete pod apache-deployment-5b96bd6b6b-fv2jx

> ~ curl -H "Host: www.f3s.foo.zone" http://r0.lan.buetow.org:80
<!DOCTYPE html>
<html>
<head>
  <title>Hello, it works</title>
</head>
<body>
  <h1>Hello, it works!</h1>
  <p>This site is served via a PVC!</p>
</body>
</html>
```

### Scaling Traefik for faster failover

Traefik (used for ingress on k3s) ships with a single replica by default, but for faster failover I bumped it to two replicas, so that two different nodes each run a Traefik pod. That way, if a node disappears, the ingress stays up while Kubernetes schedules a replacement. Here's the command I used:

```sh
> ~ kubectl -n kube-system scale deployment traefik --replicas=2
```

And the result:

```sh
> ~ kubectl -n kube-system get pods -l app.kubernetes.io/name=traefik
kube-system   traefik-c98fdf6fb-97kqk   1/1   Running   19 (53d ago)   64d
kube-system   traefik-c98fdf6fb-9npg2   1/1   Running   11 (53d ago)   61d
```

## Make it accessible from the public internet

Next, I made this accessible from the public internet via the `www.f3s.foo.zone` hostnames. As a reminder, here is the relevant part of the section titled "OpenBSD/relayd to the rescue for external connectivity" from part 1 of this series:

[f3s: Kubernetes with FreeBSD - Part 1: Setting the stage](./2024-11-17-f3s-kubernetes-with-freebsd-part-1.md)  

> All apps should be reachable through the internet (e.g., from my phone or computer when travelling). For external connectivity and TLS management, I've got two OpenBSD VMs (one hosted by OpenBSD Amsterdam and another hosted by Hetzner) handling public-facing services like DNS, relaying traffic, and automating Let's Encrypt certificates.

> All of this (every Linux VM to every OpenBSD box) will be connected via WireGuard tunnels, keeping everything private and secure. There will be 6 WireGuard tunnels (3 k3s nodes times two OpenBSD VMs).

> So, when I want to access a service running in k3s, I will hit an external DNS endpoint (with the authoritative DNS servers being the OpenBSD boxes). The DNS will resolve to the master OpenBSD VM (see my KISS highly-available with OpenBSD blog post), and from there, the relayd process (with a Let's Encrypt certificate—see my Let's Encrypt with OpenBSD and Rex blog post) will accept the TCP connection and forward it through the WireGuard tunnel to a reachable node port of one of the k3s nodes, thus serving the traffic.

```sh
> ~ curl https://f3s.foo.zone
<html><body><h1>It works!</h1></body></html>

> ~ curl https://www.f3s.foo.zone
<html><body><h1>It works!</h1></body></html>

> ~ curl https://standby.f3s.foo.zone
<html><body><h1>It works!</h1></body></html>
```

This is how it works in `relayd.conf` on OpenBSD:

### OpenBSD relayd configuration

The OpenBSD edge relays keep the Kubernetes-facing addresses for the f3s ingress endpoints in a shared backend table, so TLS traffic for every `f3s` hostname lands on the same pool of k3s nodes. The table points to the WireGuard IP addresses of those nodes - remember, they run locally in my LAN, whereas the OpenBSD edge relays operate on the public internet:

```
table <f3s> {
  192.168.2.120
  192.168.2.121
  192.168.2.122
}
```

Inside the `http protocol "https"` block each public hostname gets its Let's Encrypt certificate. The protocol configures TLS keypairs for all f3s services and other public endpoints. For f3s hosts specifically, there are no explicit `forward to` rules in the protocol—they use the relay-level failover mechanism described later. Non-f3s hosts get explicit localhost routing to prevent them from trying the f3s backends:

```
http protocol "https" {
    # TLS certificates for all f3s services
    tls keypair f3s.foo.zone
    tls keypair www.f3s.foo.zone
    tls keypair standby.f3s.foo.zone
    tls keypair anki.f3s.foo.zone
    tls keypair www.anki.f3s.foo.zone
    tls keypair standby.anki.f3s.foo.zone
    tls keypair bag.f3s.foo.zone
    tls keypair www.bag.f3s.foo.zone
    tls keypair standby.bag.f3s.foo.zone
    tls keypair flux.f3s.foo.zone
    tls keypair www.flux.f3s.foo.zone
    tls keypair standby.flux.f3s.foo.zone
    tls keypair audiobookshelf.f3s.foo.zone
    tls keypair www.audiobookshelf.f3s.foo.zone
    tls keypair standby.audiobookshelf.f3s.foo.zone
    tls keypair gpodder.f3s.foo.zone
    tls keypair www.gpodder.f3s.foo.zone
    tls keypair standby.gpodder.f3s.foo.zone
    tls keypair radicale.f3s.foo.zone
    tls keypair www.radicale.f3s.foo.zone
    tls keypair standby.radicale.f3s.foo.zone
    tls keypair vault.f3s.foo.zone
    tls keypair www.vault.f3s.foo.zone
    tls keypair standby.vault.f3s.foo.zone
    tls keypair syncthing.f3s.foo.zone
    tls keypair www.syncthing.f3s.foo.zone
    tls keypair standby.syncthing.f3s.foo.zone
    tls keypair uprecords.f3s.foo.zone
    tls keypair www.uprecords.f3s.foo.zone
    tls keypair standby.uprecords.f3s.foo.zone

    # Explicitly route non-f3s hosts to localhost
    match request header "Host" value "foo.zone" forward to <localhost>
    match request header "Host" value "www.foo.zone" forward to <localhost>
    match request header "Host" value "dtail.dev" forward to <localhost>
    # ... other non-f3s hosts ...

    # NOTE: f3s hosts have NO match rules here!
    # They use relay-level failover (f3s -> localhost backup)
    # See the relay configuration below for automatic failover details
}
```

Both IPv4 and IPv6 listeners reuse the same protocol definition, making the relay transparent for dual-stack clients while still health checking every k3s backend before forwarding traffic over WireGuard:

```
relay "https4" {
    listen on 46.23.94.99 port 443 tls
    protocol "https"
    # Primary: f3s cluster (with health checks) - Falls back to localhost when all hosts down
    forward to <f3s> port 80 check tcp
    forward to <localhost> port 8080
}

relay "https6" {
    listen on 2a03:6000:6f67:624::99 port 443 tls
    protocol "https"
    # Primary: f3s cluster (with health checks) - Falls back to localhost when all hosts down
    forward to <f3s> port 80 check tcp
    forward to <localhost> port 8080
}
```

In practice, that means relayd terminates TLS with the correct certificate, keeps the three WireGuard-connected backends in rotation, and ships each request to whichever bhyve VM answers first.
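
After changing the configuration, I can validate and reload it on the OpenBSD relay and check that the health checks see the k3s backends. Roughly, using the standard relayd tooling (nothing f3s-specific):

```sh
# Validate the configuration without applying it
doas relayd -n

# Apply it and check that the <f3s> backends are seen as "up"
doas relayctl reload
doas relayctl show hosts
```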

### Automatic failover when f3s cluster is down

> Update: This section was added at Tue 30 Dec 10:11:44 EET 2025

One important aspect of this setup is graceful degradation: when all three f3s nodes are unreachable (e.g., during maintenance or a power outage in my LAN), users should see a friendly status page instead of an error message.

OpenBSD's relayd supports automatic failover through its health check mechanism. According to the relayd.conf manual:

> This directive can be specified multiple times - subsequent entries will be used as the backup table if all hosts in the previous table are down.

The key is the order of `forward to` statements in the relay configuration. By placing the f3s table first with `check tcp` health checks, followed by localhost as a backup, relayd automatically routes traffic based on backend availability:

When f3s cluster is UP:

* Health checks on port 80 succeed for f3s nodes
* All f3s traffic routes to the Kubernetes cluster
* Localhost backup remains idle

When f3s cluster is DOWN:

* All health checks fail (nodes unreachable)
* The `<f3s>` table becomes unavailable
* Traffic automatically falls back to `<localhost>` on port 8080
* OpenBSD's httpd serves a static fallback page

```
# NEW configuration - supports automatic failover
http protocol "https" {
    # Explicitly route non-f3s hosts to localhost
    match request header "Host" value "foo.zone" forward to <localhost>
    match request header "Host" value "dtail.dev" forward to <localhost>
    # ... other non-f3s hosts ...

    # f3s hosts have NO protocol rules - they use relay-level failover
    # (no match rules for f3s.foo.zone, anki.f3s.foo.zone, etc.)
}

relay "https4" {
    # f3s FIRST (with health checks), localhost as BACKUP
    forward to <f3s> port 80 check tcp
    forward to <localhost> port 8080
}
```

This way, f3s traffic uses the relay's default behavior: try the first table, fall back to the second when health checks fail.
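
The failover can also be exercised without actually shutting down the cluster, by administratively marking the f3s backends as down in relayd and watching requests land on the fallback page. A rough sketch (the host arguments are the table entries shown above):

```sh
# Mark all <f3s> backends as down; relayd should then use the <localhost> backup table
doas relayctl host disable 192.168.2.120
doas relayctl host disable 192.168.2.121
doas relayctl host disable 192.168.2.122

# Requests now return the static fallback page served by httpd on port 8080
curl -s https://f3s.foo.zone | grep -i 'turned off'

# Bring the backends back into rotation
doas relayctl host enable 192.168.2.120
doas relayctl host enable 192.168.2.121
doas relayctl host enable 192.168.2.122
```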

### OpenBSD httpd fallback configuration

The localhost httpd service on port 8080 serves the fallback content from `/var/www/htdocs/f3s_fallback/`. This directory contains a simple HTML page explaining the situation.

The key configuration detail is using `request rewrite` to ensure the fallback page is served for ALL paths, not just the root. Without this, accessing paths like `/login?redirect=/files/` would return 404 instead of the fallback page:

```
# OpenBSD httpd.conf
# Fallback for f3s hosts - serve fallback page for ALL paths
server "f3s.foo.zone" {
  listen on * port 8080
  log style forwarded
  location * {
    # Rewrite all requests to /index.html to show fallback page regardless of path
    request rewrite "/index.html"
    root "/htdocs/f3s_fallback"
  }
}

server "anki.f3s.foo.zone" {
  listen on * port 8080
  log style forwarded
  location * {
    request rewrite "/index.html"
    root "/htdocs/f3s_fallback"
  }
}

# ... similar blocks for all f3s hostnames ...
```

The `request rewrite "/index.html"` directive ensures that whether someone accesses `/`, `/login`, `/api/status`, or any other path, they all receive the same fallback page. This prevents confusing 404 errors when users have bookmarked specific URLs or follow deep links while the cluster is down.
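
A quick way to verify this behaviour directly on the OpenBSD box is to ask httpd on port 8080 for a few deep paths with the right `Host` header; each of them should come back with a 200 and the same fallback page:

```sh
for path in / /login '/api/status?q=1' /files/deep/link; do
    curl -s -o /dev/null -w "%{http_code} $path\n" \
        -H "Host: anki.f3s.foo.zone" "http://127.0.0.1:8080$path"
done
```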

The fallback page itself is straightforward:

```html
<!DOCTYPE html>
<html>
<head>
    <title>Server turned off</title>
    <style>
        body {
            font-family: sans-serif;
            text-align: center;
            padding-top: 50px;
        }
        .container {
            max-width: 600px;
            margin: 0 auto;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>Server turned off</h1>
        <p>The servers are all currently turned off.</p>
        <p>Please try again later.</p>
        <p>Or email <a href="mailto:paul@nospam.buetow.org">paul@nospam.buetow.org</a>
           - so I can turn them back on for you!</p>
    </div>
</body>
</html>
```

This approach provides several benefits:

* Automatic detection: Health checks run continuously; no manual intervention needed
* Instant fallback: When all f3s nodes go down, the next request automatically routes to localhost
* Transparent recovery: When f3s comes back online, health checks pass and traffic resumes automatically
* User experience: Visitors see a helpful message instead of connection errors
* No DNS changes: The same hostnames work whether f3s is up or down

This fallback mechanism has proven invaluable during maintenance windows and unexpected outages, ensuring that users always get a response even when the home lab is offline.

## Exposing services via LAN ingress

In addition to external access through the OpenBSD relays, services can also be exposed on the local network using LAN-specific ingresses. This is useful for accessing services from within the home network without going through the internet, reducing latency and providing an alternative path if the external relays are unavailable.

The LAN ingress architecture leverages the existing FreeBSD CARP (Common Address Redundancy Protocol) failover infrastructure that's already in place for NFS-over-TLS (see Part 5). Instead of deploying MetalLB or another LoadBalancer implementation, we reuse the CARP virtual IP (`192.168.1.138`) by adding HTTP/HTTPS forwarding alongside the existing stunnel service on port 2323.

*Architecture overview*:

The LAN access path differs from external access:

**External access (*.f3s.foo.zone):**
```
Internet → OpenBSD relayd (TLS termination, Let's Encrypt)
        → WireGuard tunnel
        → k3s Traefik :80 (HTTP)
        → Service
```

**LAN access (*.f3s.lan.foo.zone):**
```
LAN → FreeBSD CARP VIP (192.168.1.138)
    → FreeBSD relayd (TCP forwarding)
    → k3s Traefik :443 (TLS termination, cert-manager)
    → Service
```

The key architectural decisions:

* FreeBSD `relayd` performs pure TCP forwarding (Layer 4) for ports 80 and 443, not TLS termination
* Traefik inside k3s handles TLS offloading using certificates from cert-manager
* Self-signed CA for LAN domains (no external dependencies)
* CARP provides automatic failover between f0 and f1
* No code changes to applications—just add a LAN ingress resource

*Installing cert-manager*:

First, install cert-manager to handle certificate lifecycle management for LAN services. The installation is automated with a Justfile:

[codeberg.org/snonux/conf/f3s/cert-manager](https://codeberg.org/snonux/conf/src/branch/master/f3s/cert-manager)  

```sh
$ cd conf/f3s/cert-manager
$ just install
kubectl apply -f cert-manager.yaml
# ... cert-manager CRDs and resources created ...
kubectl apply -f self-signed-issuer.yaml
clusterissuer.cert-manager.io/selfsigned-issuer created
clusterissuer.cert-manager.io/selfsigned-ca-issuer created
kubectl apply -f ca-certificate.yaml
certificate.cert-manager.io/selfsigned-ca created
kubectl apply -f wildcard-certificate.yaml
certificate.cert-manager.io/f3s-lan-wildcard created
```

This creates:

* A self-signed ClusterIssuer
* A CA certificate (`f3s-lan-ca`) valid for 10 years
* A CA-signed ClusterIssuer
* A wildcard certificate (`*.f3s.lan.foo.zone`) valid for 90 days with automatic renewal
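
For reference, the manifests behind these objects are small. The following is only a sketch of what the self-signed chain looks like with cert-manager's `v1` API; the names mirror the output above, and the exact files live in the repository:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: f3s-lan-ca
  secretName: selfsigned-ca-secret
  duration: 87600h   # ~10 years
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-ca-issuer
spec:
  ca:
    secretName: selfsigned-ca-secret
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: f3s-lan-wildcard
  namespace: cert-manager
spec:
  secretName: f3s-lan-tls
  dnsNames:
    - "*.f3s.lan.foo.zone"
  duration: 2160h    # 90 days, renewed automatically by cert-manager
  issuerRef:
    name: selfsigned-ca-issuer
    kind: ClusterIssuer
```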

Verify the certificates:

```sh
$ kubectl get certificate -n cert-manager
NAME               READY   SECRET                 AGE
f3s-lan-wildcard   True    f3s-lan-tls            5m
selfsigned-ca      True    selfsigned-ca-secret   5m
```

The wildcard certificate (`f3s-lan-tls`) needs to be copied to any namespace that uses it:

```sh
$ kubectl get secret f3s-lan-tls -n cert-manager -o yaml | \
    sed 's/namespace: cert-manager/namespace: services/' | \
    kubectl apply -f -
```

*Configuring FreeBSD relayd for LAN access*:

On both FreeBSD hosts (f0, f1), install and configure `relayd` for TCP forwarding:

```sh
paul@f0:~ % doas pkg install -y relayd
```

Create `/usr/local/etc/relayd.conf`:

```
# k3s nodes backend table
table <k3s_nodes> { 192.168.1.120 192.168.1.121 192.168.1.122 }

# TCP forwarding to Traefik (no TLS termination)
relay "lan_http" {
    listen on 192.168.1.138 port 80
    forward to <k3s_nodes> port 80 check tcp
}

relay "lan_https" {
    listen on 192.168.1.138 port 443
    forward to <k3s_nodes> port 443 check tcp
}
```

Note: The IP addresses `192.168.1.120-122` are the LAN IPs of the k3s nodes (r0, r1, r2), not their WireGuard IPs.

FreeBSD `relayd` requires PF (Packet Filter) to be enabled. Create a minimal `/etc/pf.conf`:

```
# Basic PF rules for relayd
set skip on lo0
pass in quick
pass out quick
```

Enable PF and relayd:

```sh
paul@f0:~ % doas sysrc pf_enable=YES pflog_enable=YES relayd_enable=YES
paul@f0:~ % doas service pf start
paul@f0:~ % doas service pflog start
paul@f0:~ % doas service relayd start
```

Verify `relayd` is listening on the CARP VIP:

```sh
paul@f0:~ % doas sockstat -4 -l | grep 192.168.1.138
_relayd  relayd   2903  11  tcp4   192.168.1.138:80      *:*
_relayd  relayd   2903  12  tcp4   192.168.1.138:443     *:*
```

Repeat the same configuration on f1. Both hosts will run `relayd` listening on the CARP VIP, but only the CARP MASTER will respond to traffic. When failover occurs, the new MASTER takes over seamlessly.
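
Which host currently owns the VIP can be checked with `ifconfig` on f0 and f1; the active host reports `carp: MASTER` for the vhid carrying `192.168.1.138`, the standby one `carp: BACKUP`:

```sh
# Run on f0 and f1 to see the current CARP role
ifconfig | grep carp
```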

*Adding LAN ingress to services*:

To expose a service on the LAN, add a second Ingress resource to its Helm chart. Here's an example:

```yaml
---
# LAN Ingress for f3s.lan.foo.zone
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-lan
  namespace: services
  annotations:
    spec.ingressClassName: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
spec:
  tls:
    - hosts:
        - f3s.lan.foo.zone
      secretName: f3s-lan-tls
  rules:
    - host: f3s.lan.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service
                port:
                  number: 4533
```

Key points:

* Use `web,websecure` entrypoints (both HTTP and HTTPS)
* Reference the `f3s-lan-tls` secret in the `tls` section
* Use `.f3s.lan.foo.zone` subdomain pattern
* Same backend service as the external ingress

Apply the ingress and test:

```sh
$ kubectl apply -f ingress-lan.yaml
ingress.networking.k8s.io/ingress-lan created

$ curl -k https://f3s.lan.foo.zone
HTTP/2 302 
location: /app/
```

*Client-side DNS and CA setup*:

To access LAN services, clients need DNS entries and must trust the self-signed CA.

Add DNS entries to `/etc/hosts` on your laptop:

```sh
$ sudo tee -a /etc/hosts << 'EOF'
# f3s LAN services
192.168.1.138  f3s.lan.foo.zone
EOF
```

The CARP VIP `192.168.1.138` provides high availability—traffic automatically fails over to the backup host if the master goes down.

Export the self-signed CA certificate:

```sh
$ kubectl get secret selfsigned-ca-secret -n cert-manager -o jsonpath='{.data.ca\.crt}' | \
    base64 -d > f3s-lan-ca.crt
```

Install the CA certificate on Linux (Fedora/Rocky):

```sh
$ sudo cp f3s-lan-ca.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
```

After trusting the CA, browsers will accept the LAN certificates without warnings.
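
To confirm the chain from the command line without `-k`, something like this should work once the CA is in the trust store (the hostname must of course resolve via the `/etc/hosts` entry added above):

```sh
# Should succeed without -k once the CA is trusted system-wide
curl -s -o /dev/null -w '%{http_code}\n' https://f3s.lan.foo.zone

# Inspect the certificate that Traefik presents for the LAN hostname
openssl s_client -connect 192.168.1.138:443 \
    -servername f3s.lan.foo.zone </dev/null 2>/dev/null | \
    openssl x509 -noout -subject -issuer -dates
```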

*Scaling to other services*:

The same pattern can be applied to any service. To add LAN access:

1. Copy the `f3s-lan-tls` secret to the service's namespace (if not already there)
2. Add a LAN Ingress resource using the pattern above
3. Configure DNS: `192.168.1.138  service.f3s.lan.foo.zone`
4. Commit and push (ArgoCD will deploy automatically)

No changes needed to:

* relayd configuration (forwards all traffic)
* cert-manager (wildcard cert covers all `*.f3s.lan.foo.zone`)
* CARP configuration (VIP shared by all services)

*TLS offloaders summary*:

The f3s infrastructure now has three distinct TLS offloaders:

* **OpenBSD relayd**: External internet traffic (`*.f3s.foo.zone`) using Let's Encrypt
* **Traefik (k3s)**: LAN HTTPS traffic (`*.f3s.lan.foo.zone`) using cert-manager
* **stunnel**: NFS-over-TLS (port 2323) using custom PKI

Each serves a different purpose with appropriate certificate management for its use case.

## Deploying the private Docker image registry

Not all Docker images I want to deploy are available on public Docker registries, and I also build some of them myself, so there is a need for a private registry.

All manifests for the f3s stack live in my configuration repository:

[codeberg.org/snonux/conf/f3s](https://codeberg.org/snonux/conf/src/branch/master/f3s)  

Within that repo, the `f3s/registry/` directory contains the Helm chart, a `Justfile`, and a detailed `README`. Here's the condensed walkthrough I used to roll out the registry with Helm.

### Prepare the NFS-backed storage

Create the directory that will hold the registry blobs on the NFS share (I ran this on `r0`, but any node that exports `/data/nfs/k3svolumes` works):

```sh
[root@r0 ~]# mkdir -p /data/nfs/k3svolumes/registry
```

### Install (or upgrade) the chart

Clone the repo (or pull the latest changes) on a workstation that has `helm` configured for the cluster, then deploy the chart. The Justfile wraps the commands, but the raw Helm invocation looks like this:

```sh
$ git clone https://codeberg.org/snonux/conf.git
$ cd conf/f3s/registry
$ helm upgrade --install registry ./helm-chart --namespace infra --create-namespace
```

Helm creates the `infra` namespace if it does not exist, provisions a `PersistentVolume`/`PersistentVolumeClaim` pair that points at `/data/nfs/k3svolumes/registry`, and spins up a single registry pod exposed via the `docker-registry-service` NodePort (`30001`). Verify everything is up before continuing:

```sh
$ kubectl get pods --namespace infra
NAME                               READY   STATUS    RESTARTS      AGE
docker-registry-6bc9bb46bb-6grkr   1/1     Running   6 (53d ago)   54d

$ kubectl get svc docker-registry-service -n infra
NAME                      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
docker-registry-service   NodePort   10.43.141.56   <none>        5000:30001/TCP   54d
```

### Allow nodes and workstations to trust the registry

The registry listens on plain HTTP, so both Docker daemons on workstations and the k3s nodes need to treat it as an insecure registry. That's fine for my personal needs, as:

* I don't store any secrets in the images
* I access the registry this way only via my LAN
* I may change this later on...

On my Fedora workstation where I build images:

```sh
$ cat <<"EOF" | sudo tee /etc/docker/daemon.json >/dev/null
{
  "insecure-registries": [
    "r0.lan.buetow.org:30001",
    "r1.lan.buetow.org:30001",
    "r2.lan.buetow.org:30001"
  ]
}
EOF
$ sudo systemctl restart docker
```

On each k3s node, make `registry.lan.buetow.org` resolve locally and point k3s at the NodePort:

```sh
$ for node in r0 r1 r2; do
>   ssh root@$node "echo '127.0.0.1 registry.lan.buetow.org' >> /etc/hosts"
> done

$ for node in r0 r1 r2; do
> ssh root@$node "cat <<'EOF' > /etc/rancher/k3s/registries.yaml
mirrors:
  "registry.lan.buetow.org:30001":
    endpoint:
      - "http://localhost:30001"
EOF
systemctl restart k3s"
> done
```

Thanks to the relayd configuration earlier in the post, the external hostnames (`f3s.foo.zone`, etc.) can already reach NodePort `30001`, so publishing the registry to the outside world later would just be a matter of wiring up the DNS the same way as for the ingress hosts. For security reasons, however, that is not enabled for now.

### Pushing and pulling images

Tag any locally built image with one of the node IPs on port `30001`, then push it. I usually target whichever node is closest to me, but any of the three will do:

```sh
$ docker tag my-app:latest r0.lan.buetow.org:30001/my-app:latest
$ docker push r0.lan.buetow.org:30001/my-app:latest
```
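
The registry speaks the standard Docker Registry HTTP API, so a pushed image can be verified with plain `curl` against the NodePort (using the hypothetical `my-app` image from above):

```sh
# List repositories and the tags of the freshly pushed image
curl -s http://r0.lan.buetow.org:30001/v2/_catalog
curl -s http://r0.lan.buetow.org:30001/v2/my-app/tags/list
```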

Inside the cluster (or from other nodes), reference the image via the service name that Helm created:

```
image: docker-registry-service:5000/my-app:latest
```

You can test the pull path straight away:

```sh
$ kubectl run registry-test \
>   --image=docker-registry-service:5000/my-app:latest \
>   --restart=Never -n test --command -- sleep 300
```

If the pod pulls successfully, the private registry is ready for use by the rest of the workloads. Note that the commands above don't work as-is; they are only mentioned here for illustration purposes.

## Example: Anki Sync Server from the private registry

One of the first workloads I migrated onto the k3s cluster after standing up the registry was my Anki sync server. The configuration repo ships everything in `examples/conf/f3s/anki-sync-server/`: a Docker build context plus a Helm chart that references the freshly built image.

### Build and push the image

The Dockerfile lives under `docker-image/` and takes the Anki release to compile as an `ANKI_VERSION` build argument. The accompanying `Justfile` wraps the steps, but the raw commands look like this:

```sh
$ cd conf/f3s/examples/conf/f3s/anki-sync-server/docker-image
$ docker build -t anki-sync-server:25.07.5b --build-arg ANKI_VERSION=25.07.5 .
$ docker tag anki-sync-server:25.07.5b \
    r0.lan.buetow.org:30001/anki-sync-server:25.07.5b
$ docker push r0.lan.buetow.org:30001/anki-sync-server:25.07.5b
```

Because every k3s node treats `registry.lan.buetow.org:30001` as an insecure mirror (see above), the push succeeds regardless of which node answers. If you prefer the shortcut, `just f3s` in that directory performs the same build/tag/push sequence.

### Create the Anki secret and storage on the cluster

The Helm chart expects the `services` namespace, a pre-created NFS directory, and a Kubernetes secret that holds the credentials the upstream container understands:

```sh
$ ssh root@r0 "mkdir -p /data/nfs/k3svolumes/anki-sync-server/anki_data"
$ kubectl create namespace services
$ kubectl create secret generic anki-sync-server-secret \
    --from-literal=SYNC_USER1='paul:SECRETPASSWORD' \
    -n services
```

If the `services` namespace already exists, you can skip that line or let Kubernetes tell you the namespace is unchanged.

### Deploy the chart

With the prerequisites in place, install (or upgrade) the chart. It pins the container image to the tag we just pushed and mounts the NFS export via a `PersistentVolume/PersistentVolumeClaim` pair:

```sh
$ cd ../helm-chart
$ helm upgrade --install anki-sync-server . -n services
```

Helm provisions everything referenced in the templates:

```
containers:
- name: anki-sync-server
  image: registry.lan.buetow.org:30001/anki-sync-server:25.07.5b
  volumeMounts:
  - name: anki-data
    mountPath: /anki_data
```

Once the release comes up, verify that the pod pulled the freshly pushed image and that the ingress we configured earlier resolves through relayd just like the Apache example.

```sh
$ kubectl get pods -n services
$ kubectl get ingress anki-sync-server-ingress -n services
$ curl https://anki.f3s.foo.zone/health
```

All of this runs solely on first-party images that now live in the private registry, proving the full flow from local build to WireGuard-exposed service.

## NFSv4 UID mapping for Postgres-backed (and other) apps

NFSv4 only sees numeric user and group IDs, so the `postgres` account created inside the container must exist with the same UID/GID on the Kubernetes worker and on the FreeBSD NFS servers. Otherwise the pod starts with UID 999, the export sees it as an unknown anonymous user, and Postgres fails to initialise its data directory.

To verify things line up end-to-end I run `id` in the container and on the hosts:

```sh
> ~ kubectl exec -n services deploy/miniflux-postgres -- id postgres
uid=999(postgres) gid=999(postgres) groups=999(postgres)

[root@r0 ~]# id postgres
uid=999(postgres) gid=999(postgres) groups=999(postgres)

paul@f0:~ % doas id postgres
uid=999(postgres) gid=999(postgres) groups=999(postgres)
```

The Rocky Linux workers get their matching user with plain `useradd`/`groupadd` (repeat on `r0`, `r1`, and `r2`):

```sh
[root@r0 ~]# groupadd --gid 999 postgres
[root@r0 ~]# useradd --uid 999 --gid 999 \
                --home-dir /var/lib/pgsql \
                --shell /sbin/nologin postgres
```

FreeBSD uses `pw`, so on each NFS server (`f0`, `f1`, `f2`) I created the same account and disabled shell access:

```sh
paul@f0:~ % doas pw groupadd postgres -g 999
paul@f0:~ % doas pw useradd postgres -u 999 -g postgres \
                -d /var/db/postgres -s /usr/sbin/nologin
```

Once the UID/GID exist everywhere, the Miniflux chart in `examples/conf/f3s/miniflux` deploys cleanly. The chart provisions both the application and its bundled Postgres database, mounts the exported directory, and builds the DSN at runtime. The important bits live in `helm-chart/templates/persistent-volumes.yaml` and `deployment.yaml`:

```
# Persistent volume lives on the NFS export
hostPath:
  path: /data/nfs/k3svolumes/miniflux/data
  type: Directory
...
containers:
- name: miniflux-postgres
  image: postgres:17
  volumeMounts:
  - name: miniflux-postgres-data
    mountPath: /var/lib/postgresql/data
```

Follow the `README` beside the chart to create the secrets and the target directory:

```sh
$ cd examples/conf/f3s/miniflux/helm-chart
$ ssh root@r0 "mkdir -p /data/nfs/k3svolumes/miniflux/data"
$ kubectl create secret generic miniflux-db-password \
    --from-literal=fluxdb_password='YOUR_PASSWORD' -n services
$ kubectl create secret generic miniflux-admin-password \
    --from-literal=admin_password='YOUR_ADMIN_PASSWORD' -n services
$ helm upgrade --install miniflux . -n services --create-namespace
```

And to verify it's all up:

```
$ kubectl get all --namespace=services | grep mini
pod/miniflux-postgres-556444cb8d-xvv2p   1/1     Running   0             54d
pod/miniflux-server-85d7c64664-stmt9     1/1     Running   0             54d
service/miniflux                   ClusterIP   10.43.47.80     <none>        8080/TCP             54d
service/miniflux-postgres          ClusterIP   10.43.139.50    <none>        5432/TCP             54d
deployment.apps/miniflux-postgres   1/1     1            1           54d
deployment.apps/miniflux-server     1/1     1            1           54d
replicaset.apps/miniflux-postgres-556444cb8d   1         1         1       54d
replicaset.apps/miniflux-server-85d7c64664     1         1         1       54d
```

Alternatively, I simply run the corresponding `Justfile` recipe next to the chart, which wraps the same commands.

### Helm charts currently in service

These are the charts that already live under `examples/conf/f3s` and run on the cluster today (and I'll keep adding more as new services graduate into production):

* `anki-sync-server` — custom-built image served from the private registry, stores decks on `/data/nfs/k3svolumes/anki-sync-server/anki_data`, and authenticates through the `anki-sync-server-secret`.
* `koreader-sync-server` — sync server for KOReader.
* `audiobookshelf` — media streaming stack with three hostPath mounts (`config`, `audiobooks`, `podcasts`) so the library survives node rebuilds.
* `example-apache` — minimal HTTP service I use for smoke-testing ingress and relayd rules.
* `example-apache-volume-claim` — Apache plus PVC variant that exercises NFS-backed storage for walkthroughs like the one earlier in this post.
* `miniflux` — the Postgres-backed feed reader described above, wired for NFSv4 UID mapping and per-release secrets.
* `opodsync` — podsync deployment with its data directory under `/data/nfs/k3svolumes/opodsync/data`.
* `radicale` — CalDAV/CardDAV (and gpodder) backend with separate `collections` and `auth` volumes.
* `registry` — the plain-HTTP Docker registry exposed on NodePort 30001 and mirrored internally as `registry.lan.buetow.org:30001`.
* `syncthing` — two-volume setup for config and shared data, fronted by the `syncthing.f3s.foo.zone` ingress.
* `wallabag` — read-it-later service with persistent `data` and `images` directories on the NFS export.

I hope you enjoyed this walkthrough. Read the next post of this series:

[f3s: Kubernetes with FreeBSD - Part 8: Observability](./2025-12-07-f3s-kubernetes-with-freebsd-part-8.md)  

Other *BSD-related posts:

[2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability](./2025-12-07-f3s-kubernetes-with-freebsd-part-8.md)  
[2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments (You are currently reading this)](./2025-10-02-f3s-kubernetes-with-freebsd-part-7.md)  
[2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage](./2025-07-14-f3s-kubernetes-with-freebsd-part-6.md)  
[2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network](./2025-05-11-f3s-kubernetes-with-freebsd-part-5.md)  
[2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs](./2025-04-05-f3s-kubernetes-with-freebsd-part-4.md)  
[2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts](./2025-02-01-f3s-kubernetes-with-freebsd-part-3.md)  
[2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation](./2024-12-03-f3s-kubernetes-with-freebsd-part-2.md)  
[2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage](./2024-11-17-f3s-kubernetes-with-freebsd-part-1.md)  
[2024-04-01 KISS high-availability with OpenBSD](./2024-04-01-KISS-high-availability-with-OpenBSD.md)  
[2024-01-13 One reason why I love OpenBSD](./2024-01-13-one-reason-why-i-love-openbsd.md)  
[2022-10-30 Installing DTail on OpenBSD](./2022-10-30-installing-dtail-on-openbsd.md)  
[2022-07-30 Let's Encrypt with OpenBSD and Rex](./2022-07-30-lets-encrypt-with-openbsd-and-rex.md)  
[2016-04-09 Jails and ZFS with Puppet on FreeBSD](./2016-04-09-jails-and-zfs-on-freebsd-with-puppet.md)  

E-Mail your comments to `paul@nospam.buetow.org`

[Back to the main site](../)