From da8145d47bba5440674e2ba0647ab2b8ddb35098 Mon Sep 17 00:00:00 2001 From: Paul Buetow Date: Sat, 6 Dec 2025 23:59:29 +0200 Subject: Update content for html --- gemfeed/atom.xml | 1053 ++++++++++++++++++++++++++++++++++++++---------------- 1 file changed, 741 insertions(+), 312 deletions(-) (limited to 'gemfeed/atom.xml') diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml index e7b002b0..de85f03d 100644 --- a/gemfeed/atom.xml +++ b/gemfeed/atom.xml @@ -1,11 +1,732 @@ - 2025-11-18T09:45:04+02:00 + 2025-12-06T23:58:24+02:00 foo.zone feed To be in the .zone! https://foo.zone/ + + f3s: Kubernetes with FreeBSD - Part 8: Observability + + https://foo.zone/gemfeed/2025-12-07-f3s-kubernetes-with-freebsd-part-8.html + 2025-12-06T23:58:24+02:00 + + Paul Buetow aka snonux + paul@dev.buetow.org + + This is the 8th blog post about the f3s series for my self-hosting demands in a home lab. f3s? The 'f' stands for FreeBSD, and the '3s' stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines. + +
+

f3s: Kubernetes with FreeBSD - Part 8: Observability


+
+This is the 8th blog post about the f3s series for my self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.
+
+2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
+2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
+2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
+2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
+2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
+2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
+2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
+2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability (You are currently reading this)
+
+f3s logo
+
+

Table of Contents


+
+
+

Introduction


+
+In this blog post, I set up a complete observability stack for the k3s cluster. Observability is crucial for understanding what's happening inside the cluster, whether it's tracking resource usage, debugging issues, or analysing application behaviour. The stack consists of four main components, all deployed into the monitoring namespace:
+
+
    +
  • Prometheus: time-series database for metrics collection and alerting
  • +
  • Grafana: visualisation and dashboarding frontend
  • +
  • Loki: log aggregation system (like Prometheus, but for logs)
  • +
  • Alloy: telemetry collector that ships logs from all pods to Loki
  • +

+Together, these form the "PLG" stack (Prometheus, Loki, Grafana), which is a popular open-source alternative to commercial observability platforms.
+
+All manifests for the f3s stack live in my configuration repository:
+
+codeberg.org/snonux/conf/f3s codeberg.org/snonux/conf/f3s
+
+

Persistent storage recap


+
+All observability components need persistent storage so that metrics and logs survive pod restarts. As covered in Part 6 of this series, the cluster uses NFS-backed persistent volumes:
+
+f3s: Kubernetes with FreeBSD - Part 6: Storage
+
+The FreeBSD hosts (f0, f1, f2) serve as NFS servers, exporting ZFS datasets that are replicated across hosts using zrepl. The Rocky Linux k3s nodes (r0, r1, r2) mount these exports at /data/nfs/k3svolumes. This directory contains subdirectories for each application that needs persistent storage—including Prometheus, Grafana, and Loki.
+
+For example, the observability stack uses these paths on the NFS share:
+
+
    +
  • /data/nfs/k3svolumes/prometheus/data — Prometheus time-series database
  • +
  • /data/nfs/k3svolumes/grafana/data — Grafana configuration, dashboards, and plugins
  • +
  • /data/nfs/k3svolumes/loki/data — Loki log chunks and index
  • +

+Each path gets a corresponding PersistentVolume and PersistentVolumeClaim in Kubernetes, allowing pods to mount them as regular volumes. Because the underlying storage is ZFS with replication, we get snapshots and redundancy for free.
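+
+For illustration, such a PersistentVolume/PersistentVolumeClaim pair for the Loki path might look roughly like the following sketch. It uses the hostPath pattern that the persistent volume definitions in this setup follow; the storage class name and size are assumptions, and the actual persistent-volumes.yaml manifests in the configuration repository are authoritative:
+
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: loki-data-pv
+spec:
+  capacity:
+    storage: 20Gi
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: manual
+  hostPath:
+    # NFS export mounted on every k3s node
+    path: /data/nfs/k3svolumes/loki/data
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: loki-data-pvc
+  namespace: monitoring
+spec:
+  storageClassName: manual
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 20Gi
+  # bind explicitly to the pre-created volume above
+  volumeName: loki-data-pv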
+
+

The monitoring namespace


+
+First, I created the monitoring namespace where all observability components will live:
+
+ +
$ kubectl create namespace monitoring
+namespace/monitoring created
+
+
+

Installing Prometheus and Grafana


+
+Prometheus and Grafana are deployed together using the kube-prometheus-stack Helm chart from the Prometheus community. This chart bundles Prometheus, Grafana, Alertmanager, and various exporters (Node Exporter, Kube State Metrics) into a single deployment. I'll explain what each component does in detail later when we look at the running pods.
+
+

Prerequisites


+
+Add the Prometheus Helm chart repository:
+
+ +
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+$ helm repo update
+
+
+Create the directories on the NFS server for persistent storage:
+
+ +
[root@r0 ~]# mkdir -p /data/nfs/k3svolumes/prometheus/data
+[root@r0 ~]# mkdir -p /data/nfs/k3svolumes/grafana/data
+
+
+

Deploying with the Justfile


+
+The configuration repository contains a Justfile that automates the deployment. just is a handy command runner—think of it as a simpler, more modern alternative to make. I use it throughout the f3s repository to wrap repetitive Helm and kubectl commands:
+
+just - A handy way to save and run project-specific commands
+codeberg.org/snonux/conf/f3s/prometheus
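+
+For illustration, the relevant recipes in such a Justfile might look roughly like this (a sketch; the actual recipes live in the repository linked above):
+
+install:
+    kubectl apply -f persistent-volumes.yaml
+    helm install prometheus prometheus-community/kube-prometheus-stack \
+        --namespace monitoring -f persistence-values.yaml
+
+upgrade:
+    helm upgrade prometheus prometheus-community/kube-prometheus-stack \
+        --namespace monitoring -f persistence-values.yaml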
+
+To install everything:
+
+ +
$ cd conf/f3s/prometheus
+$ just install
+kubectl apply -f persistent-volumes.yaml
+persistentvolume/prometheus-data-pv created
+persistentvolume/grafana-data-pv created
+persistentvolumeclaim/grafana-data-pvc created
+helm install prometheus prometheus-community/kube-prometheus-stack \
+    --namespace monitoring -f persistence-values.yaml
+NAME: prometheus
+LAST DEPLOYED: ...
+NAMESPACE: monitoring
+STATUS: deployed
+
+
+The persistence-values.yaml configures Prometheus and Grafana to use the NFS-backed persistent volumes I mentioned earlier, ensuring data survives pod restarts. The persistent volume definitions bind to specific paths on the NFS share using hostPath volumes—the same pattern used for other services in Part 7:
+
+f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
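+
+To give a rough idea of what persistence-values.yaml contains, the relevant parts might look like the following sketch. The key names follow the kube-prometheus-stack and Grafana chart values; the storage size and the way the claim is bound to the pre-created PV (volumeName here, but a selector or dedicated storage class works too) are assumptions:
+
+prometheus:
+  prometheusSpec:
+    storageSpec:
+      volumeClaimTemplate:
+        spec:
+          accessModes: ["ReadWriteOnce"]
+          # one possible way to bind to the pre-created PV
+          volumeName: prometheus-data-pv
+          resources:
+            requests:
+              storage: 50Gi
+
+grafana:
+  persistence:
+    enabled: true
+    existingClaim: grafana-data-pvc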
+
+

Exposing Grafana via ingress


+
+The chart also deploys an ingress for Grafana, making it accessible at grafana.f3s.foo.zone. The ingress configuration follows the same pattern as other services in the cluster—Traefik handles the routing internally, while the OpenBSD edge relays terminate TLS and forward traffic through WireGuard.
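+
+In the Helm values, enabling that ingress boils down to a few lines under the grafana key (a sketch; the ingress class name is an assumption based on the Traefik setup described here):
+
+grafana:
+  ingress:
+    enabled: true
+    ingressClassName: traefik
+    hosts:
+      - grafana.f3s.foo.zone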
+
+Once deployed, Grafana is accessible and comes pre-configured with Prometheus as a data source. You can verify the Prometheus service is running:
+
+ +
$ kubectl get svc -n monitoring prometheus-kube-prometheus-prometheus
+NAME                                    TYPE        CLUSTER-IP      PORT(S)
+prometheus-kube-prometheus-prometheus   ClusterIP   10.43.152.163   9090/TCP,8080/TCP
+
+
+Grafana connects to Prometheus using the internal service URL http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090. The default Grafana credentials are admin/prom-operator, which should be changed immediately after first login.
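+
+The same credentials are also stored in a Kubernetes secret and can be read back with kubectl (a sketch; the secret name follows the Helm release name, prometheus-grafana in this setup, so it may differ elsewhere):
+
+$ kubectl get secret -n monitoring prometheus-grafana \
+    -o jsonpath='{.data.admin-password}' | base64 -d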
+
+Grafana dashboard showing Prometheus metrics
+
+Grafana dashboard showing cluster metrics
+
+

Installing Loki and Alloy


+
+While Prometheus handles metrics, Loki handles logs. It's designed to be cost-effective and easy to operate—it doesn't index the contents of logs, only the metadata (labels), making it very efficient for storage.
+
+Alloy is Grafana's telemetry collector (the successor to Promtail). It runs as a DaemonSet on each node, tails container logs, and ships them to Loki.
+
+

Prerequisites


+
+Create the data directory on the NFS server:
+
+ +
[root@r0 ~]# mkdir -p /data/nfs/k3svolumes/loki/data
+
+
+

Deploying Loki and Alloy


+
+The Loki configuration also lives in the repository:
+
+codeberg.org/snonux/conf/f3s/loki
+
+To install:
+
+ +
$ cd conf/f3s/loki
+$ just install
+helm repo add grafana https://grafana.github.io/helm-charts || true
+helm repo update
+kubectl apply -f persistent-volumes.yaml
+persistentvolume/loki-data-pv created
+persistentvolumeclaim/loki-data-pvc created
+helm install loki grafana/loki --namespace monitoring -f values.yaml
+NAME: loki
+LAST DEPLOYED: ...
+NAMESPACE: monitoring
+STATUS: deployed
+...
+helm install alloy grafana/alloy --namespace monitoring -f alloy-values.yaml
+NAME: alloy
+LAST DEPLOYED: ...
+NAMESPACE: monitoring
+STATUS: deployed
+
+
+Loki runs in single-binary mode with a single replica (loki-0), which is appropriate for a home lab cluster. This means there's only one Loki pod running at any time. If the node hosting Loki fails, Kubernetes will automatically reschedule the pod to another worker node—but there will be a brief downtime (typically under a minute) while this happens. For my home lab use case, this is perfectly acceptable.
+
+For full high-availability, you'd deploy Loki in microservices mode with separate read, write, and backend components, backed by object storage like S3 or MinIO instead of local filesystem storage. That's a more complex setup that I might explore in a future blog post—but for now, the single-binary mode with NFS-backed persistence strikes the right balance between simplicity and durability.
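+
+The essence of values.yaml for this mode looks roughly like the following sketch. The key names follow the grafana/loki chart but change between chart versions, and the exact wiring of the persistence to the NFS-backed volume depends on the chart, so the values.yaml in the repository is the authoritative version:
+
+deploymentMode: SingleBinary
+
+loki:
+  auth_enabled: false
+  commonConfig:
+    replication_factor: 1
+  storage:
+    type: filesystem
+
+singleBinary:
+  replicas: 1
+  persistence:
+    enabled: true
+    # backed by the NFS-based loki-data-pv/loki-data-pvc shown earlier
+    size: 20Gi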
+
+

Configuring Alloy


+
+Alloy is configured via alloy-values.yaml to discover all pods in the cluster and forward their logs to Loki:
+
+ +
discovery.kubernetes "pods" {
+  role = "pod"
+}
+
+discovery.relabel "pods" {
+  targets = discovery.kubernetes.pods.targets
+
+  rule {
+    source_labels = ["__meta_kubernetes_namespace"]
+    target_label  = "namespace"
+  }
+
+  rule {
+    source_labels = ["__meta_kubernetes_pod_name"]
+    target_label  = "pod"
+  }
+
+  rule {
+    source_labels = ["__meta_kubernetes_pod_container_name"]
+    target_label  = "container"
+  }
+
+  rule {
+    source_labels = ["__meta_kubernetes_pod_label_app"]
+    target_label  = "app"
+  }
+}
+
+loki.source.kubernetes "pods" {
+  targets    = discovery.relabel.pods.output
+  forward_to = [loki.write.default.receiver]
+}
+
+loki.write "default" {
+  endpoint {
+    url = "http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push"
+  }
+}
+
+
+This configuration automatically labels each log line with the namespace, pod name, container name, and app label, making it easy to filter logs in Grafana.
+
+

Adding Loki as a Grafana data source


+
+Loki doesn't have its own web UI—you query it through Grafana. First, verify the Loki service is running:
+
+ +
$ kubectl get svc -n monitoring loki
+NAME   TYPE        CLUSTER-IP    PORT(S)
+loki   ClusterIP   10.43.64.60   3100/TCP,9095/TCP
+
+
+To add Loki as a data source in Grafana:
+
+
    +
  • Navigate to Configuration → Data Sources
  • +
  • Click "Add data source"
  • +
  • Select "Loki"
  • +
  • Set the URL to: http://loki.monitoring.svc.cluster.local:3100
  • +
  • Click "Save & Test"
  • +

+Once configured, you can explore logs in Grafana's "Explore" view. I'll show some example queries in the "Using the observability stack" section below.
+
+Exploring logs in Grafana with Loki
+
+

The complete monitoring stack


+
+After deploying everything, here's what's running in the monitoring namespace:
+
+ +
$ kubectl get pods -n monitoring
+NAME                                                     READY   STATUS    RESTARTS   AGE
+alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running   0          42d
+alloy-g5fgj                                              2/2     Running   0          29m
+alloy-nfw8w                                              2/2     Running   0          29m
+alloy-tg9vj                                              2/2     Running   0          29m
+loki-0                                                   2/2     Running   0          25m
+prometheus-grafana-868f9dc7cf-lg2vl                      3/3     Running   0          42d
+prometheus-kube-prometheus-operator-8d7bbc48c-p4sf4      1/1     Running   0          42d
+prometheus-kube-state-metrics-7c5fb9d798-hh2fx           1/1     Running   0          42d
+prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running   0          42d
+prometheus-prometheus-node-exporter-2nsg9                1/1     Running   0          42d
+prometheus-prometheus-node-exporter-mqr25                1/1     Running   0          42d
+prometheus-prometheus-node-exporter-wp4ds                1/1     Running   0          42d
+
+
+And the services:
+
+ +
$ kubectl get svc -n monitoring
+NAME                                      TYPE        CLUSTER-IP      PORT(S)
+alertmanager-operated                     ClusterIP   None            9093/TCP,9094/TCP
+alloy                                     ClusterIP   10.43.74.14     12345/TCP
+loki                                      ClusterIP   10.43.64.60     3100/TCP,9095/TCP
+loki-headless                             ClusterIP   None            3100/TCP
+prometheus-grafana                        ClusterIP   10.43.46.82     80/TCP
+prometheus-kube-prometheus-alertmanager   ClusterIP   10.43.208.43    9093/TCP,8080/TCP
+prometheus-kube-prometheus-operator       ClusterIP   10.43.246.121   443/TCP
+prometheus-kube-prometheus-prometheus     ClusterIP   10.43.152.163   9090/TCP,8080/TCP
+prometheus-kube-state-metrics             ClusterIP   10.43.64.26     8080/TCP
+prometheus-prometheus-node-exporter       ClusterIP   10.43.127.242   9100/TCP
+
+
+Let me break down what each pod does:
+
+
    +
  • alertmanager-prometheus-kube-prometheus-alertmanager-0: the Alertmanager instance that receives alerts from Prometheus, deduplicates them, groups related alerts together, and routes notifications to the appropriate receivers (email, Slack, PagerDuty, etc.). It runs as a StatefulSet with persistent storage for silences and notification state.
  • +

+
    +
  • alloy-g5fgj, alloy-nfw8w, alloy-tg9vj: three Alloy pods running as a DaemonSet, one on each k3s node. Each pod tails the container logs from its local node via the Kubernetes API and forwards them to Loki. This ensures log collection continues even if a node becomes isolated from the others.
  • +

+
    +
  • loki-0: the single Loki instance running in single-binary mode. It receives log streams from Alloy, stores them in chunks on the NFS-backed persistent volume, and serves queries from Grafana. The -0 suffix indicates it's a StatefulSet pod.
  • +

+
    +
  • prometheus-grafana-...: the Grafana web interface for visualising metrics and logs. It comes pre-configured with Prometheus as a data source and includes dozens of dashboards for Kubernetes monitoring. Dashboards, users, and settings are persisted to the NFS share.
  • +

+
    +
  • prometheus-kube-prometheus-operator-...: the Prometheus Operator that watches for custom resources (ServiceMonitor, PodMonitor, PrometheusRule) and automatically configures Prometheus to scrape new targets. This allows applications to declare their own monitoring requirements.
  • +

+
    +
  • prometheus-kube-state-metrics-...: generates metrics about the state of Kubernetes objects themselves: how many pods are running, pending, or failed; deployment replica counts; node conditions; PVC status; and more. Essential for cluster-level dashboards.
  • +

+
    +
  • prometheus-prometheus-kube-prometheus-prometheus-0: the Prometheus server that scrapes metrics from all configured targets (pods, services, nodes), stores them in a time-series database, evaluates alerting rules, and serves queries to Grafana.
  • +

+
    +
  • prometheus-prometheus-node-exporter-...: three Node Exporter pods running as a DaemonSet, one on each node. They expose hardware and OS-level metrics: CPU usage, memory, disk I/O, filesystem usage, network statistics, and more. These feed the "Node Exporter" dashboards in Grafana.
  • +

+

Using the observability stack


+
+

Viewing metrics in Grafana


+
+The kube-prometheus-stack comes with many pre-built dashboards. Some useful ones include:
+
+
    +
  • Kubernetes / Compute Resources / Cluster: overview of CPU and memory usage across the cluster
  • +
  • Kubernetes / Compute Resources / Namespace (Pods): resource usage by namespace
  • +
  • Node Exporter / Nodes: detailed host metrics like disk I/O, network, and CPU
  • +

+

Querying logs with LogQL


+
+In Grafana's Explore view, select Loki as the data source and try queries like:
+
+
+# All logs from the services namespace
+{namespace="services"}
+
+# Logs from pods matching a pattern
+{pod=~"miniflux.*"}
+
+# Filter by log content
+{namespace="services"} |= "error"
+
+# Parse JSON logs and filter
+{namespace="services"} | json | level="error"
+
+
+

Creating alerts


+
+Prometheus supports alerting rules that can notify you when something goes wrong. The kube-prometheus-stack includes many default alerts for common issues like high CPU usage, pod crashes, and node problems. These can be customised via PrometheusRule CRDs.
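+
+A custom alert is just another PrometheusRule object that the operator picks up. As a hypothetical example, alerting when a root filesystem runs low on space could look like this (the release: prometheus label matches what the operator in this setup selects; the alert name and threshold are assumptions):
+
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+  name: custom-alerts
+  namespace: monitoring
+  labels:
+    release: prometheus
+spec:
+  groups:
+    - name: custom.rules
+      rules:
+        - alert: FilesystemAlmostFull
+          expr: node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"} < 0.10
+          for: 15m
+          labels:
+            severity: warning
+          annotations:
+            summary: "Less than 10% disk space left on {{ $labels.instance }}"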
+
+

Monitoring external FreeBSD hosts


+
+The observability stack can also monitor servers outside the Kubernetes cluster. The FreeBSD hosts (f0, f1, f2) that serve NFS storage can be added to Prometheus using the Node Exporter.
+
+

Installing Node Exporter on FreeBSD


+
+On each FreeBSD host, install the node_exporter package:
+
+ +
paul@f0:~ % doas pkg install -y node_exporter
+
+
+Enable the service to start at boot:
+
+ +
paul@f0:~ % doas sysrc node_exporter_enable=YES
+node_exporter_enable:  -> YES
+
+
+Configure node_exporter to listen on the WireGuard interface. This ensures metrics are only accessible through the secure tunnel, not the public network. Replace the IP with the host's WireGuard address:
+
+ +
paul@f0:~ % doas sysrc node_exporter_args='--web.listen-address=192.168.2.130:9100'
+node_exporter_args:  -> --web.listen-address=192.168.2.130:9100
+
+
+Start the service:
+
+ +
paul@f0:~ % doas service node_exporter start
+Starting node_exporter.
+
+
+Verify it's running:
+
+ +
paul@f0:~ % curl -s http://192.168.2.130:9100/metrics | head -3
+# HELP go_gc_duration_seconds A summary of the wall-time pause...
+# TYPE go_gc_duration_seconds summary
+go_gc_duration_seconds{quantile="0"} 0
+
+
+Repeat for the other FreeBSD hosts (f1, f2) with their respective WireGuard IPs.
+
+

Adding FreeBSD hosts to Prometheus


+
+Create a file additional-scrape-configs.yaml in the prometheus configuration directory:
+
+
+- job_name: 'node-exporter'
+  static_configs:
+    - targets:
+      - '192.168.2.130:9100'  # f0 via WireGuard
+      - '192.168.2.131:9100'  # f1 via WireGuard
+      - '192.168.2.132:9100'  # f2 via WireGuard
+      labels:
+        os: freebsd
+
+
+The job_name must be node-exporter to match the existing dashboards. The os: freebsd label allows filtering these hosts separately if needed.
+
+Create a Kubernetes secret from this file:
+
+ +
$ kubectl create secret generic additional-scrape-configs \
+    --from-file=additional-scrape-configs.yaml \
+    -n monitoring
+
+
+Update persistence-values.yaml to reference the secret:
+
+
+prometheus:
+  prometheusSpec:
+    additionalScrapeConfigsSecret:
+      enabled: true
+      name: additional-scrape-configs
+      key: additional-scrape-configs.yaml
+
+
+Upgrade the Prometheus deployment:
+
+ +
$ just upgrade
+
+
+After a minute or so, the FreeBSD hosts appear in the Prometheus targets and in the Node Exporter dashboards in Grafana.
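+
+A quick way to confirm the new scrapes are healthy is to query the up metric in Grafana's Explore view or the Prometheus UI; the os label from the scrape config narrows it down to the FreeBSD hosts:
+
+up{job="node-exporter", os="freebsd"}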
+
+FreeBSD hosts in the Node Exporter dashboard
+
+

FreeBSD memory metrics compatibility


+
+The default Node Exporter dashboards are designed for Linux and expect metrics like node_memory_MemAvailable_bytes. FreeBSD uses different metric names (node_memory_size_bytes, node_memory_free_bytes, etc.), so memory panels will show "No data" out of the box.
+
+To fix this, I created a PrometheusRule that generates synthetic Linux-compatible metrics from the FreeBSD equivalents:
+
+
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+  name: freebsd-memory-rules
+  namespace: monitoring
+  labels:
+    release: prometheus
+spec:
+  groups:
+    - name: freebsd-memory
+      rules:
+        - record: node_memory_MemTotal_bytes
+          expr: node_memory_size_bytes{os="freebsd"}
+        - record: node_memory_MemAvailable_bytes
+          expr: node_memory_free_bytes{os="freebsd"} + node_memory_inactive_bytes{os="freebsd"} + node_memory_cache_bytes{os="freebsd"}
+        - record: node_memory_MemFree_bytes
+          expr: node_memory_free_bytes{os="freebsd"}
+        - record: node_memory_Buffers_bytes
+          expr: node_memory_buffer_bytes{os="freebsd"}
+        - record: node_memory_Cached_bytes
+          expr: node_memory_cache_bytes{os="freebsd"}
+
+
+This file is saved as freebsd-recording-rules.yaml and applied as part of the Prometheus installation. The os="freebsd" label (set in the scrape config) ensures these rules only apply to FreeBSD hosts. After applying, the memory panels in the Node Exporter dashboards populate correctly for FreeBSD.
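+
+If you ever need to (re)apply the rules by hand instead of through the automated install, a plain kubectl apply does the job:
+
+$ kubectl apply -f freebsd-recording-rules.yaml
+prometheusrule.monitoring.coreos.com/freebsd-memory-rules created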
+
+freebsd-recording-rules.yaml on Codeberg
+
+

Disk I/O metrics limitation


+
+Unlike memory metrics, disk I/O metrics (node_disk_read_bytes_total, node_disk_written_bytes_total, etc.) are not available on FreeBSD. The Linux diskstats collector that provides these metrics doesn't have a FreeBSD equivalent in the node_exporter.
+
+The disk I/O panels in the Node Exporter dashboards will show "No data" for FreeBSD hosts. FreeBSD does expose ZFS-specific metrics (node_zfs_arcstats_*) for ARC cache performance, and per-dataset I/O stats are available via sysctl kstat.zfs, but mapping these to the Linux-style metrics the dashboards expect is non-trivial. Creating custom ZFS-specific dashboards is left as an exercise for another day.
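+
+In the meantime, the ARC hit ratio can at least be graphed directly from the exported arcstats counters, for example with a query like this (a sketch; the exact metric names depend on the node_exporter version):
+
+rate(node_zfs_arcstats_hits{os="freebsd"}[5m])
+  /
+(rate(node_zfs_arcstats_hits{os="freebsd"}[5m]) + rate(node_zfs_arcstats_misses{os="freebsd"}[5m]))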
+
+

Monitoring external OpenBSD hosts


+
+The same approach works for OpenBSD hosts. I have two OpenBSD edge relay servers (blowfish, fishfinger) that handle TLS termination and forward traffic through WireGuard to the cluster. These can also be monitored with Node Exporter.
+
+

Installing Node Exporter on OpenBSD


+
+On each OpenBSD host, install the node_exporter package:
+
+ +
blowfish:~ $ doas pkg_add node_exporter
+quirks-7.103 signed on 2025-10-13T22:55:16Z
+The following new rcscripts were installed: /etc/rc.d/node_exporter
+See rcctl(8) for details.
+
+
+Enable the service to start at boot:
+
+ +
blowfish:~ $ doas rcctl enable node_exporter
+
+
+Configure node_exporter to listen on the WireGuard interface. This ensures metrics are only accessible through the secure tunnel, not the public network. Replace the IP with the host's WireGuard address:
+
+ +
blowfish:~ $ doas rcctl set node_exporter flags '--web.listen-address=192.168.2.110:9100'
+
+
+Start the service:
+
+ +
blowfish:~ $ doas rcctl start node_exporter
+node_exporter(ok)
+
+
+Verify it's running:
+
+ +
blowfish:~ $ curl -s http://192.168.2.110:9100/metrics | head -3
+# HELP go_gc_duration_seconds A summary of the wall-time pause...
+# TYPE go_gc_duration_seconds summary
+go_gc_duration_seconds{quantile="0"} 0
+
+
+Repeat for the other OpenBSD host (fishfinger) with its respective WireGuard IP (192.168.2.111).
+
+

Adding OpenBSD hosts to Prometheus


+
+Update additional-scrape-configs.yaml to include the OpenBSD targets:
+
+
+- job_name: 'node-exporter'
+  static_configs:
+    - targets:
+      - '192.168.2.130:9100'  # f0 via WireGuard
+      - '192.168.2.131:9100'  # f1 via WireGuard
+      - '192.168.2.132:9100'  # f2 via WireGuard
+      labels:
+        os: freebsd
+    - targets:
+      - '192.168.2.110:9100'  # blowfish via WireGuard
+      - '192.168.2.111:9100'  # fishfinger via WireGuard
+      labels:
+        os: openbsd
+
+
+The os: openbsd label allows filtering these hosts separately from FreeBSD and Linux nodes.
+
+

OpenBSD memory metrics compatibility


+
+OpenBSD uses the same memory metric names as FreeBSD (node_memory_size_bytes, node_memory_free_bytes, etc.), so a similar PrometheusRule is needed to generate Linux-compatible metrics:
+
+
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+  name: openbsd-memory-rules
+  namespace: monitoring
+  labels:
+    release: prometheus
+spec:
+  groups:
+    - name: openbsd-memory
+      rules:
+        - record: node_memory_MemTotal_bytes
+          expr: node_memory_size_bytes{os="openbsd"}
+          labels:
+            os: openbsd
+        - record: node_memory_MemAvailable_bytes
+          expr: node_memory_free_bytes{os="openbsd"} + node_memory_inactive_bytes{os="openbsd"} + node_memory_cache_bytes{os="openbsd"}
+          labels:
+            os: openbsd
+        - record: node_memory_MemFree_bytes
+          expr: node_memory_free_bytes{os="openbsd"}
+          labels:
+            os: openbsd
+        - record: node_memory_Cached_bytes
+          expr: node_memory_cache_bytes{os="openbsd"}
+          labels:
+            os: openbsd
+
+
+This file is saved as openbsd-recording-rules.yaml and applied alongside the FreeBSD rules. Note that OpenBSD doesn't expose a buffer memory metric, so that rule is omitted.
+
+openbsd-recording-rules.yaml on Codeberg
+
+After running just upgrade, the OpenBSD hosts appear in Prometheus targets and the Node Exporter dashboards.
+
+

Summary


+
+With Prometheus, Grafana, Loki, and Alloy deployed, I now have complete visibility into the k3s cluster, the FreeBSD storage servers, and the OpenBSD edge relays:
+
+
    +
  • Metrics: Prometheus collects and stores time-series data from all components
  • +
  • Logs: Loki aggregates logs from all containers, searchable via Grafana
  • +
  • Visualisation: Grafana provides dashboards and exploration tools
  • +
  • Alerting: Alertmanager can notify on conditions defined in Prometheus rules
  • +

+This observability stack runs entirely on the home lab infrastructure, with data persisted to the NFS share. It's lightweight enough for a three-node cluster but provides the same capabilities as production-grade setups.
+
+Other *BSD-related posts:
+
+2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability (You are currently reading this)
+2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
+2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
+2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
+2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
+2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
+2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
+2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
+2024-04-01 KISS high-availability with OpenBSD
+2024-01-13 One reason why I love OpenBSD
+2022-10-30 Installing DTail on OpenBSD
+2022-07-30 Let's Encrypt with OpenBSD and Rex
+2016-04-09 Jails and ZFS with Puppet on FreeBSD
+
+E-Mail your comments to paul@nospam.buetow.org
+
+Back to the main site
+
+
+
'The Courage To Be Disliked' book notes @@ -908,6 +1629,7 @@ p hash.values_at(:a, :c) 2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments (You are currently reading this)
+2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability

f3s logo

@@ -1942,10 +2664,13 @@ replicaset.apps/miniflux-server-85d7c64664 1 1 1 54d
  • syncthing — two-volume setup for config and shared data, fronted by the syncthing.f3s.foo.zone ingress.
  • wallabag — read-it-later service with persistent data and images directories on the NFS export.

  • -I hope you enjoyed this walkthrough. In the next part of this series, I will likely tackle monitoring, backup, or observability. I haven't fully decided yet which topic to cover next, so stay tuned!
    +I hope you enjoyed this walkthrough. Read the next post of this series:
    +
    +f3s: Kubernetes with FreeBSD - Part 8: Observability

    Other *BSD-related posts:

    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments (You are currently reading this)
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    @@ -3251,6 +3976,7 @@ content = "{CODE}" 2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage (You are currently reading this)
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability

    f3s logo

    @@ -5268,13 +5994,13 @@ Jul 06 10:
    Both technologies could run on top of our encrypted ZFS volumes, combining ZFS's data integrity and encryption features with distributed storage capabilities. This would be particularly interesting for workloads that need either S3-compatible APIs (MinIO) or transparent distributed POSIX storage (MooseFS). What about Ceph and GlusterFS? Unfortunately, there doesn't seem to be great native FreeBSD support for them. However, other alternatives also appear suitable for my use case.

    -
    Read the next post of this series:

    f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments

    Other *BSD-related posts:

    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage (You are currently reading this)
    2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    @@ -6340,6 +7066,7 @@ Jul 06 10:2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability

    f3s logo

    @@ -7320,6 +8047,7 @@ peer: 2htXdNcxzpI2FdPDJy4T4VGtm1wpMEQu1AkQHjNY6F8=
    Other *BSD-related posts:

    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network (You are currently reading this)
    @@ -7904,6 +8632,7 @@ __ejm\___/________dwb`---`______________________ 2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability

    f3s logo

    @@ -8480,6 +9209,7 @@ Apr 4 23: Other *BSD-related posts:

    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    @@ -9207,6 +9937,7 @@ This is perl, v5.8.8 built 0.freq: 2922
    Other *BSD-related posts:

    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    @@ -10614,6 +11348,7 @@ dev.cpu.0.freq: 2922 2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability

    f3s logo

    @@ -10765,6 +11500,7 @@ dev.cpu.0.freq: 2922
    Other *BSD-related posts:

    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    @@ -13271,6 +14007,7 @@ http://www.gnu.org/software/src-highlite -->
    Other *BSD and KISS related posts are:

    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    @@ -13640,6 +14377,7 @@ $ doas reboot # Just in case, reboot one more time Other *BSD related posts are:

    +2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability
    2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
    2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
    2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
    @@ -14377,315 +15115,6 @@ echo baz 2023-04-01 "Never split the difference" book notes
    2023-03-16 "The Pragmatic Programmer" book notes

    -Back to the main site
    - - -
    - - KISS static web photo albums with `photoalbum.sh` - - https://foo.zone/gemfeed/2023-10-29-kiss-static-web-photo-albums-with-photoalbum.sh.html - 2023-10-29T22:25:04+02:00 - - Paul Buetow aka snonux - paul@dev.buetow.org - - Once in a while, I share photos on the inter-web with either family and friends or on my The Irregular Ninja photo site. One hobby of mine is photography (even though I don't have enough time for it - so I am primarily a point-and-shoot photographer). - -
    -

    KISS static web photo albums with photoalbum.sh


    -
    -Published at 2023-10-29T22:25:04+02:00
    -
    -Once in a while, I share photos on the inter-web with either family and friends or on my The Irregular Ninja photo site. One hobby of mine is photography (even though I don't have enough time for it - so I am primarily a point-and-shoot photographer).
    -
    -I'm not particularly eager to use any photo social sharing platforms such as Flickr, 500px (I used them regularly in the past), etc., anymore. I value self-hosting, DIY and privacy (nobody should data mine my photos), and no third party should have any rights to my pictures.
    -
    -I value KISS (keep it simple and stupid) and simplicity. All that's required for a web photo album is some simple HTML and spice it up with CSS. No need for JavaScript, no need for a complex dynamic website.
    -
    -
    -         ___        .---------.._
    -  ______!fsc!_....-' .g8888888p. '-------....._
    -.'          //     .g8:       :8p..---....___ \'.
    -| foo.zone //  ()  d88:       :88b|==========! !|
    -|         //       888:       :888|==========| !|
    -|___      \\_______'T88888888888P''----------'//|   
    -|   \       """"""""""""""""""""""""""""""""""/ |   
    -|    !...._____      .="""=.   .[]    ____...!  |   
    -|   /               ! .g$p. !   .[]          :  |   
    -|  !               :  $$$$$  :  .[]          :  |   
    -|  !irregular.ninja ! 'T$P' !   .[]           '.|   
    -|   \__              "=._.="   .()        __    |   
    -|.--'  '----._______________________.----'  '--.|
    -'._____________________________________________.'   
    -
    -
    -

    Table of Contents


    -
    -
    -

    Introducing photoalbum.sh


    -
    -photoalbum.sh is a minimal Bash (Bourne Again Shell) script for Unix-like operating systems (such as Linux) to generate static web photo albums. The resulting static photo album is pure HTML+CSS (without any JavaScript!). It is specially designed to be as simple as possible.
    -
    -

    Installation


    -
    -Installation is straightforward. All required is a recent version of GNU Bash, GNU Make, Git and ImageMagick. On Fedora, the dependencies are installed with:
    -
    -
    -% sudo dnf install -y ImageMagick make git
    -
    -
    -Now, clone, make and install the script:
    -
    -
    -% git clone https://codeberg.org/snonux/photoalbum
    -Cloning into 'photoalbum'...
    -remote: Enumerating objects: 1624, done.
    -remote: Total 1624 (delta 0), reused 0 (delta 0), pack-reused 1624
    -Receiving objects: 100% (1624/1624), 193.36 KiB | 1.49 MiB/s, done.
    -Resolving deltas: 100% (1227/1227), done.
    -
    -% cd photoalbum
    -/home/paul/photoalbum
    -
    -% make
    -cut -d' ' -f2 changelog | head -n 1 | sed 's/(//;s/)//' > .version
    -test ! -d ./bin && mkdir ./bin || exit 0
    -sed "s/PHOTOALBUMVERSION/$(cat .version)/" src/photoalbum.sh > ./bin/photoalbum
    -chmod 0755 ./bin/photoalbum
    -
    -% sudo make install
    -test ! -d /usr/bin && mkdir -p /usr/bin || exit 0
    -cp ./bin/* /usr/bin
    -test ! -d /usr/share/photoalbum/templates && mkdir -p /usr/share/photoalbum/templates || exit 0
    -cp -R ./share/templates /usr/share/photoalbum/
    -test ! -d /etc/default && mkdir -p /etc/default || exit 0
    -cp ./src/photoalbum.default.conf /etc/default/photoalbum
    -
    -
    -You should now have the photoalbum command in your $PATH. But wait to use it! First, it needs to be set up!
    -
    -
    -% photoalbum version
    -This is Photoalbum Version 0.5.1
    -
    -
    -

    Setting it up


    -
    -Now, it's time to set up the Irregular Ninja static web photo album (or any other web photo album you may be setting up!)! Create a directory (here: irregular.ninja for the Irregular Ninja Photo site - or any oter sub-directory reflecting your album's name), and inside of that directory, create an incoming directory. The incoming directory. Copy all photos to be part of the album there.
    -
    -
    -% mkdir irregular.ninja
    -% cd irregular.ninja
    -% # cp -Rpv ~/Photos/your-photos ./incoming
    -
    -
    -In this example, I am skipping the cp ... part as I intend to use an alternative incoming directory, as you will see later in the configuration file.
    -
    -The general usage of potoalbum is as follows:
    -
    -
    -photoalbum clean|generate|version [rcfile] photoalbum
    -photoalbum makemake
    -
    -
    -Whereas:
    -
    -
      -
    • clean: Cleans up the workspace
    • -
    • generate: Generates the static photo album
    • -
    • version: Prints out the version
    • -
    • makemake: Creates a Makefile and photoalbumrc in the current working directory.
    • -

    -So what we will do next is to run the following inside of the irregular.ninja/ directory; it will generate a Makefile and a configuration file photoalbumrc containing a few configurable options:
    -
    - -
    % photoalbum makemake
    -You may now customize ./photoalbumrc and run make
    -
    -% cat Makefile
    -all:
    -	photoalbum generate photoalbumrc
    -clean:
    -	photoalbum clean photoalbumrc
    -
    -% cat photoalbumrc
    -# The title of the photoalbum
    -TITLE='A simple Photoalbum'
    -
    -# Thumbnail height geometry
    -THUMBHEIGHT=300
    -# Normal geometry height (when viewing photo). Uncomment, to keep original size.
    -HEIGHT=1200
    -# Max previews per page.
    -MAXPREVIEWS=40
    -# Randomly shuffle all previews.
    -# SHUFFLE=yes
    -
    -# Diverse directories, need to be full paths, not relative!
    -INCOMING_DIR=$(pwd)/incoming
    -DIST_DIR=$(pwd)/dist
    -TEMPLATE_DIR=/usr/share/photoalbum/templates/default
    -#TEMPLATE_DIR=/usr/share/photoalbum/templates/minimal
    -
    -# Includes a .tar of the incoming dir in the dist, can be yes or no
    -TARBALL_INCLUDE=yes
    -TARBALL_SUFFIX=.tar
    -TAR_OPTS='-c'
    -
    -# Some debugging options
    -#set -e
    -#set -x
    -
    -
    -In the case for irregular.ninja, I changed the defaults to the following:
    -
    - -
    --- photoalbumrc        2023-10-29 21:42:00.894202045 +0200
    -+++ photoalbumrc.new 2023-06-04 10:40:08.030994440 +0300
    -@@ -1,23 +1,24 @@
    - # The title of the photoalbum
    --TITLE='A simple Photoalbum'
    -+TITLE='Irregular.Ninja'
    -
    - # Thumbnail height geometry
    --THUMBHEIGHT=300
    -+THUMBHEIGHT=400
    - # Normal geometry height (when viewing photo). Uncomment, to keep original size.
    --HEIGHT=1200
    -+HEIGHT=1800
    - # Max previews per page.
    - MAXPREVIEWS=40
    --# Randomly shuffle all previews.
    --# SHUFFLE=yes
    -+# Randomly shuffle
    -+SHUFFLE=yes
    -
    - # Diverse directories, need to be full paths, not relative!
    --INCOMING_DIR=$(pwd)/incoming
    -+INCOMING_DIR=~/Nextcloud/Photos/irregular.ninja
    - DIST_DIR=$(pwd)/dist
    - TEMPLATE_DIR=/usr/share/photoalbum/templates/default
    - #TEMPLATE_DIR=/usr/share/photoalbum/templates/minimal
    -
    - # Includes a .tar of the incoming dir in the dist, can be yes or no
    --TARBALL_INCLUDE=yes
    -+TARBALL_INCLUDE=no
    - TARBALL_SUFFIX=.tar
    - TAR_OPTS='-c'
    -
    -
    -So I changed the album title, adjusted some image and thumbnail dimensions, and I want all images to be randomly shuffled every time the album is generated! I also have all my photos in my Nextcloud Photo directory and don't want to copy them to the local incoming directory. Also, a tarball containing the whole album as a download isn't provided.
    -
    -

    Generating the static photo album


    -
    -Let's generate it. Depending on the image sizes and count, the following step may take a while.
    -
    -
    -% make
    -photoalbum generate photoalbumrc
    -Processing 1055079_cool-water-wallpapers-hd-hd-desktop-wal.jpg to /home/paul/irregular.ninja/dist/photos/1055079_cool-water-wallpapers-hd-hd-desktop-wal.jpg
    -Processing 11271242324.jpg to /home/paul/irregular.ninja/dist/photos/11271242324.jpg
    -Processing 11271306683.jpg to /home/paul/irregular.ninja/dist/photos/11271306683.jpg
    -Processing 13950707932.jpg to /home/paul/irregular.ninja/dist/photos/13950707932.jpg
    -Processing 14077406487.jpg to /home/paul/irregular.ninja/dist/photos/14077406487.jpg
    -Processing 14859380100.jpg to /home/paul/irregular.ninja/dist/photos/14859380100.jpg
    -Processing 14869239578.jpg to /home/paul/irregular.ninja/dist/photos/14869239578.jpg
    -Processing 14879132910.jpg to /home/paul/irregular.ninja/dist/photos/14879132910.jpg
    -.
    -.
    -.
    -Generating /home/paul/irregular.ninja/dist/html/7-4.html
    -Creating thumb /home/paul/irregular.ninja/dist/thumbs/20211130_091051.jpg
    -Creating blur /home/paul/irregular.ninja/dist/blurs/20211130_091051.jpg
    -Generating /home/paul/irregular.ninja/dist/html/page-7.html
    -Generating /home/paul/irregular.ninja/dist/html/7-5.html
    -Generating /home/paul/irregular.ninja/dist/html/7-5.html
    -Generating /home/paul/irregular.ninja/dist/html/7-5.html
    -Creating thumb /home/paul/irregular.ninja/dist/thumbs/DSCF0188.JPG
    -Creating blur /home/paul/irregular.ninja/dist/blurs/DSCF0188.JPG
    -Generating /home/paul/irregular.ninja/dist/html/page-7.html
    -Generating /home/paul/irregular.ninja/dist/html/7-6.html
    -Generating /home/paul/irregular.ninja/dist/html/7-6.html
    -Generating /home/paul/irregular.ninja/dist/html/7-6.html
    -Creating thumb /home/paul/irregular.ninja/dist/thumbs/P3500897-01.jpg
    -Creating blur /home/paul/irregular.ninja/dist/blurs/P3500897-01.jpg
    -.
    -.
    -.
    -Generating /home/paul/irregular.ninja/dist/html/8-0.html
    -Generating /home/paul/irregular.ninja/dist/html/8-41.html
    -Generating /home/paul/irregular.ninja/dist/html/9-0.html
    -Generating /home/paul/irregular.ninja/dist/html/9-41.html
    -Generating /home/paul/irregular.ninja/dist/html/index.html
    -Generating /home/paul/irregular.ninja/dist/.//index.html
    -
    -
    -The result will be in the distribution directory ./dist. This directory is publishable to the inter-web:
    -
    -
    -% ls ./dist
    -blurs  html  index.html  photos  thumbs
    -
    -
    -I usually do that via rsync to my web server (I use OpenBSD with the standard httpd web server, btw.), which is as simple as:
    -
    -
    -% rsync --delete -av ./dist/. admin@blowfish.buetow.org:/var/www/htdocs/irregular.ninja/
    -
    -
    -Have a look at the end result here:
    -
    -https://irregular.ninja
    -
    -PS: There's also a server-side synchronisation script mirroring the same content to another server for high availability reasons (out of scope for this blog post).
    -
    -

    Cleaning it up


    -
    -A simple make clean will clean up the ./dist directory and all other (if any) temp files created.
    -
    -

    HTML templates


    -
    -Poke around in this source directory. You will find a bunch of Bash-HTML template files. You could tweak them to your liking.
    -
    -

    Conclusion


    -
    -A decent looking (in my opinion, at least) in less than 500 (273 as of this writing, to be precise) lines of Bash code and with minimal dependencies; what more do you want? How many LOCs would this be in Raku with the same functionality (can it be sub-100?).
    -
    -Also, I like the CSS effects which I recently added. In particular, for the Irregular Ninja site, I randomly shuffled the CSS effects you see. The background blur images are the same but rotated 180 degrees and blurred out.
    -
    -photoalbum.sh source code on Codeberg.
    -
    -E-Mail your comments to paul@nospam.buetow.org :-)
    -
    -Other Bash and KISS-related posts are:
    -
    -2025-09-14 Bash Golf Part 4
    -2024-04-01 KISS high-availability with OpenBSD
    -2023-12-10 Bash Golf Part 3
    -2023-10-29 KISS static web photo albums with photoalbum.sh (You are currently reading this)
    -2023-06-01 KISS server monitoring with Gogios
    -2022-01-01 Bash Golf Part 2
    -2021-11-29 Bash Golf Part 1
    -2021-09-12 Keep it simple and stupid
    -2021-06-05 Gemtexter - One Bash script to rule it all
    -2021-05-16 Personal Bash coding style guide
    -
    Back to the main site
    -- cgit v1.2.3