From 1f3253cfc352e31ebbc736496ab3a34d80a7a2a7 Mon Sep 17 00:00:00 2001 From: Paul Buetow Date: Tue, 27 Jan 2026 10:10:18 +0200 Subject: Update content for html --- about/resources.html | 206 ++++++++++----------- ...5-07-14-f3s-kubernetes-with-freebsd-part-6.html | 140 +++++++------- gemfeed/atom.xml | 142 +++++++------- index.html | 2 +- uptime-stats.html | 2 +- 5 files changed, 246 insertions(+), 246 deletions(-) diff --git a/about/resources.html b/about/resources.html index 10bf01d5..210a2fff 100644 --- a/about/resources.html +++ b/about/resources.html @@ -50,68 +50,68 @@ In random order:


Technical references



I didn't read them from beginning to end; I use them to look things up. The books are in random order:


Self-development and soft-skills books



@@ -119,43 +119,43 @@

Here are my notes on some of the books

@@ -164,31 +164,31 @@ Some of these were in-person with exams; others were online learning lectures only. In random order:


Technical guides



These are not whole books but guides (smaller or larger) that I found very useful. In random order:


Podcasts



@@ -197,51 +197,51 @@ In random order:


Podcasts I liked



I liked these but no longer listen to them. The podcasts have either "finished" (no more episodes), or I stopped listening due to time constraints or a shift in my interests.


Newsletters I like



This is a mix of tech and non-tech newsletters I am subscribed to. In random order:


Magazines I like(d)



@@ -249,9 +249,9 @@

Formal education



diff --git a/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html b/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html index 44ad6311..7e293951 100644 --- a/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html +++ b/gemfeed/2025-07-14-f3s-kubernetes-with-freebsd-part-6.html @@ -34,9 +34,6 @@
  • f3s: Kubernetes with FreeBSD - Part 6: Storage
  • Introduction
  • Additional storage capacity
  • -
  • Update: Upgrade to 4TB drives
  • -
  • ⇢ ⇢ Upgrading f1 (simpler approach)
  • -
  • ⇢ ⇢ Upgrading f0 (using ZFS resilvering)
  • ZFS encryption keys
  • ⇢ ⇢ UFS on USB keys
  • ⇢ ⇢ Generating encryption keys
  • @@ -79,6 +76,9 @@
  • ⇢ ⇢ Testing NFS Mount with Stunnel
  • ⇢ ⇢ Testing CARP Failover with mounted clients and stale file handles:
  • ⇢ ⇢ Complete Failover Test
  • +
  • Update: Upgrade to 4TB drives
  • +
  • ⇢ ⇢ Upgrading f1 (simpler approach)
  • +
  • ⇢ ⇢ Upgrading f0 (using ZFS resilvering)
  • Conclusion
  • Future Storage Explorations
  • ⇢ ⇢ MinIO for S3-Compatible Object Storage
  • @@ -142,73 +142,6 @@ http://www.gnu.org/software/src-highlite --> <CT1000BX500SSD1 M6CR072> at scbus1 target 0 lun 0 (pass1,ada1)
    -

    Update: Upgrade to 4TB drives


    -
    -Update: 27.01.2026 I have since replaced the 1TB drives with 4TB drives for more storage capacity. The upgrade procedure was different for each node:
    -
    -

    Upgrading f1 (simpler approach)


    -
    -Since f1 is the replication sink, the upgrade was straightforward:
    -
    -
    -

    Upgrading f0 (using ZFS resilvering)


    -
    -For f0, which is the primary storage node, I used ZFS resilvering to avoid data loss:
    -
    -
    - -
    paul@f0:~ % doas zpool online -e /dev/ada1
    -
    -
    -
    -This was a one-time effort on both nodes - after a reboot, everything was remembered and came up normally. Here are the updated outputs:
    -
    - -
    paul@f0:~ % doas zpool list
    -NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    -zdata  3.63T   677G  2.97T        -         -     3%    18%  1.00x    ONLINE  -
    -zroot   472G  68.4G   404G        -         -    13%    14%  1.00x    ONLINE  -
    -
    -paul@f0:~ % doas camcontrol devlist
    -<512GB SSD D910R170>               at scbus0 target 0 lun 0 (pass0,ada0)
    -<SD Ultra 3D 4TB 530500WD>         at scbus1 target 0 lun 0 (pass1,ada1)
    -<Generic Flash Disk 8.07>          at scbus2 target 0 lun 0 (da0,pass2)
    -
    -
    -We're still using different SSD models on f1 (WD Blue SA510 4TB) to avoid simultaneous failures:
    -
    - -
    paul@f1:~ % doas camcontrol devlist
    -<512GB SSD D910R170>               at scbus0 target 0 lun 0 (pass0,ada0)
    -<WD Blue SA510 2.5 4TB 530500WD>   at scbus1 target 0 lun 0 (pass1,ada1)
    -<Generic Flash Disk 8.07>          at scbus2 target 0 lun 0 (da0,pass2)
    -
    -

    ZFS encryption keys



    ZFS native encryption requires encryption keys to unlock datasets. We need a secure method to store these keys that balances security with operational needs:
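As a hedged sketch of what such key handling can look like with a raw key file kept on a USB stick (all paths and dataset names here are illustrative, not necessarily the ones used in this post):

```shell
# Illustrative sketch: a raw 32-byte key stored on a USB stick mounted at
# /keys, used for a natively encrypted dataset. Names are assumptions.
doas dd if=/dev/random of=/keys/zdata-encrypted.key bs=32 count=1
doas zfs create -o encryption=on -o keyformat=raw \
    -o keylocation=file:///keys/zdata-encrypted.key zdata/encrypted

# After a reboot, mount the USB stick, then load the key and mount:
doas zfs load-key zdata/encrypted
doas zfs mount zdata/encrypted
```

Because `keylocation` points at the USB stick, the dataset stays locked until the stick is present and the key is loaded.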
    @@ -2097,6 +2030,73 @@ Jul 06 10:Applications should handle brief NFS errors gracefully
  • For zero-downtime requirements, consider synchronous replication or distributed storage (see "Future storage explorations" section later in this blog post)

  • +

    Update: Upgrade to 4TB drives


    +
    +Update 27.01.2026: I have since replaced the 1TB drives with 4TB drives for more storage capacity. The upgrade procedure was different for each node:
    +
    +

    Upgrading f1 (simpler approach)


    +
    +Since f1 is the replication sink, the upgrade was straightforward:
    +
    +
    +

    Upgrading f0 (using ZFS resilvering)


    +
    +For f0, which is the primary storage node, I used ZFS resilvering to avoid data loss:
    +
    +
    + +
    paul@f0:~ % doas zpool online -e zdata ada1
    +
    +
    +
    +This was a one-time effort on both nodes; after a reboot, everything was remembered and came up normally. Here are the updated outputs:
    +
    + +
    paul@f0:~ % doas zpool list
    +NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    +zdata  3.63T   677G  2.97T        -         -     3%    18%  1.00x    ONLINE  -
    +zroot   472G  68.4G   404G        -         -    13%    14%  1.00x    ONLINE  -
    +
    +paul@f0:~ % doas camcontrol devlist
    +<512GB SSD D910R170>               at scbus0 target 0 lun 0 (pass0,ada0)
    +<SD Ultra 3D 4TB 530500WD>         at scbus1 target 0 lun 0 (pass1,ada1)
    +<Generic Flash Disk 8.07>          at scbus2 target 0 lun 0 (da0,pass2)
    +
    +
    +As before, f1 uses a different SSD model (WD Blue SA510 4TB) than f0, to reduce the risk of simultaneous failures:
    +
    + +
    paul@f1:~ % doas camcontrol devlist
    +<512GB SSD D910R170>               at scbus0 target 0 lun 0 (pass0,ada0)
    +<WD Blue SA510 2.5 4TB 530500WD>   at scbus1 target 0 lun 0 (pass1,ada1)
    +<Generic Flash Disk 8.07>          at scbus2 target 0 lun 0 (da0,pass2)
    +
    +

    Conclusion



    We've built a robust, encrypted storage system for our FreeBSD-based Kubernetes cluster that provides:
    diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml index eedbf0d9..5a7541e1 100644 --- a/gemfeed/atom.xml +++ b/gemfeed/atom.xml @@ -1,6 +1,6 @@ - 2026-01-27T09:57:01+02:00 + 2026-01-27T10:09:14+02:00 foo.zone feed To be in the .zone! @@ -6480,9 +6480,6 @@ content = "{CODE}"
  • f3s: Kubernetes with FreeBSD - Part 6: Storage
  • Introduction
  • Additional storage capacity
  • -
  • Update: Upgrade to 4TB drives
  • -
  • ⇢ ⇢ Upgrading f1 (simpler approach)
  • -
  • ⇢ ⇢ Upgrading f0 (using ZFS resilvering)
  • ZFS encryption keys
  • ⇢ ⇢ UFS on USB keys
  • ⇢ ⇢ Generating encryption keys
  • @@ -6525,6 +6522,9 @@ content = "{CODE}"
  • ⇢ ⇢ Testing NFS Mount with Stunnel
  • ⇢ ⇢ Testing CARP Failover with mounted clients and stale file handles:
  • ⇢ ⇢ Complete Failover Test
  • +
  • Update: Upgrade to 4TB drives
  • +
  • ⇢ ⇢ Upgrading f1 (simpler approach)
  • +
  • ⇢ ⇢ Upgrading f0 (using ZFS resilvering)
  • Conclusion
  • Future Storage Explorations
  • ⇢ ⇢ MinIO for S3-Compatible Object Storage
  • @@ -6588,73 +6588,6 @@ http://www.gnu.org/software/src-highlite --> <CT1000BX500SSD1 M6CR072> at scbus1 target 0 lun 0 (pass1,ada1)
    -

    Update: Upgrade to 4TB drives


    -
    -Update: 27.01.2026 I have since replaced the 1TB drives with 4TB drives for more storage capacity. The upgrade procedure was different for each node:
    -
    -

    Upgrading f1 (simpler approach)


    -
    -Since f1 is the replication sink, the upgrade was straightforward:
    -
    -
      -
    • 1. Physically replaced the 1TB drive with the 4TB drive
    • -
    • 2. Re-setup the drive as described earlier in this blog post
    • -
    • 3. Re-replicated all data from f0 to f1 via zrepl
    • -
    • 4. Reloaded the encryption keys as described in this blog post
    • -
    • 5. Set the mount point again for the encrypted dataset, explicitly as read-only (since f1 is the replication sink)
    • -

    -

    Upgrading f0 (using ZFS resilvering)


    -
    -For f0, which is the primary storage node, I used ZFS resilvering to avoid data loss:
    -
    -
      -
    • 1. Plugged the new 4TB drive into an external USB SSD drive reader
    • -
    • 2. Attached the 4TB drive to the zdata pool for resilvering
    • -
    • 3. Once resilvering completed, detached the 1TB drive from the zdata pool
    • -
    • 4. Shutdown f0 and physically replaced the internal drive
    • -
    • 5. Booted with the new drive in place
    • -
    • 6. Expanded the pool to use the full 4TB capacity:
    • -

    - -
    paul@f0:~ % doas zpool online -e /dev/ada1
    -
    -
    -
      -
    • 7. Reloaded the encryption keys as described in this blog post
    • -
    • 8. Set the mount point again for the encrypted dataset
    • -

    -This was a one-time effort on both nodes - after a reboot, everything was remembered and came up normally. Here are the updated outputs:
    -
    - -
    paul@f0:~ % doas zpool list
    -NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    -zdata  3.63T   677G  2.97T        -         -     3%    18%  1.00x    ONLINE  -
    -zroot   472G  68.4G   404G        -         -    13%    14%  1.00x    ONLINE  -
    -
    -paul@f0:~ % doas camcontrol devlist
    -<512GB SSD D910R170>               at scbus0 target 0 lun 0 (pass0,ada0)
    -<SD Ultra 3D 4TB 530500WD>         at scbus1 target 0 lun 0 (pass1,ada1)
    -<Generic Flash Disk 8.07>          at scbus2 target 0 lun 0 (da0,pass2)
    -
    -
    -We're still using different SSD models on f1 (WD Blue SA510 4TB) to avoid simultaneous failures:
    -
    - -
    paul@f1:~ % doas camcontrol devlist
    -<512GB SSD D910R170>               at scbus0 target 0 lun 0 (pass0,ada0)
    -<WD Blue SA510 2.5 4TB 530500WD>   at scbus1 target 0 lun 0 (pass1,ada1)
    -<Generic Flash Disk 8.07>          at scbus2 target 0 lun 0 (da0,pass2)
    -
    -

    ZFS encryption keys



    ZFS native encryption requires encryption keys to unlock datasets. We need a secure method to store these keys that balances security with operational needs:
    @@ -8543,6 +8476,73 @@ Jul 06 10:Applications should handle brief NFS errors gracefully
  • For zero-downtime requirements, consider synchronous replication or distributed storage (see "Future storage explorations" section later in this blog post)

  • +

    Update: Upgrade to 4TB drives


    +
    +Update 27.01.2026: I have since replaced the 1TB drives with 4TB drives for more storage capacity. The upgrade procedure was different for each node:
    +
    +

    Upgrading f1 (simpler approach)


    +
    +Since f1 is the replication sink, the upgrade was straightforward:
    +
    +
      +
    • 1. Physically replaced the 1TB drive with the 4TB drive
    • +
    • 2. Set up the drive again as described earlier in this blog post
    • +
    • 3. Re-replicated all data from f0 to f1 via zrepl
    • +
    • 4. Reloaded the encryption keys as described in this blog post
    • +
    • 5. Set the mount point again for the encrypted dataset, explicitly as read-only (since f1 is the replication sink)
    • +
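Under the assumption that the pool is named zdata with an encrypted child dataset zdata/encrypted (as elsewhere in this post), steps 2-5 above could look roughly like this; the device name, mountpoint, and zrepl rc script name are illustrative:

```shell
# Hedged sketch of the f1 re-setup; names are assumptions, not a
# verbatim record of the commands used.

# 2. Re-create the pool on the new 4TB drive:
doas zpool create zdata ada1

# 3. Restart zrepl so it re-replicates everything from f0:
doas service zrepl restart

# 4. Reload the encryption key (keylocation points at the USB stick):
doas zfs load-key zdata/encrypted

# 5. Set the mount point again, read-only since f1 is only the sink:
doas zfs set readonly=on zdata/encrypted
doas zfs set mountpoint=/data zdata/encrypted
```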

    +

    Upgrading f0 (using ZFS resilvering)


    +
    +For f0, which is the primary storage node, I used ZFS resilvering to avoid data loss:
    +
    +
      +
    • 1. Plugged the new 4TB drive into an external USB SSD drive reader
    • +
    • 2. Attached the 4TB drive to the zdata pool for resilvering
    • +
    • 3. Once resilvering completed, detached the 1TB drive from the zdata pool
    • +
    • 4. Shut down f0 and physically replaced the internal drive
    • +
    • 5. Booted with the new drive in place
    • +
    • 6. Expanded the pool to use the full 4TB capacity:
    • +

    + +
    paul@f0:~ % doas zpool online -e zdata ada1
    +
    +
    +
      +
    • 7. Reloaded the encryption keys as described in this blog post
    • +
    • 8. Set the mount point again for the encrypted dataset
    • +
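The resilvering steps above can be sketched as follows, assuming the USB-attached 4TB drive appears as da1 and the internal drive as ada1 (device names are illustrative):

```shell
# Hedged sketch of the f0 resilvering procedure; device names are
# assumptions based on the camcontrol output shown in this post.

# 2. Attach the 4TB drive, turning zdata into a mirror and starting
#    the resilver onto the new drive:
doas zpool attach zdata ada1 da1

# Watch the resilver progress until it completes:
doas zpool status zdata

# 3. Detach the old 1TB drive from the pool:
doas zpool detach zdata ada1

# 6. After the physical swap and reboot (the new drive is now ada1),
#    expand the pool to use the full 4TB capacity:
doas zpool online -e zdata ada1
```

Because the pool stays ONLINE as a mirror throughout the resilver, no data is ever on a single drive only during the swap.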

    +This was a one-time effort on both nodes; after a reboot, everything was remembered and came up normally. Here are the updated outputs:
    +
    + +
    paul@f0:~ % doas zpool list
    +NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    +zdata  3.63T   677G  2.97T        -         -     3%    18%  1.00x    ONLINE  -
    +zroot   472G  68.4G   404G        -         -    13%    14%  1.00x    ONLINE  -
    +
    +paul@f0:~ % doas camcontrol devlist
    +<512GB SSD D910R170>               at scbus0 target 0 lun 0 (pass0,ada0)
    +<SD Ultra 3D 4TB 530500WD>         at scbus1 target 0 lun 0 (pass1,ada1)
    +<Generic Flash Disk 8.07>          at scbus2 target 0 lun 0 (da0,pass2)
    +
    +
    +As before, f1 uses a different SSD model (WD Blue SA510 4TB) than f0, to reduce the risk of simultaneous failures:
    +
    + +
    paul@f1:~ % doas camcontrol devlist
    +<512GB SSD D910R170>               at scbus0 target 0 lun 0 (pass0,ada0)
    +<WD Blue SA510 2.5 4TB 530500WD>   at scbus1 target 0 lun 0 (pass1,ada1)
    +<Generic Flash Disk 8.07>          at scbus2 target 0 lun 0 (da0,pass2)
    +
    +

    Conclusion



    We've built a robust, encrypted storage system for our FreeBSD-based Kubernetes cluster that provides:
    diff --git a/index.html b/index.html index d6985922..92246748 100644 --- a/index.html +++ b/index.html @@ -13,7 +13,7 @@

    Hello!



    -This site was generated at 2026-01-27T09:57:01+02:00 by Gemtexter
    +This site was generated at 2026-01-27T10:09:14+02:00 by Gemtexter

    Welcome to the foo.zone!

    diff --git a/uptime-stats.html b/uptime-stats.html index ea873c4d..643bbd65 100644 --- a/uptime-stats.html +++ b/uptime-stats.html @@ -13,7 +13,7 @@

    My machine uptime stats



    -This site was last updated at 2026-01-27T09:57:00+02:00
    +This site was last updated at 2026-01-27T10:09:14+02:00

    The following stats were collected via uptimed on all of my personal computers over many years; the output was generated by guprecords, my global uptime-records stats analyser.

    -- cgit v1.2.3