Diffstat (limited to 'gemfeed/atom.xml')
-rw-r--r--  gemfeed/atom.xml  142
1 file changed, 71 insertions(+), 71 deletions(-)
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index eedbf0d9..5a7541e1 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2026-01-27T09:57:01+02:00</updated>
+ <updated>2026-01-27T10:09:14+02:00</updated>
<title>foo.zone feed</title>
<subtitle>To be in the .zone!</subtitle>
<link href="https://foo.zone/gemfeed/atom.xml" rel="self" />
@@ -6480,9 +6480,6 @@ content = "{CODE}"
<li><a href='#f3s-kubernetes-with-freebsd---part-6-storage'>f3s: Kubernetes with FreeBSD - Part 6: Storage</a></li>
<li>⇢ <a href='#introduction'>Introduction</a></li>
<li>⇢ <a href='#additional-storage-capacity'>Additional storage capacity</a></li>
-<li>⇢ <a href='#update-upgrade-to-4tb-drives'>Update: Upgrade to 4TB drives</a></li>
-<li>⇢ ⇢ <a href='#upgrading-f1-simpler-approach'>Upgrading f1 (simpler approach)</a></li>
-<li>⇢ ⇢ <a href='#upgrading-f0-using-zfs-resilvering'>Upgrading f0 (using ZFS resilvering)</a></li>
<li>⇢ <a href='#zfs-encryption-keys'>ZFS encryption keys</a></li>
<li>⇢ ⇢ <a href='#ufs-on-usb-keys'>UFS on USB keys</a></li>
<li>⇢ ⇢ <a href='#generating-encryption-keys'>Generating encryption keys</a></li>
@@ -6525,6 +6522,9 @@ content = "{CODE}"
<li>⇢ ⇢ <a href='#testing-nfs-mount-with-stunnel'>Testing NFS Mount with Stunnel</a></li>
<li>⇢ ⇢ <a href='#testing-carp-failover-with-mounted-clients-and-stale-file-handles'>Testing CARP Failover with mounted clients and stale file handles:</a></li>
<li>⇢ ⇢ <a href='#complete-failover-test'>Complete Failover Test</a></li>
+<li>⇢ <a href='#update-upgrade-to-4tb-drives'>Update: Upgrade to 4TB drives</a></li>
+<li>⇢ ⇢ <a href='#upgrading-f1-simpler-approach'>Upgrading f1 (simpler approach)</a></li>
+<li>⇢ ⇢ <a href='#upgrading-f0-using-zfs-resilvering'>Upgrading f0 (using ZFS resilvering)</a></li>
<li>⇢ <a href='#conclusion'>Conclusion</a></li>
<li>⇢ <a href='#future-storage-explorations'>Future Storage Explorations</a></li>
<li>⇢ ⇢ <a href='#minio-for-s3-compatible-object-storage'>MinIO for S3-Compatible Object Storage</a></li>
@@ -6588,73 +6588,6 @@ http://www.gnu.org/software/src-highlite -->
&lt;CT1000BX500SSD1 M6CR072&gt; at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1)
</pre>
<br />
-<h2 style='display: inline' id='update-upgrade-to-4tb-drives'>Update: Upgrade to 4TB drives</h2><br />
-<br />
-<span class='quote'>Update: 27.01.2026 I have since replaced the 1TB drives with 4TB drives for more storage capacity. The upgrade procedure was different for each node:</span><br />
-<br />
-<h3 style='display: inline' id='upgrading-f1-simpler-approach'>Upgrading f1 (simpler approach)</h3><br />
-<br />
-<span>Since f1 is the replication sink, the upgrade was straightforward:</span><br />
-<br />
-<ul>
-<li>1. Physically replaced the 1TB drive with the 4TB drive</li>
-<li>2. Re-setup the drive as described earlier in this blog post</li>
-<li>3. Re-replicated all data from f0 to f1 via zrepl</li>
-<li>4. Reloaded the encryption keys as described in this blog post</li>
-<li>5. Set the mount point again for the encrypted dataset, explicitly as read-only (since f1 is the replication sink)</li>
-</ul><br />
-<h3 style='display: inline' id='upgrading-f0-using-zfs-resilvering'>Upgrading f0 (using ZFS resilvering)</h3><br />
-<br />
-<span>For f0, which is the primary storage node, I used ZFS resilvering to avoid data loss:</span><br />
-<br />
-<ul>
-<li>1. Plugged the new 4TB drive into an external USB SSD drive reader</li>
-<li>2. Attached the 4TB drive to the zdata pool for resilvering</li>
-<li>3. Once resilvering completed, detached the 1TB drive from the zdata pool</li>
-<li>4. Shutdown f0 and physically replaced the internal drive</li>
-<li>5. Booted with the new drive in place</li>
-<li>6. Expanded the pool to use the full 4TB capacity:</li>
-</ul><br />
-<!-- Generator: GNU source-highlight 3.1.9
-by Lorenzo Bettini
-http://www.lorenzobettini.it
-http://www.gnu.org/software/src-highlite -->
-<pre>paul@f0:~ % doas zpool online -e /dev/ada<font color="#000000">1</font>
-</pre>
-<br />
-<ul>
-<li>7. Reloaded the encryption keys as described in this blog post</li>
-<li>8. Set the mount point again for the encrypted dataset</li>
-</ul><br />
-<span>This was a one-time effort on both nodes - after a reboot, everything was remembered and came up normally. Here are the updated outputs:</span><br />
-<br />
-<!-- Generator: GNU source-highlight 3.1.9
-by Lorenzo Bettini
-http://www.lorenzobettini.it
-http://www.gnu.org/software/src-highlite -->
-<pre>paul@f0:~ % doas zpool list
-NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
-zdata <font color="#000000">3</font>.63T 677G <font color="#000000">2</font>.97T - - <font color="#000000">3</font>% <font color="#000000">18</font>% <font color="#000000">1</font>.00x ONLINE -
-zroot 472G <font color="#000000">68</font>.4G 404G - - <font color="#000000">13</font>% <font color="#000000">14</font>% <font color="#000000">1</font>.00x ONLINE -
-
-paul@f0:~ % doas camcontrol devlist
-&lt;512GB SSD D910R170&gt; at scbus0 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass0,ada0)
-&lt;SD Ultra 3D 4TB 530500WD&gt; at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1)
-&lt;Generic Flash Disk <font color="#000000">8.07</font>&gt; at scbus2 target <font color="#000000">0</font> lun <font color="#000000">0</font> (da0,pass2)
-</pre>
-<br />
-<span>We&#39;re still using different SSD models on f1 (WD Blue SA510 4TB) to avoid simultaneous failures:</span><br />
-<br />
-<!-- Generator: GNU source-highlight 3.1.9
-by Lorenzo Bettini
-http://www.lorenzobettini.it
-http://www.gnu.org/software/src-highlite -->
-<pre>paul@f1:~ % doas camcontrol devlist
-&lt;512GB SSD D910R170&gt; at scbus0 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass0,ada0)
-&lt;WD Blue SA510 <font color="#000000">2.5</font> 4TB 530500WD&gt; at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1)
-&lt;Generic Flash Disk <font color="#000000">8.07</font>&gt; at scbus2 target <font color="#000000">0</font> lun <font color="#000000">0</font> (da0,pass2)
-</pre>
-<br />
<h2 style='display: inline' id='zfs-encryption-keys'>ZFS encryption keys</h2><br />
<br />
<span>ZFS native encryption requires encryption keys to unlock datasets. We need a secure method to store these keys that balances security with operational needs:</span><br />
@@ -8543,6 +8476,73 @@ Jul <font color="#000000">06</font> <font color="#000000">10</font>:<font color=
<li>Applications should handle brief NFS errors gracefully</li>
<li>For zero-downtime requirements, consider synchronous replication or distributed storage (see "Future storage explorations" section later in this blog post)</li>
</ul><br />
+<h2 style='display: inline' id='update-upgrade-to-4tb-drives'>Update: Upgrade to 4TB drives</h2><br />
+<br />
+<span class='quote'>Update 27.01.2026: I have since replaced the 1TB drives with 4TB drives for more storage capacity. The upgrade procedure differed between the two nodes:</span><br />
+<br />
+<h3 style='display: inline' id='upgrading-f1-simpler-approach'>Upgrading f1 (simpler approach)</h3><br />
+<br />
+<span>Since f1 is the replication sink, the upgrade was straightforward:</span><br />
+<br />
+<ul>
+<li>1. Physically replaced the 1TB drive with the 4TB drive</li>
+<li>2. Set the drive up again as described earlier in this blog post</li>
+<li>3. Re-replicated all data from f0 to f1 via zrepl</li>
+<li>4. Reloaded the encryption keys as described in this blog post</li>
+<li>5. Set the mount point again for the encrypted dataset, explicitly as read-only (since f1 is the replication sink)</li>
+</ul><br />
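A minimal sketch of the f1 steps as shell commands. The pool name zdata comes from the outputs shown in this post; the encrypted dataset name (zdata/enc), the key file path, and the mount point are assumptions standing in for the names defined earlier in the post:

```shell
# Assumptions: pool "zdata"; encrypted dataset "zdata/enc", key file
# /mnt/keys/zdata.key and mount point /data are hypothetical names --
# substitute the ones used earlier in this post.

# 2. Re-create the sink pool on the new 4TB drive (destroys everything on ada1!)
doas zpool create zdata /dev/ada1

# 3. zrepl re-replicates all data from f0 once the sink pool exists again.

# 4. Reload the encryption key for the replicated dataset
doas zfs load-key -L file:///mnt/keys/zdata.key zdata/enc

# 5. Set the mount point again, explicitly read-only (f1 is only a sink)
doas zfs set readonly=on zdata/enc
doas zfs set mountpoint=/data zdata/enc
doas zfs mount zdata/enc
```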
+<h3 style='display: inline' id='upgrading-f0-using-zfs-resilvering'>Upgrading f0 (using ZFS resilvering)</h3><br />
+<br />
+<span>For f0, which is the primary storage node, I used ZFS resilvering to avoid data loss:</span><br />
+<br />
+<ul>
+<li>1. Plugged the new 4TB drive into an external USB drive reader</li>
+<li>2. Attached the 4TB drive to the zdata pool for resilvering</li>
+<li>3. Once resilvering completed, detached the 1TB drive from the zdata pool</li>
+<li>4. Shut down f0 and physically replaced the internal drive</li>
+<li>5. Booted with the new drive in place</li>
+<li>6. Expanded the pool to use the full 4TB capacity:</li>
+</ul><br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>paul@f0:~ % doas zpool online -e zdata /dev/ada<font color="#000000">1</font>
+</pre>
+<br />
+<ul>
+<li>7. Reloaded the encryption keys as described in this blog post</li>
+<li>8. Set the mount point again for the encrypted dataset</li>
+</ul><br />
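The attach/resilver/detach sequence on f0 can be sketched as follows. The device names are assumptions based on the outputs in this post: the old internal 1TB drive as ada1, and the USB-attached 4TB drive as da1:

```shell
# Assumptions: pool "zdata", old internal 1TB drive is ada1, new 4TB drive
# shows up as da1 via the USB reader -- check "camcontrol devlist" first.

# 2. Attach the new drive as a mirror of the old one; ZFS starts resilvering
doas zpool attach zdata /dev/ada1 /dev/da1

# Watch progress until zpool status reports the resilver finished with 0 errors
doas zpool status zdata

# 3. Once resilvering completed, drop the old 1TB drive from the mirror
doas zpool detach zdata /dev/ada1

# 4./5. Shut down, swap the drives physically, boot -- the 4TB drive is now ada1.

# 6. Expand the pool to use the full 4TB capacity
doas zpool online -e zdata /dev/ada1
```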
+<span>This was a one-time effort on both nodes; after a reboot, all settings persisted and everything came up normally. Here are the updated outputs:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>paul@f0:~ % doas zpool list
+NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
+zdata <font color="#000000">3</font>.63T 677G <font color="#000000">2</font>.97T - - <font color="#000000">3</font>% <font color="#000000">18</font>% <font color="#000000">1</font>.00x ONLINE -
+zroot 472G <font color="#000000">68</font>.4G 404G - - <font color="#000000">13</font>% <font color="#000000">14</font>% <font color="#000000">1</font>.00x ONLINE -
+
+paul@f0:~ % doas camcontrol devlist
+&lt;512GB SSD D910R170&gt; at scbus0 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass0,ada0)
+&lt;SD Ultra 3D 4TB 530500WD&gt; at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1)
+&lt;Generic Flash Disk <font color="#000000">8.07</font>&gt; at scbus2 target <font color="#000000">0</font> lun <font color="#000000">0</font> (da0,pass2)
+</pre>
+<br />
+<span>We&#39;re still using a different SSD model on f1 (WD Blue SA510 4TB) to reduce the risk of both drives failing at the same time:</span><br />
+<br />
+<!-- Generator: GNU source-highlight 3.1.9
+by Lorenzo Bettini
+http://www.lorenzobettini.it
+http://www.gnu.org/software/src-highlite -->
+<pre>paul@f1:~ % doas camcontrol devlist
+&lt;512GB SSD D910R170&gt; at scbus0 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass0,ada0)
+&lt;WD Blue SA510 <font color="#000000">2.5</font> 4TB 530500WD&gt; at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1)
+&lt;Generic Flash Disk <font color="#000000">8.07</font>&gt; at scbus2 target <font color="#000000">0</font> lun <font color="#000000">0</font> (da0,pass2)
+</pre>
+<br />
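After an upgrade like this, the pool health and dataset state can be sanity-checked on either node; the dataset name zdata/enc is an assumption standing in for the one defined earlier in this post:

```shell
# Pool should be ONLINE with no resilver in progress
doas zpool status zdata

# Key should be available; on f1 the dataset should also be readonly=on
doas zfs get keystatus,readonly,mountpoint zdata/enc
```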
<h2 style='display: inline' id='conclusion'>Conclusion</h2><br />
<br />
<span>We&#39;ve built a robust, encrypted storage system for our FreeBSD-based Kubernetes cluster that provides:</span><br />