| author | Paul Buetow <paul@buetow.org> | 2026-01-27 09:58:04 +0200 |
|---|---|---|
| committer | Paul Buetow <paul@buetow.org> | 2026-01-27 09:58:04 +0200 |
| commit | 67fd9e7b66b3292197a92737af15308999c1afb8 (patch) | |
| tree | 65a2b9ad91407f7a696efd7075596df6c0bfd674 /gemfeed/atom.xml | |
| parent | ae00cae4798535519de0d9c5d04537a706c3d3d8 (diff) | |
Update content for html
Diffstat (limited to 'gemfeed/atom.xml')
| -rw-r--r-- | gemfeed/atom.xml | 48 |
1 file changed, 27 insertions, 21 deletions
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml index bf41e311..eedbf0d9 100644 --- a/gemfeed/atom.xml +++ b/gemfeed/atom.xml @@ -1,6 +1,6 @@ <?xml version="1.0" encoding="utf-8"?> <feed xmlns="http://www.w3.org/2005/Atom"> - <updated>2026-01-27T09:50:13+02:00</updated> + <updated>2026-01-27T09:57:01+02:00</updated> <title>foo.zone feed</title> <subtitle>To be in the .zone!</subtitle> <link href="https://foo.zone/gemfeed/atom.xml" rel="self" /> @@ -6480,6 +6480,9 @@ content = "{CODE}" <li><a href='#f3s-kubernetes-with-freebsd---part-6-storage'>f3s: Kubernetes with FreeBSD - Part 6: Storage</a></li> <li>⇢ <a href='#introduction'>Introduction</a></li> <li>⇢ <a href='#additional-storage-capacity'>Additional storage capacity</a></li> +<li>⇢ <a href='#update-upgrade-to-4tb-drives'>Update: Upgrade to 4TB drives</a></li> +<li>⇢ ⇢ <a href='#upgrading-f1-simpler-approach'>Upgrading f1 (simpler approach)</a></li> +<li>⇢ ⇢ <a href='#upgrading-f0-using-zfs-resilvering'>Upgrading f0 (using ZFS resilvering)</a></li> <li>⇢ <a href='#zfs-encryption-keys'>ZFS encryption keys</a></li> <li>⇢ ⇢ <a href='#ufs-on-usb-keys'>UFS on USB keys</a></li> <li>⇢ ⇢ <a href='#generating-encryption-keys'>Generating encryption keys</a></li> @@ -6585,31 +6588,33 @@ http://www.gnu.org/software/src-highlite --> <CT1000BX500SSD1 M6CR072> at scbus1 target <font color="#000000">0</font> lun <font color="#000000">0</font> (pass1,ada1) </pre> <br /> -<span class='quote'>Update: 27.01.2026</span><br /> +<h2 style='display: inline' id='update-upgrade-to-4tb-drives'>Update: Upgrade to 4TB drives</h2><br /> <br /> -<span>I have since replaced the 1TB drives with 4TB drives for more storage capacity. The upgrade procedure was different for each node:</span><br /> +<span class='quote'>Update: 27.01.2026. I have since replaced the 1TB drives with 4TB drives for more storage capacity. The upgrade procedure was different for each node:</span><br /> <br /> -<span>**Upgrading f1 (simpler approach):**</span><br /> +<h3 style='display: inline' id='upgrading-f1-simpler-approach'>Upgrading f1 (simpler approach)</h3><br /> <br /> <span>Since f1 is the replication sink, the upgrade was straightforward:</span><br /> <br /> -<span>1. Physically replaced the 1TB drive with the 4TB drive</span><br /> -<span>2. Re-setup the drive as described earlier in this blog post</span><br /> -<span>3. Re-replicated all data from f0 to f1 via zrepl</span><br /> -<span>4. Reloaded the encryption keys as described in this blog post</span><br /> -<span>5. Set the mount point again for the encrypted dataset, explicitly as read-only (since f1 is the replication sink)</span><br /> -<br /> -<span>**Upgrading f0 (using ZFS resilvering):**</span><br /> +<ul> +<li>1. Physically replaced the 1TB drive with the 4TB drive</li> +<li>2. Set up the drive again as described earlier in this blog post</li> +<li>3. Re-replicated all data from f0 to f1 via zrepl</li> +<li>4. Reloaded the encryption keys as described in this blog post</li> +<li>5. Set the mount point again for the encrypted dataset, explicitly as read-only (since f1 is the replication sink)</li> +</ul><br /> +<h3 style='display: inline' id='upgrading-f0-using-zfs-resilvering'>Upgrading f0 (using ZFS resilvering)</h3><br /> <br /> <span>For f0, which is the primary storage node, I used ZFS resilvering to avoid data loss:</span><br /> <br /> -<span>1. Plugged the new 4TB drive into an external USB SSD drive reader</span><br /> -<span>2. Attached the 4TB drive to the zdata pool for resilvering</span><br /> -<span>3. Once resilvering completed, detached the 1TB drive from the zdata pool</span><br /> -<span>4. Shutdown f0 and physically replaced the internal drive</span><br /> -<span>5. Booted with the new drive in place</span><br /> -<span>6. Expanded the pool to use the full 4TB capacity:</span><br /> -<br /> +<ul> +<li>1. 
Plugged the new 4TB drive into an external USB SSD drive reader</li> +<li>2. Attached the 4TB drive to the zdata pool for resilvering</li> +<li>3. Once resilvering completed, detached the 1TB drive from the zdata pool</li> +<li>4. Shut down f0 and physically replaced the internal drive</li> +<li>5. Booted with the new drive in place</li> +<li>6. Expanded the pool to use the full 4TB capacity:</li> +</ul><br /> <!-- Generator: GNU source-highlight 3.1.9 by Lorenzo Bettini http://www.lorenzobettini.it @@ -6617,9 +6622,10 @@ http://www.gnu.org/software/src-highlite --> <pre>paul@f0:~ % doas zpool online -e zdata /dev/ada<font color="#000000">1</font> </pre> <br /> -<span>7. Reloaded the encryption keys as described in this blog post</span><br /> -<span>8. Set the mount point again for the encrypted dataset</span><br /> -<br /> +<ul> +<li>7. Reloaded the encryption keys as described in this blog post</li> +<li>8. Set the mount point again for the encrypted dataset</li> +</ul><br /> <span>This was a one-time effort on both nodes; after a reboot, everything was remembered and came up normally. Here are the updated outputs:</span><br /> <br /> <!-- Generator: GNU source-highlight 3.1.9
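The f0 procedure described in the diff above maps onto a short sequence of `zpool`/`zfs` commands. The following is a sketch, not a transcript of the author's session: the `zdata` pool name comes from the post, but the device names (`ada1` for the internal drive, `da0` for the 4TB drive in the USB reader) and the `zdata/encrypted` dataset name are illustrative assumptions.

```shell
# Sketch of the f0 upgrade steps, assuming the internal 1TB drive is
# ada1 and the new 4TB drive shows up as da0 via the USB reader.

# Step 2: attach the new drive to the zdata pool. This turns the
# single-disk vdev into a mirror and starts a resilver.
doas zpool attach zdata /dev/ada1 /dev/da0

# Watch the resilver progress until it reports completion.
zpool status zdata

# Step 3: once resilvered, detach the old 1TB drive from the mirror.
doas zpool detach zdata /dev/ada1

# Steps 4-5: shut down, swap the drives physically, boot again.

# Step 6: expand the pool to the full 4TB capacity (the new drive now
# sits internally as ada1).
doas zpool online -e zdata /dev/ada1

# Steps 7-8: reload the encryption key and remount the encrypted
# dataset (the dataset name zdata/encrypted is an assumption).
doas zfs load-key zdata/encrypted
doas zfs mount zdata/encrypted
```

Attaching over USB first means the pool never drops to zero redundancy during the swap, which is the point of the resilvering approach over the simpler f1 procedure.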
